Predictive knowledge is a powerful tool that has emerged in the 21st century, and it has enabled all kinds of technological advancements.  Machine learning allows predictive algorithms to forecast future trends that companies can use to shape their short-term policies and goals, increasing their profits and helping them succeed in other ways.  But the drawback to predictive knowledge is that the world is a volatile place, and things are constantly in flux.  Predictive algorithms have their limits; they could not have predicted something like the Great Depression, because unforeseeable circumstances and speculation led to it.

Cyber policy and cyber infrastructure should certainly take what we can glean from predictive knowledge into account, but with some leeway.  The future is impossible to be certain about, and these algorithms can only estimate based on the information and trends they are given.  Professionals can use this information to make recommendations on how cyber policy and infrastructure should evolve.  I would say it is important to make plans that are flexible.  In the short term the policy and infrastructure would likely be sound, but a few years down the line, the algorithms can only guess at what will happen.  Plenty of things can change the landscape of cyber technology very quickly, and predictive algorithms will not be able to account for that.

Therefore, the people creating the policy and infrastructure of the cyber world have to be the ones to incorporate some level of fail-safe planning.  If a policy can account for exceptions to its rules, it will be more effective and will not have to be amended as often.  If infrastructure is built to handle adverse circumstances and continue functioning relatively normally, it will not have to be reconstructed as often.
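To make the idea of short-term reliability versus long-term guesswork concrete, here is a minimal sketch of trend forecasting. All of the data and names here are hypothetical (invented yearly incident counts, a `forecast` helper I made up for illustration); the point is simply that a fitted trend extrapolates fine for a step or two, while a reasonable uncertainty band widens the further out you project.

```python
# Illustrative sketch only: fit a straight-line trend to hypothetical yearly
# incident counts, then extrapolate. The widening band shows why far-future
# predictions deserve less trust than near-term ones.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def forecast(xs, ys, horizon):
    """Extrapolate the trend; pair each step with a growing uncertainty band."""
    slope, intercept = linear_fit(xs, ys)
    # Spread of the historical residuals around the fitted line.
    residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    spread = (sum(r * r for r in residuals) / len(residuals)) ** 0.5
    results = []
    for step in range(1, horizon + 1):
        x = xs[-1] + step
        point = slope * x + intercept
        # Crude widening: uncertainty grows with distance from the data.
        band = spread * (1 + step)
        results.append((x, point, point - band, point + band))
    return results

# Hypothetical data: incidents per year, 2018-2023.
years = [2018, 2019, 2020, 2021, 2022, 2023]
incidents = [120, 135, 160, 158, 190, 210]

for year, point, lo, hi in forecast(years, incidents, 5):
    print(f"{year}: ~{point:.0f} (range {lo:.0f}..{hi:.0f})")
```

The exact widening rule is arbitrary here; the takeaway is structural: the point estimate a year out sits in a narrow range, while five years out the range is wide enough that any policy built on the number alone needs the flexibility discussed above.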