Digital twins, virtual replicas of real-world systems powered by sophisticated predictive computing, are gaining popularity in boardrooms, think tanks and governments. Google’s Supply Chain Twin, for example, rose to prominence during the post-Covid global supply chain crisis as a way for logistics partners to experiment with potential solutions to supply chain bottlenecks in a low-cost, low-risk environment.
As more companies and policymakers turn to these simulations to model reality and test business and economic strategies in a virtual world before deploying them in the real one, we should bear in mind that the map is not the territory.
Even the most sophisticated digital model, by definition, leaves some things out, and some of what it leaves out could matter a great deal in the real world. More to the point, every model embeds assumptions made by human beings; assumptions that could be (indeed, probably are) shaped by the biases of the individuals who designed the programme and by the data sets the model has been trained on. Worse still, the assumptions and inputs could be just plain wrong.
This can lead to a false sense of security. Chaos theory tells us that even tiny measurement errors in otherwise accurate inputs (let alone biased or inaccurate data) can, in even the most accurate models, produce results that diverge hugely from reality.
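The point is easy to demonstrate with a toy example. The sketch below (illustrative only, not any particular digital-twin system) iterates the logistic map, a textbook chaotic process, from two starting values that differ by one part in a billion; the trajectories soon bear no resemblance to each other.

```python
# Illustrative sketch: sensitivity to initial conditions in the logistic map,
# a classic toy model of chaotic dynamics (r = 4 is the fully chaotic regime).
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))  # x -> r * x * (1 - x)
    return xs

# Two "measurements" of the same starting state, differing by one part in a billion.
a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-9)

# Gap between the two simulated futures at each step.
divergence = [abs(x - y) for x, y in zip(a, b)]
```

Within a few dozen iterations the gap grows from a billionth to the full range of the variable: a measurement error far smaller than anything a real supply-chain sensor could achieve is enough to make the long-run forecast worthless.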
If boards and executives rely on these sophisticated simulations and accept their forecasts and outputs at face value, they could end up making suboptimal decisions with far more confidence than is warranted.
There is also the reality of self-fulfilling prophecies. In the real world, expectations shape outcomes. If a decision-maker believes (even inaccurately) that a particular investment is risky, they are less likely to invest in it, and by withholding their investment they inadvertently increase the odds of that project failing (or never materialising at all). This means that biased simulation models can end up perpetuating, or worse, exacerbating their built-in biases in the real world.
The scale of the impact of machine-nudged errors is worth considering.
For example, a fallible human manager might make a bad purchasing call that costs a company a competitive advantage on a particular deal. A bad purchasing algorithm, however, could potentially perpetuate overpayment on a particular element in your supply chain for years without anyone noticing the mistake.
The implication is that even the best forecasting tools should be balanced with more divergent foresight, to counteract the false sense of certainty that sophisticated-looking technology can nudge us into.
Interested in finding a balance between tech-driven data insights and real-life foresight? Our “Strategic Games” Foresight Workshop is the best option for you. Using your data and harnessing trends highlighted by our trend experts, you’ll get a better sense of future growth opportunities as well as possible blind spots and black swans.
—
Image credit: Dynamic Wang