Superforecasting will not save us

The value of forecasts lies in how they help us shape the future. We should prioritise understanding what we can do.

A sketch of Romulus receiving an augury from the gods, a means of predicting the future. Credit: Artokoloro / Alamy Stock Photo.

Maybe there isn’t going to be an energy crisis after all. Ross Clark, writing for The Spectator on 25 October, asks: ‘Is Europe’s chilly winter destined to become another Millennium bug — a much-feared disaster that never transpires?’ and concludes: ‘It appears as if some of the grim predictions for the winter of 2022/23 may have been exaggerated.’ Two months earlier, on 23 August, British newspaper headlines predicted that energy bills would rise to £5,000 a year. ‘Spiralling inflation is forecast to hit 18%’ was the headline in The Times. The Financial Times predicted ‘UK inflation projected to top 18% as gas prices surge.’ The Express asked: ‘UK inflation to top 18%: how will millions cope?’ Wholesale gas prices had been surging, leading to predictions of blackouts and rationing. But on 24 October they fell below €100 per MWh for the first time since June. Benchmark futures – the contracts that set the price of natural gas – have fallen more than 70 per cent from their summer peaks.

Does this current state of affairs demonstrate that the predictions were exaggerated? Or does it prove that we often misunderstand the purpose of predictions? False predictions are often held up as examples of incompetence, but in some cases, like the example above, they fulfil their function as a warning. The warning worked in this case because it enabled action to be taken to avert, or at least reduce, the disaster. The real value of any prediction lies in its conditionality and context. The headlines skipped the implicit second half of the sentence, which should have been ‘…unless action is taken to stop it.’ The predictions caused a huge increase in anxiety, but they also made themselves less likely to come true because, intentionally or not, they influenced the way people behaved. European countries responded by rapidly filling their gas storage facilities with LNG: the original target of filling to 80 per cent of total capacity by 1 November was met, and exceeded, far ahead of schedule, and data suggest that storage is now at nearly 95 per cent. Off the coasts of the UK, the Netherlands, Spain, and Portugal, dozens of giant tankers are waiting to unload. At the same time, demand for gas is around seven per cent lower than in recent years, owing, presumably, to three factors: choices by individuals in response to price rises and warnings; decisions by some European governments to reduce consumption; and, so far, unseasonably mild weather.

How have we become so in thrall to predictions? One of the natural responses to the uncertainty and pessimism of the last few years has been a desire to find a means of increasing certainty about the future. It has been ever thus: at a certain point in history, the majority of humankind stopped turning to prophecies or visions from God for answers, and instead started to make predictions — estimates of what was likely to happen next based on an understanding of what had come before, of underlying conditions, and of the likely actions of those involved. The more science progressed, the greater the expectation that it would facilitate those predictions. The greater the uncertainty, the greater the demand for expert assessments of the likely future. Newspapers cover forecasts — on the economy, the likely impact of policies, or the probable actions of a hostile state, for example — as often as they cover actual events. In our desire to reduce the number of shocks we are subject to, and to prepare for, and predict, the future, a forecasting industry has grown. Military experts and commentators observe tactics and predict the outcomes of battles. Environmentalists reach far into the future and forecast catastrophe. In most fields, in fact, catastrophising has become the most popular and marketable form of forecasting, whether on the economy, geopolitics, security or the environment. To get everybody’s attention, you have to predict at least a riot. But this is inherently inflationary and creates enormous anxiety.

Neither predictions nor prophecies should be passive. Tempered forecasts, set within context and conditionality, are more likely to produce a beneficial outcome. There is a risk now that energy and attention are being diverted to ‘horizon-scanning’ and ‘future-forecasting’ or ‘foresight’ because they appear to offer a means of seeing into the future, but too often this exercise has become a disjointed commentary. A new species of ‘superforecasters’ has been hailed as a solution because of their ability to get a greater percentage of predictions right than the average, even while never leaving their front room in Nebraska (‘Bill has free time and he spends some of it forecasting,’ says the introduction to Philip Tetlock and Dan Gardner’s ‘Superforecasting: The Art and Science of Prediction’, which was published in 2015 and spawned a new wave of forecasting experiments). But crowdsourcing predictions results in an abstract articulation of a passive future, and rests on a set of unclear assumptions about our own actions. If these are not made explicit, the experiment is flawed. Even those who have a role in shaping events have started to behave, and to speak about events, as though they are merely watchers. As our real and virtual worlds merge, this becomes, at its worst, a hypnotising voyeurism.

We have become overly focused on answering the question ‘what is going to happen next?’ at the expense of devoting resources to understanding what is happening now and what we need to do to change what happens next. Instead of trying to predict the future, we should be embracing uncertainty and using analysis, monitoring, warning and action to bring about a different version of the future (ideally one we recognise and like). We want the gloomy prediction to be wrong. We should focus on how we shape the future we want, rather than trying to achieve the impossible: forecasting the future.

If we go back to first principles, it is difficult to imagine how a completely accurate prediction about the future could possibly work. Could a statement be made about the future that in itself had no impact on it, and nonetheless turned out to be true? Only if either there were zero possibility of any action affecting what happens next (in which case the forecaster is cast in the mythological role of seer of a pre-destined future), or if one had absolute power over what happened next (Putin could have predicted he would invade Ukraine). Since neither condition normally holds, let us say not.

Most geopolitical events are the product of complex adaptive systems – dynamic networks of interactions in which studying the individual components will not enable a prediction of the effects they may have on each other. Much effort and resource is dedicated to producing algorithms that might reduce uncertainty by modelling risks, and yet somehow we are still surprised when they don’t quite work when it comes to international affairs, or even economics. There is talk that one day quantum computing will be able to solve this problem. But I doubt it. We are still some way off, for example, from modelling when and how a terrorist group will next strike, given that a successful attack depends on the plotters being able to conceive and plan in secret.

The identity and the agenda of the forecaster are important. Forecasts might be produced by people or organisations in a position to do something about the future issue they are analysing, or by observers who have no primary impact on the issue. Even in the latter case, the forecast seldom has zero effect on the issue, although it is not clear whether the forecaster considers this when producing the forecast. If the forecaster is well known, with a strong reputation, then what they say might well change the future if it encourages thinking about the right course of action. They may well have an agenda, but that does not necessarily mean they are wrong. In some cases, such as predictions about climate catastrophe and species extinction, the purpose of the prediction is precisely to galvanise action. Separating a prediction from an agenda can be hard: sometimes a forecast presented in apocalyptic terms has an increased chance of being heard, but sometimes predictions are so outlandish, with timescales so far into the future, that it is easy to disbelieve them. To be really useful, any forecast or prediction should be produced alongside — or even within — the process which considers answers to the question: ‘What should we do about it?’ The ideal prediction would include a set of possible future outcomes setting out a series of micro-predictions: ‘if we do this, x might happen, and if we do that, y might happen.’

An illustration of this lack of clarity might be the unusually apocalyptic economic outlook produced by the Bank of England in its quarterly Monetary Policy Report, published on 4 August 2022. This stated that higher energy prices were expected to push inflation to 13 per cent, against the Bank’s two per cent target, with annual price gains still close to ten per cent in a year’s time. It also forecast a long recession, with no growth expected for almost two years and an overall contraction in gross domestic product of more than two per cent. Unemployment was expected to rise by two-thirds from its rate at the time of 3.8 per cent.

This forecast caused alarm and headlines. The Bank’s predictions continue to be consumed by non-specialists as well as specialists. While the specialists may be able to contextualise them, the general audience needs to know whether there is something that can be done. For the non-economists amongst us, it was not clear whether the Bank was saying: ‘we are powerless to act in the face of an overwhelming force; we can’t see any way to stop this happening’ or: ‘unless we take corrective action, this will happen.’ If the latter, it would have been helpful to accompany the forecast with an appropriate recommendation for corrective action. And if corrective action is taken successfully, the forecast will turn out to be incorrect — inflation will peak at seven per cent, for example, and everyone will say the Bank got its forecasts wrong. As always, the act of publishing the forecast affected markets and economic behaviour.

Subsequent events showed how difficult long-term forecasts are. It would be unfair to suggest that the Bank of England might have predicted what happened next with the British Conservative Party’s mini-budget, although the clues were all there during the party’s leadership campaign. The row over the provision — or not — of a forecast from the Office for Budget Responsibility was a manifestation of our confusion about the role of prediction. It would clearly have been preferable to conduct a thorough test of the likely impact of budgetary decisions before those decisions were made (policy options analysis), but to call it a forecast and publish it separately is to create a self-fulfilling prophecy: if the OBR forecasts that a budget will be a disaster, this increases the chances that it will, indeed, be a disaster. Rather than an insightful tool to help policy decisions, forecasting becomes an exploitable political instrument. It cannot easily be both.

There is a moment in the film In The Loop when the actor Tom Hollander, playing a junior British minister, has to explain to the media his shifting position on a future war which he has earlier declared to be ‘unforeseeable.’ ‘All sorts of things that are actually very likely are also unforeseeable,’ he declares, to spin-doctor Malcolm Tucker’s exasperation. So many retrospectives on foreign and defence policy decision-making ask similar questions: how did we not know? Why did we not anticipate? Reviews often tend to focus on the phrase ‘warning failure’: a failure to provide an alert clearly enough for action to be taken.

We all participated in a vigorous but abstract debate in the latter half of 2021 and into early 2022 about whether Russia was going to invade Ukraine. The warnings were sounded clearly, but to be effective they would have had to be believed, and acted upon. That they weren’t may have arisen from confusion, again, about what a forecast or prediction can and can’t do. Influential figures have argued that the EU could be forgiven for not believing US and UK warnings about Russian intentions because intelligence had been wrong before, particularly on Iraq, and had led to embarrassing failures. But this muddles two fundamentally different concepts. In the case of Iraq, the question was ‘does Saddam Hussein have weapons of mass destruction?’ — the intelligence required concerned an already existing fact; he either did or he didn’t. The intelligence that provided the basis of the decision for action was factually incorrect. But on the question ‘will Vladimir Putin invade Ukraine?’ we are looking not for facts but for intent. One may draw conclusions about intent from observing preparations, as indeed happened before Russia invaded Ukraine, but until a decision is taken for action, it is still possible for something not to happen. We knew Russia had the capability; the question was what Putin would decide to do. By focusing almost exclusively on predicting and forecasting abstractly what he would do, we were misdirecting our effort: believing there was a possibility that Putin would order Russian forces to invade Ukraine should have been enough for planning meetings to begin working out the action needed to ensure that possibility did not come to pass. At the very latest, the threshold for that planning ought to be reached when analysis suggests an outcome has become more likely than not.

It may help if questions are not framed simply in stark predictive terms. The usual question is: ‘what will Putin do next?’; at present it is: ‘will Putin decide to use nuclear weapons in Ukraine?’ If we think this is a possibility, better questions for the analyst would be: what factors will alter what he decides to do? What are his risk calculations? Do we know what other states (including our allies) might do and what their capabilities are? And then comes the policy options analysis: what might be the impact of our actions on what Putin decides to do?

On Ukraine, the future is yet to be shaped. Action taken by Ukraine — and the degree of useful support it receives — will determine exactly how far and how quickly Russia can be pushed back. Military analysis provides situation updates that enable adjustments. Observation and intelligence indicate Russia’s strategic vulnerabilities. Diplomatic reporting and security partnerships provide insight into the position of allied countries. Assessment suggests a range of possible futures, a series of things that might happen if no action is taken. And then thinking should commence on how to prevent them from happening. Western governments are in a position to influence the strength of the Ukrainian fighting capability, to support the Ukrainian economy and weaken the Russian one, to alter decisions taken by Putin and his generals. Nothing in the future of this conflict is fixed. Any gloomy predictions should sit alongside suggestions of what might be done to stop them coming true.

Author

Suzanne Raine