Wise warnings matter even more in this age of pandemics and big data

We need to focus on how we warn about threats and hazards. Data and forecasting can only get us so far – the critical factors are human instinct and courage.

Children taking part in a gas mask drill during the Second World War. Credit: Bettmann / Getty Images

We talk a lot about uncertainty in these uncertain times, but here are some certainties: terrorists will surprise you; volcanoes will erupt; floods will inundate communities; hostile states will commit hostile acts; the economy will suffer shocks; cyber-systems will be attacked; and new diseases will rage out of control. Anxiety about uncertainty has become an orthodoxy, but we’re anxious about the wrong thing: we should not worry whether events will happen, but when and where. And because we cannot possibly predict this, what really matters is the quality of our warning systems. These alone will enable the best possible protection and preparedness to prevent crises from happening, or make them seem less of a crisis when they do.

All the things we worry about will probably happen sooner or later in some form, even those at the ‘low probability and high impact’ end of the risk register.  Although the political focus is now on developing resilience and the ability to absorb a shock, adapt, and emerge even more competitive than before, it is still not clear how to get that posture right. In March 2021 the UK government published its Integrated Review of Security, Defence, Development and Foreign Policy. Among other things, it said ‘we will improve our ability – and that of our allies and partners – to anticipate, prevent, prepare for, respond to and recover from risks to our security and prosperity.’ The word ‘risk’ is used on forty of 111 pages of the review, sometimes multiple times, and the word ‘resilience’ on thirty-six pages. In contrast, the word ‘analysis’ is used on only twelve pages, and the word ‘assessment’ on thirteen; the word ‘warning’ is not used at all. This is a crude metric, but a striking one. You can’t develop resilience against risks if you don’t have any means of warning that events are about to happen. Warning is not yet the buzzword it needs to be.

How then to build a system crouched like a leopard: alert, poised to spring, knowing it can outrun an unfortunate gazelle even if it can’t catch it with a single leap? The answer – possibly prosaically – lies in having effective warning systems backed up by deep analysis and monitoring of risk, and in having people who are capable of deciding when to sound a warning. Computers and data are a part of the answer, but just as important is retaining confidence in human instinct and intuition. In June, University College London launched the Warning Research Centre, the first of its kind, bringing together global expertise across a range of disciplines and geographies, to explore the role of warnings in managing vulnerabilities, hazards and threats, and disasters. Its aim is to make warnings more effective on a local, national and international level and to raise the profile of warning itself.

What can be warned against? It helps to break down risks into two different categories: bad things that are impossible to prevent from happening (such as those listed above), and bad things that must never happen (extreme, existential, catastrophic risks such as mass extinction or artificial intelligence takeover). The two should be treated in different ways. In the case of the former, the issue is how to develop a system of monitoring and warning that can instantly flex to identify and respond. The latter, however, cannot be managed through such a system; if the premise is that the risk must never be allowed to happen, it must be addressed by a plan of action. Risk management then comes through bearing down on the risks to the delivery of that plan.

Changing the future

Properly done, risk management and warning is not about guessing or predicting the future, but changing it. It exists to make sure things don’t happen. For this reason, it has little to do with the kind of tools to predict the future which have been embraced – arguably too warmly – by enthusiasts of ‘superforecasting’, a mix of crowd-sourced future thinking, foresight and horizon scanning. One of the shortcomings of superforecasting is that it assumes a passive position towards the future; there is little space for personal agency. There is also a problem with timescales: looking too far into the future, or using too large a scale, creates a limitless number of possibilities – any of these might happen with varying probabilities. A probability assessment has limited usefulness in these circumstances because at the point of deciding whether a warning should be issued, the analyst has a yes/no choice: is something more likely to happen than not – i.e. is a warning necessary now? It might be better to stop talking about this kind of forecasting as ‘predicting the future’ and instead say it provides a tool for thinking about the future. There is a role for it as a method of identifying future risks, and it can help hone resource decisions, but it does not identify when to warn. It should act in support of, rather than distracting from, a warning system.

Any decision on whether to warn in advance about a predictably unpredictable event – low likelihood but high impact scenarios such as terrorism, floods, or pandemics – will have to confound likelihood probabilities. This category of hazard or threat is extremely unlikely to happen in a particular place at any given moment in time, but will happen eventually somewhere. So any system that relies on probabilities (which includes most forecasting tools and increasingly advanced algorithms – all the potential ‘quick-wins’) will miss the instinctive moment to warn. The fact of warning is by definition an act of setting one’s face against the statistics. A good warning anticipates and in turn enables anticipation. While the current trend is for ‘data-driven decision-making’, a warning cannot rely solely on the science: if you wait for evidence you will lose confidence in your ability to act before something happens.
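
A back-of-the-envelope calculation makes the point (the figures below are invented purely for illustration, not drawn from any real risk register): an event that is vanishingly unlikely in any one place in any one week becomes a near-certainty somewhere, given enough places and enough time – which is exactly why a system tuned to per-place probabilities never quite reaches the point of saying ‘warn now’.

```python
# Illustrative only: invented figures showing how a 'low likelihood' event
# becomes a near-certainty once enough places and enough time are in play.
weekly_probability = 0.001          # 1-in-1,000 chance in one region in one week
regions = 50                        # number of places being watched
weeks = 52 * 10                     # a decade of watching

p_nowhere = (1 - weekly_probability) ** (regions * weeks)
print(f"Chance it happens somewhere within the decade: {1 - p_nowhere:.6f}")
# ~1.000000 – yet in any single region, in any single week, a purely
# probability-driven system would still, correctly, say 'very unlikely'.
```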

The sounding of the alarm, the lighting of the beacon, the cry from the watchtower – these are only one part of a well-functioning warning system. In simple terms, a warning system needs, firstly, the technical capability to access, filter, record, and make sense of all available and rapidly changing data and information. Data and information are the starting point, not the end point: it takes designated professional analysis and assessment by human beings to turn data into an understanding of how each risk is changing. The analyst should be vigilant: watching everything, enquiring, tracking trends, flagging them as soon as they begin to escalate. It is this that enables the necessary short-term projection that guides the analyst’s judgement on warning: at a certain point, the analyst needs to take the decision on whether to warn; at that moment, the time for debate is over. Ideally this is a formal warning from a designated body (the simplest example of which is a threat level system such as the one the UK’s Joint Terrorism Analysis Centre operates for terrorism, or the US National Hurricane Center’s for tropical cyclones). A warning needs to be heard, so it must be communicated effectively with a single, empowered voice that will not get lost in debate or ambient noise. And finally, it should be acted upon: it should be absolutely clear who leads on responding to the warning, either to stop an event happening, or to set up protections.
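
As a thought experiment, the stages described above can be sketched as a pipeline – collect, assess, decide, warn – though every name, level and threshold below is invented for illustration, and the assessment step in particular stands in for human judgement rather than anything a calculation could replace.

```python
# A schematic sketch of a warning pipeline. All names, levels and thresholds
# are invented; 'assess' is a placeholder for human analysis, not an algorithm.
from dataclasses import dataclass
from enum import Enum

class ThreatLevel(Enum):
    SUBSTANTIAL = 1
    SEVERE = 2
    CRITICAL = 3

@dataclass
class Assessment:
    trend: float        # how quickly the indicators are escalating (0–1)
    confidence: float   # how much the analysts trust the picture (0–1)

def assess(reports: list) -> Assessment:
    """Turn raw reports into a judgement about how the risk is changing."""
    escalating = sum("escalating" in r for r in reports)
    return Assessment(trend=escalating / max(len(reports), 1), confidence=0.7)

def decide(a: Assessment) -> ThreatLevel:
    """The yes/no moment: is this now more likely to happen than not?"""
    if a.trend > 0.5 and a.confidence > 0.6:
        return ThreatLevel.SEVERE
    return ThreatLevel.SUBSTANTIAL

def warn(level: ThreatLevel) -> None:
    """One empowered voice, one unambiguous message."""
    print(f"FORMAL WARNING: threat level raised to {level.name}")

warn(decide(assess(["stable", "escalating chatter", "escalating movement"])))
```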

By this logic, when should the warning of an approaching pandemic have been sounded? And how would it have worked? Biosecurity risks could be managed in exactly the same way as terrorism or volcanoes or the weather, and the fact that they weren’t was simply a systemic failing. The problem was not lack of foresight (a pandemic had been clearly identified as a risk), but a lack of investment in, and co-ordination of, assessment, monitoring and trajectory mapping – and, for whatever reason, a decision (or rather, a non-decision) that a formal warning system was not necessary. The UK government’s cross-departmental Biological Security Strategy published in July 2018 aimed to ‘detect, characterise and report biological risks when they do emerge as early and reliably as possible’ and to ‘respond to biological risks that have reached the UK or UK interests to lessen their impact and to enable a rapid return to business as usual.’ The strategy drew all the right conclusions about threat and likely cost, and used the right language about monitoring and assessment. But while it talked briefly about warning, it did not articulate clearly whose job it would be, missing the chance to set up a comprehensive warning system.

If the pandemic threat had been terrorism or the weather, there would have been, in an ideal scenario, in January 2020, co-ordinated teams of analysts at a national level in an empowered analytical body, watching from every angle and knowing it was their job to warn of danger. They would have been asking questions such as: where are new variants of infectious diseases emerging? What are their vectors and rates of transmission, and in what conditions do they thrive? Another team would have analysed and understood how people move, and from where to where: flights, other transport routes, particular patterns of movement, and pinch-point vulnerabilities could all have been identified. That alone would have provided a model for how a virus might spread. Those involved would have sounded formal warnings, which would have been clearly heard by the agencies needing to take preventative or protective action – because it is their job to respond to the warnings. The foundation of the UK’s Joint Biosecurity Centre in May 2020, and the establishment of a Covid-19 alert level, were a first step.

Ideally, national warning systems would be co-ordinated at an international level. The World Health Organisation’s warning system failed to function effectively at the beginning of 2020. On 30 January 2020, the WHO’s director-general declared the novel coronavirus outbreak a public health emergency of international concern (PHEIC), the organisation’s highest level of alarm. During February it convened a Global Research and Innovation Forum, which discussed the issue, and it had a joint mission with China, which warned on 24 February that much of the global community was not yet ready and called for fast decision-making and near-term readiness. But it did not declare the coronavirus outbreak to be a pandemic until 11 March 2020, noting that in the preceding two weeks the number of cases outside China had increased thirteen-fold. The WHO waited for it to become a pandemic before declaring it a pandemic. It could, alternatively, have sounded a pandemic warning at least a month earlier, saying: everything about the trajectory indicates this will be a pandemic within three weeks. Nations must act now.
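
The arithmetic of reading that trajectory is not complicated. Taking only the thirteen-fold figure cited above, and assuming – as a rough sketch, not a claim about the WHO’s own modelling – that the trend simply continued, the implied doubling time was under four days, and the implied growth over a further three weeks roughly another forty-seven-fold:

```python
import math

growth_factor = 13      # reported increase in cases outside China...
over_days = 14          # ...over the preceding two weeks

daily_rate = math.log(growth_factor) / over_days
print(f"Implied doubling time: {math.log(2) / daily_rate:.1f} days")                 # ~3.8 days
print(f"Implied further growth over three weeks: x{math.exp(daily_rate * 21):.0f}")  # ~47-fold
```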

The critical component of any warning system, often completely overlooked, under-resourced, or not integrated properly into decision-making, is analysis and assessment. Only world-class analysis and assessment capabilities will enable governments to position themselves against all strategic risks. There is a bewildering amount of static noise: analysts now have an infinite – and growing – amount of information at their fingertips, in large pieces, and in fragments. Every fragment of information might be the critical one, and the real skill is as much in deciding what to discard as what to keep. The possession of an infinite amount of information can in itself be more of a liability than an asset; the more data you hold, the greater the risks that come simply from holding it. It quickly becomes a liability if, for example, it is not stored securely (lost in computer systems, or stolen by hackers), or if it is judged in hindsight not to have been properly acted upon.

This puts a premium on the systems which hold the data: they must be secure but, more importantly, also accessible to people who know how to search, what to search for, what to discard, and how to assemble what they’ve found in a way that provides genuine insight and enables swift, anticipatory decision-making. It also puts a premium on the people using the data and the questions they ask of it. Some of this will be done by data analysts, but good analytical teams consist of a blend of experiences and skills and – most importantly – challenge each other. In this way, the organising of the data is just as important as the collection of it. You can have all the information in the world at your fingertips, but that doesn’t mean that you know anything.

Deciding to warn

When it comes down to it, this is not just about computers and data but people and hunches. For those operating the warning function, deciding when to warn is a very difficult, nail-biting process. It should not be a policy or a political decision: those come later, in deciding what to do about the warning. A warning needs to be demonstrably free of political influence or agendas, to ensure it is taken seriously. It needs to be authoritative, which means it needs to come from a body which has weight and knows what it is talking about, and to maintain its authority it needs to be right more often than not. There is a very fine line between crying wolf and warning failure, and a real risk that over-warning reduces the effectiveness of those warnings which need to be taken seriously. It takes real nerve to avoid warning-inflation and hold off until the last moment while still managing to warn before the event happens. Paradoxically, the best warning may end up looking to outsiders like a false alarm, because it enables action to be taken: mitigation put in place or an attack stopped. The warnings people see and remember are those that come too late, or were not heeded.

Some people are more confident than others in making a judgement in the absence of evidence, in choosing which risks to escalate and when to let them go. While it is important to have clear indicators and criteria for analysts to apply before sounding a warning, the point of decision comes before, not after, the data is perfect; in terrorism the greatest risk is before, not after, an attack takes place, and in a pandemic the time for action is before it becomes a pandemic. The critical additional factor is a human one: analysis needs to be rooted in an understanding of all available information, but it also needs wisdom, experience, judgement, and the courage to be able to say: ‘I know this is highly unlikely, but I think it is going to happen now.’

Author

Suzanne Raine