The making of an intelligence failure

  • Themes: Geopolitics, Intelligence

At a time when western intelligence agencies are confronted by mounting threats from hostile states and terrorists, averting disaster requires clear logical processes, strategic foresight, and the will to act.

Two surveillance cameras isolated on black background. Credit: roibu / Alamy

In September, when drones forced Copenhagen airport to close, the questions came: was Russia responsible? How was the attack mounted? Within hours, conspiracy theories flooded social media claiming that the authorities had deliberately looked away. This pattern – shock, questions, accusations – has become grimly familiar after every attack on western soil.

After the attack on Heaton Park synagogue in Manchester, it was to be expected that the media would investigate how much the authorities knew about the terrorist beforehand. What was his motivation? Were there accomplices? Will there be further attacks? As the shock wears off, however, a sharper question follows: could the attacks have been prevented?

Following the collapse of the prosecution of two individuals in the United Kingdom on official secrets charges, a case in which the hand of Chinese intelligence was alleged to have been at work, the questions become political: did the previous government, in office when the alleged offences took place, assess China to be a national security threat? What is the intelligence evidence behind the prosecution (evidence the government has now published)?

When answers come, they will emerge from a careful multi-stage process of intelligence activity. But it is in the nature of human intelligence that those answers may be incomplete, fragmentary, and sometimes wrong. Nor can everything be made public. Were intelligence targets to understand what sources and methods have been used, they would be able to dodge and deceive them in the future. Equally, the inferential reasoning behind an intelligence judgement (for example, the attribution of a cyberattack to China, as the US and UK have done several times) may not be sufficient to meet the different evidential standards of a criminal court.

So how can western countries successfully use intelligence to thwart threats from terrorists and hostile powers? Understanding how intelligence judgements are arrived at reveals much about how national security success and failure come about, and how narrow the path is between them. Intelligence warnings follow several linked stages – and problems at any one of them can lead to failure. The process starts with the very existence of information that may be capable of revealing adversary acts, intentions and plans. Then comes the capability of human and technical intelligence sources to spot, access and collect that information.

The third stage is assessment by analysts of that reported intelligence, together with all other relevant data they have from both open and closed sources and from allies and partners as well. The primary goal is to explain what is going on, especially whether what is being observed involves hostile intent.

After this comes the fourth stage of using that understanding to make forward-leaning, probabilistic judgments of how events may unfold. To be most useful, this analysis should not just present the most likely course of events, but also flag up strategic notice of less likely, but potentially more serious, longer-term developments.

Next comes reporting these analytic judgements up to senior customers clearly and without spin, with any warnings of trouble ahead prominently highlighted. For strategic intelligence in the UK, that is the task of the Joint Intelligence Committee. But good intelligence reporting is not just a case of ‘fire and forget’.

The next, sixth, stage will then involve follow-up to ensure the customers – especially senior political and military leaders – fully understand what the assessments tell them (and what they don’t), even if the message is unwelcome.

The seventh and final rung on the ladder leading to success involves recognising that, even when customers do take the intelligence seriously, they must be in a position to act on it. Even when governments do not have the power to prevent anticipated threats materialising, they may be able to take mitigating actions to try to reduce their impact. That is the purpose, for example, of counter-terrorist alert systems and public warnings.

Analytic judgements will have to be made in conditions of great uncertainty; and, at the end, policy decisions will also have to be made in similar circumstances. Yet, when the process is used consistently and in good faith, governments and their publics end up significantly ahead of the odds. Over the last four years in the United Kingdom, according to the Director General of MI5, 31 late-stage terrorist plots have been foiled through integrated security, intelligence and police work – an extraordinary record.

In this multi-layered process, nevertheless, many opportunities exist for things to go awry, especially when the system has to run at high speed to keep up with the pace of events. The first consideration is obvious: evidence can only be found if it exists. There must be clues to detect and relevant data capable of being collected – traces our adversaries go to great lengths to avoid leaving.

In 1945, Soviet schoolchildren presented the US ambassador in Moscow with a wooden carving of the US Great Seal, complete with its bald eagle, as a goodwill gift. American security X-rayed every inch and found nothing. For seven years it hung in the ambassador’s office. But, throughout that whole time, a hollow resonating cavity in the carving was being flooded by microwaves beamed at the embassy by the Soviets. Sounds in the room modulated the microwave signal, which was captured by Soviet equipment across the street, enabling them to recover conversations in the room. Sometimes we can look for something and still not see it.

The second consideration in critiquing an intelligence process is whether there have been failures to recognise deception. Intelligence case-officers running human sources, for example, go to huge lengths to avoid arousing suspicion. They employ the traditional tradecraft of covert communications, microdots and dead drops to frustrate counter-intelligence. Today’s digital world enables spies to update the ancient practice of steganography, with messages hidden in billions of pixels of pictures and electronic games.
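As an illustration of the principle only (a toy sketch with invented pixel values and an invented message, not a description of any operational technique), the fragment below hides a short text in the least significant bit of each pixel value of an image, a change far too small for the eye, or a casual inspection of the file, to notice.

```python
def hide(pixels: list[int], message: bytes) -> list[int]:
    """Hide a message in the least significant bit of each pixel value."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    stego = pixels.copy()
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit  # nudge the pixel value by at most 1
    return stego


def reveal(pixels: list[int], length: int) -> bytes:
    """Recover a hidden message of known byte length from the pixels."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length))


# Hide a short message in a flat grey 'image' of 1,024 pixel values (0-255).
cover = [128] * 1024
secret = b"meet at dawn"
stego = hide(cover, secret)
assert reveal(stego, len(secret)) == secret
```

The difficulty for counter-intelligence is that, without knowing which picture to examine or how the bits are arranged, the message lies buried among billions of statistically unremarkable pixels.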

Espionage literature is filled with cases where the intelligence officer’s gaze was deliberately diverted. Before its surprise 1973 attack on Israel during Yom Kippur, the Egyptian army established a pattern of mobilising and then demobilising reserves to confuse Israeli intelligence. The head of Israeli military intelligence calculated that, without fresh Russian-supplied fighters and air-defence systems, Egypt and Syria could not defeat Israel. Since there were no indicators of such Russian support, an impending attack could be discounted. What he missed, however, was that Egypt’s President Sadat knew that, too. The eventual Egyptian and Syrian surprise attack did not attempt to win a total victory; rather, it aimed at the more modest, and perhaps more achievable, goal of reoccupying the Sinai Desert and Golan Heights, both of which had been lost in 1967, to create an improved basis for peace negotiations. As a result, when Egypt and Syria struck on 6 October 1973, the unsuspecting Israelis were caught on the back foot.

Data relevant to hostile intent can be acquired if intelligence agencies have the tradecraft and wit to know what may matter within a mass of material, and to test their threat hypotheses against it. The advent of generative AI will have a major impact here, with the ability to extract patterns from large datasets, already used to identify military targets from overhead imagery, and to apply automated causal reasoning to provide analysts with alternative explanations to test against the available evidence.

Even where clues to an adversary’s intentions are obtained, their significance may be overlooked. In August 2001, flight instructors at a Minnesota flying school reported Zacarias Moussaoui to the FBI, concerned that he wanted to learn to fly commercial jets but showed little interest in learning to land. Moussaoui was not, in fact, among the 9/11 pilots chosen by Al Qaeda – he was intended for a later operation – and he was arrested three weeks before 9/11 on immigration charges. But the flying school’s report might have provided an opportunity to begin to unravel Al Qaeda’s intention to use hijacked airliners, had it been put together with other clues that were emerging. In the end, however, the opportunity was passed over.

To help allocate resources to intelligence collection and analysis, the UK has a well-developed intelligence priorities system based around the Joint Intelligence Committee, which provides agencies with prioritised requirements from working-level customers in foreign affairs, defence and security. An obvious challenge is that priorities rightly reflect the urgent preoccupations of policy staff and ministers. To complement such a system, there must be an understanding in government that there will always be an overriding interest in intelligence warning to guard against surprise. That cannot be guaranteed, but agency chiefs need latitude to reserve capacity for investigating weak signals where their experience leads them to worry that new threats may be emerging.

The next stage in the generation of intelligence warnings is the really hard part: all-source assessment. This is where things may go badly wrong. What is the best explanation of the available data? Does it point to threatening developments or not? Remember: data by itself has no intrinsic meaning except that projected onto it by the analyst, who turns it from data into information illuminating a situation.

In this stage of analytic judgement, many unconscious cognitive biases can operate: expectation bias (seeing what, deep down, we want to find), optimism bias (not wanting to accept the reality of a threat of real harm), mirror-imaging (imagining that an adversary thinks as we would in those circumstances), and perseveration (clinging to old ideas beyond their sell-by date). These biases can all skew the thinking of individual analysts and, at a later stage, that of their customers. Groupthink can affect analytic teams without their realising that they are not being sufficiently challenging.

To combat these tendencies, the best analysts construct alternative hypotheses to explain the available information and provide Bayesian estimates of likelihood for each, including any less likely but more threatening cases, showing how events might unfold. Ideally, they advance as most likely the hypothesis with the least evidence against it, avoiding the unconscious temptation to cherry-pick data supporting a favoured theory.
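To make that Bayesian updating concrete, here is a deliberately simplified worked example with invented numbers: suppose the analyst starts by judging an attack to be as likely as not, and a new report arrives that is three times more likely to be seen if an attack is being prepared (probability 0.6) than if it is not (probability 0.2). Bayes’ rule then gives

\[
P(\text{attack} \mid \text{report}) \;=\; \frac{0.6 \times 0.5}{0.6 \times 0.5 + 0.2 \times 0.5} \;=\; 0.75,
\]

so a single report of that quality should move the analyst from even odds to roughly three-to-one in favour of the threatening hypothesis. Run for each alternative hypothesis in turn, the same calculation makes explicit how far new evidence should shift confidence, and exposes any temptation to discard quietly the data that count against a favoured explanation.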

When it comes to the stage of writing up and reporting the results of analysis, understanding the distinction between secrets, mysteries and complexities is crucial. Professor R.V. Jones, the distinguished founder of scientific intelligence practice during the Second World War, used to separate judgements based on analysis of ‘secrets’ from those resting on assessment of ‘mysteries’. Secret intelligence is capable of revealing what adversaries hide. If the digital and human tradecraft is good enough, if cost is no obstacle, and if we have continuing cooperation with close allies and intelligence partners, then in principle any secret could be obtained, although in practice we can access only a small proportion of the secrets we might seek.

Mysteries are different. The key data-points don’t yet exist. Even if individuals harbour sympathies with ISIS or hold white supremacist views, no visible signs may emerge before such an individual decides to resort to violence. Some sudden shift in personal circumstances, or exposure to extreme propaganda on social media, can rapidly flip them into committing individual acts of violence. That absence of reliable indicators – unlike the detectable preparatory activities of traditional terrorist groups constructing complex mass casualty plots – makes intelligence warning of lone-actor terrorism extraordinarily hard.

The same logic applies to the autocrat massing military forces on a neighbour’s border, as the Soviet Union did with Czechoslovakia in 1968, Saddam with Kuwait in 1990, and Putin with Ukraine in 2021 and 2022: intelligence analysts cannot know what the autocrat himself has not yet decided. Will coercion secure his objective, or has the time come to order invasion? However well-placed the human agents, however good the signals intelligence, no one can read minds or predict the future.

Matters become more complicated with ‘complexities’. Here the assessment ought to depend on the analysts’ view of what the autocrat might believe the response of the US or the EU would be if he were to order offensive action, including any deterrent signalling that may have been sent. Of that, the analysts cannot be sure. They can only present their opinions based on a range of assumptions and sources of information, using their experience and expertise.

When it comes to arriving at key judgements, a decision will need to be taken on whether they meet the threshold for issuing an intelligence warning or for changing the counter-terrorism alert state. Anticipation of trouble should create a powerful mental picture in the analyst’s mind of what may happen, its initial impact, and the likely extent and duration of the disruption. Warning is a communicative act motivated by the intention to increase the chances of preventive action. Here, the analyst may run into understandable scepticism from intelligence bosses, sensitive to the danger of ‘crying wolf’ and aware of the costs of raising alert states and of requesting military or police deployments. Persuasion, and sometimes advocacy, may be needed to justify the knowledge claim about what might happen and when. Analysts might also face frustrating requests to review the evidence again or to hold off until more convincing intelligence arrives. Even experienced intelligence chiefs can suffer from cognitive biases, especially if intelligence appears to contradict long-accepted points of view.

A vital consideration comes into play in the interaction between senior intelligence officers and their customers. Intelligence chiefs should never shrink from telling politicians what they need to know, even when well aware that their bosses would prefer not to hear bad news. That awareness can inhibit candid reporting or, in the worst cases, create feedback that distorts the original assessments. That may have been the case with the apparent FSB failure to warn President Putin of the strength of Ukrainian national feeling in 2022, which resulted in a badly conceived plan to topple President Zelensky. Putin had publicly stated his belief that Ukraine wasn’t entitled to the status of an independent nation. His intelligence chiefs would have been all too aware that he would be impervious to arguments suggesting that Ukrainians were likely to fiercely resist any Russian invasion – and that he would be angered if his view were implicitly contradicted.

The final rung in the intelligence warning-ladder concerns the ability to use a warning to pre-empt trouble or at least mitigate it. A dramatic example of this occurred in the lead-up to Russia’s invasion of Ukraine in 2022. The US and UK took the unprecedented step of repeatedly declassifying and publicly releasing their intelligence about Russian military preparations, false-flag operation plans, and invasion timelines. They revealed, before events unfolded, specific operational details gleaned from intelligence they would normally never confirm. The goal was explicitly preventive and narrative-shaping. Yet even then the specific US and UK warnings of an imminent Russian military invasion weren’t fully accepted by their French and German counterparts – nor by Zelensky himself, who woke to the sound of Russian bombs falling on his capital.

According to Israeli media, before the 7 October Hamas attack on Israel there were Israeli military intelligence sightings of Hamas fighters training and conducting reconnaissance. The IDF even acquired a Hamas attack plan. But these signs appear to have been explained away higher up the chain as Hamas posturing to keep up their morale and wind up the IDF. Hamas was assessed to have neither capability nor intent to carry out a major attack. As a result, no warning was issued, and no operational contingency plan was created.

Behind that flawed judgement may lie a fatal policy failure at Cabinet level to recognise and plan for major risks in the political strategy towards Gaza. It’s always dangerous to live in a political climate where the accepted wisdom is that the worst cannot happen because the policy is that it cannot happen. Tragically, even very unlikely things do sometimes happen. That is why contingency planning should be based on reasonable worst-case scenarios.

In conclusion, intelligence can improve the quality of decisions on vital issues of national security by reducing decision-makers’ ignorance of the potential threats they face. Secret intelligence achieves this using information that those who mean us harm don’t wish us to have. It is therefore a vital component of statecraft, public safety and security. Intelligence is not easy to acquire, analyse, or use wisely. But it can confer invaluable decision-making advantage. Equally, failure at any stage can have calamitous results.

Somewhere tonight, analysts will be weighing fragmentary evidence, deciding whether it merits waking a Cabinet minister. They balance institutional caution against the imperative to warn. They may get it wrong. The odds are that they will get it right. In that tension between fallibility and necessity lies the irreducible nature of intelligence work – and the narrow path between security and disaster.

Author

David Omand