Disinformation in the information age

The line between disinformation, propaganda and fake news is often blurred. This is especially the case when it is unclear whether these untruths or half-truths are being disseminated by the 'good' or the 'bad' guys.
Soviet propaganda at the Cold War museum in Plokstine, Lithuania. Credit: Franz Aberham via Getty Images.

Disinformation is a term currently in common use in the media and in academic and political discourse, along with related concepts like ‘fake news’. It almost always carries a negative connotation (particularly, in the West, in relation to Russian activities). Investigations into alleged disinformation campaigns and their impact are under way in various countries, and the term is often used as shorthand for ideas we do not like or think are dangerous. Methods of disseminating disinformation range from feeding stories to journalists and broadcasting through state-controlled media to setting up fake websites and social media accounts, deploying automated bot accounts and making prank phone calls.

But what does disinformation really mean? How does it differ, if at all, from misinformation, propaganda, deception, fake news, or just plain lies? Is it always a bad thing, or can it be a useful and indeed necessary tool of statecraft? And if we are targeted by disinformation, what can we do about it? There are no easy or straightforward answers to these questions, but tackling them is a useful way of learning to navigate the stormy oceans of information disseminated constantly, globally and at lightning speed in the 21st century.

Dictionary definitions generally agree that, at root, disinformation is false information disseminated intentionally to deceive; it may seem truthful, relevant and based on objective fact, but it is designed to mislead the recipient in order to obtain advantage, whether electoral, military, monetary or political. Misinformation might sound like the wrong information put out by mistake, while disinformation sounds bad, untruthful, malign: a deliberate strategy of deceit. Is it really that simple? Every day, false stories are put out, whether intentionally or because of incomplete information, and judging them on the basis of a simple definition seems a risky strategy. It involves a possibly premature value judgement about the intent of the individual, organisation or state propagating the information.

Defining propaganda can be tricky, too. It may be perceived as material intended to promote a point of view or to persuade, a little exaggerated maybe, but essentially truthful, and therefore seen as benign or at least harmless. Alternatively, it may be regarded as an attempt by a political party or interest group to win people over to its viewpoint, persuasive possibly, but also clearly recognisable for what it is. In the late 1940s, the British Foreign Office created a whole department, the Information Research Department (IRD), to combat Communist propaganda from the Eastern bloc; both sides’ activities were semi-clandestine, so that recipients could not necessarily identify the official origin of the material. Yet the difference between propaganda and disinformation is not easily pinned down. A brief put out by the National Endowment for Democracy states that some define propaganda as ‘the use of non-rational arguments to either advance or undermine a political ideal’, while disinformation is a word for undermining such propaganda.

Cynthia Grabo, the former US intelligence analyst, says that propaganda can be true or false, or somewhere in between: if the information being disseminated is true, it counts as public diplomacy; if false, it is disinformation. These distinctions plunge us into very deep water indeed: truth and falsehood can be subjective concepts in the age of social media and the 24-hour news cycle.

Distinguishing between propaganda, fake news and disinformation is clearly not straightforward. As for deception, although by definition it would seem to differ little from fake news or lies, when the term is used in a military context its connotation is sometimes positive, even celebratory. One of the most famous successful deception operations of the Second World War was Operation Fortitude, intended to make Hitler and his generals believe that the Allied invasion in June 1944 would land in the Pas de Calais rather than Normandy. A more recent example, with a similar objective, was Operation Desert Deception, executed in the run-up to the First Gulf War in 1991 following the Iraqi invasion of Kuwait: it was designed to convince the Iraqis that the main Coalition attack would come directly across the Kuwaiti border and from the sea, rather than sweeping west through the desert to strike from the flank and rear. Both operations were elaborate, involving subterfuge, media manipulation and decoy tactics – lies, in fact. It is very difficult to see how this differs from disinformation, even if it was disseminated by the ‘good’ rather than the ‘bad’ guys. And as soon as we accept that disinformation can be used for good purposes, we stray even farther into the realm of subjective judgement, as illustrated by the episode in May 2018 when a Russian dissident journalist, Arkady Babchenko, faked his own death with the connivance of the Ukrainian authorities.

There is a tendency for those commenting on these issues to slip into hyperbole, talking about an unprecedented threat to democracy, as if the use of disinformation were something new (some even insist it originated in Soviet Russia, with the tactic of dezinformatsiya adopted in the 1950s). Yet there is nothing new about disinformation: governments, organisations and individuals have always employed it, even if they did not call it by that name. Indeed, as we have seen, it has sometimes been regarded, particularly in wartime, as a perfectly legitimate tool of policy. Examples of disinformation can be found throughout history: in the Roman Empire, in religiously motivated information battles in the 16th and 17th centuries, in the development of news and popular literature in the 19th century, in both Bolshevik and anti-Bolshevik activities in the interwar period, and in contemporary right-wing populist movements, as well as in more recent cases such as Russian stories put out during the Ukraine conflict.

In fact, disinformation is an ancient concept. Professor Neville Morley of the University of Exeter argued, in his evidence to the UK’s parliamentary Culture, Media and Sport Select Committee investigating fake news, that the underlying issues can be traced back to ancient Greece. Thucydides, he said, identified the ‘deliberate manipulation of information in order to influence decision-making, but also the distorting effects of political polarisation on cultures of truth and democratic discourse, and the cognitive biases of the mass of citizens’ – a pretty good description of disinformation and the reasons it causes concern in the 21st century.

Plato took a rather different approach. In The Republic, Socrates makes the argument that the ‘noble lie’ could be used by rulers (the good guys, he meant) in order to secure the stability of the state and the wellbeing of its citizens: ‘Is it not useful against enemies, and a good remedy to divert so-called friends from any evil intention they may form in madness or folly?’ If, he says, the rulers ‘make the falsehood as near the truth as possible, is our action not useful?’ Can this be classed as disinformation? It certainly seems like it. 

There are two points about Plato’s thesis that are very relevant to disinformation in the information age. One is that the justification for using the noble lie is that it assumes the person employing it knows ‘what is best’ for the people to whom the lie is being told. While there are clearly dangers in this – that it is fine for those who ‘know best’ to spread disinformation for the benefit of those less knowledgeable – this line of argument is worth pursuing, since it underlines the importance of trying to identify the intent of the source of the disinformation. For example, when looking at alleged Russian disinformation activities, it is vital to try and see how things look from Moscow, to understand the motivation without necessarily accepting the premise. The same applies to any perceived disinformation activities, whatever their source. 

The other relevant point about Plato is that he considered the noble lie necessary because contemporary Athenian society was in the throes of a social revolution. With the rise of democracy and individualism straining society and the old constraints of religion and magic weakening, people felt the dislocation of life in a more open society: pressure to think for themselves, to be rational and accept responsibilities, to live and cooperate with different kinds of people. In particular, this dislocation arose from the development of faster sea communications and international trade, leading to increased immigration, an influx of new people and new ideas, and competition for scarce resources and with other trading nations. The Athenian oligarchs felt their authority and the foundations of their society threatened by these developments. In other words, Plato, born in the 5th century BC, argued that disinformation became necessary because of a loss of social cohesion, a breakdown in the old patterns of trust and deference, and a more international world. It is an argument strikingly analogous to those made about the use of disinformation in the age of the internet, social media and the 24-hour media cycle.

Another example of the historical use of disinformation illustrates the complexities of getting to grips with it. This one is rather more modern, though still nearly a century old: the Zinoviev Letter of 1924. It was a letter ostensibly written by Grigori Zinoviev, head of the Executive Committee of the Comintern, the Bolshevik propaganda organisation, to the Communist Party of Great Britain, in September 1924 (at the tail end of the first ever Labour government in Britain), exhorting them to greater revolutionary effort. The text of the letter was leaked, published in the Daily Mail, and used to discredit the Labour Party during the campaign leading up to a general election in October 1924. The letter was almost certainly forged, though it is not certain who forged it. The British security services were implicated, possibly in the forgery but more likely in the leakage and political manipulation; other possible candidates include both Bolsheviks and White Russians, Polish and German intelligence and the British Conservative Party. Even if the letter had been genuine, it was used as part of a disinformation campaign, to influence the British electorate against the Labour Party (and the letter has been brought up in the course of British election campaigns ever since). 

The Zinoviev Letter was a classic piece of disinformation; by whom, targeted against whom and at whose expense is by no means certain, but it was used for political purposes by a number of different interest groups. One aspect of the affair brings us back to the present day, and illustrates the dangers of disinformation. In October 1924, the British Foreign Office drafted a letter of protest to the Soviet government, in response to what the Labour government called an unwarranted act of provocation and interference in the British political process. The evidence (including Russian evidence) suggests that when the protest arrived in Moscow, the Politburo initially had not the slightest idea what the British were talking about (one of the reasons to believe the letter was a forgery). But the Soviet leaders decided very quickly what the proper course of action should be: first, to deny everything; secondly, to suggest that the British themselves must be responsible for the forgery. This is a classic response to disinformation, today as well as in 1924. Denial of guilt and an attempt to deflect blame back onto the accuser are no guarantee of either truth or falsehood in any disinformation campaign. Historically, it is a recognised tactic that, apart from anything else, is intended to convince the domestic audience of what a state wishes its people to believe. Obviously this is easier to bring off in an authoritarian state, where the media may be constrained, but it can be adopted anywhere.

Disinformation may be employed for well-intentioned reasons: to reassure the populace, to shore up the authority of the state or to strengthen it against a perceived external threat – disinformation as a tool of policy. The danger is that this can mask both truth and falsehood, undermining public trust in authority and encouraging the idea that truth is a subjective concept – whatever those in power say it is – leading to a situation of ‘post-truth’. In turn, this can alienate ordinary citizens from the political system and encourage the development of intolerance and extreme views. It can also exert pressure on decision-makers, with the potential for premature or unwise courses of action. Ted Sorensen, special counsel to President John F. Kennedy, later argued that if, during the Cuban Missile Crisis of 1962, Kennedy and his team had been subject to the kind of media pressure exerted today, they would never have been able to keep the presence of Soviet missiles in Cuba confidential for a week. That would have made it more likely that the response initially favoured by the US military – an airstrike followed by invasion – would have been selected, possibly leading to nuclear war.

In the information age, the speed and ubiquity of communications and social media, international and political instability, and a range of other factors including climate change, the proliferation of nuclear weapons and religious intolerance combine to create an environment in which disinformation can flourish and be embraced by a range of actors as a tool of policy. How, then, should we deal with it? There is no point in throwing up our hands in outrage and pretending that only the bad guys use disinformation. It is, in one sense, part of strategic communications, a pervasive element of contemporary statecraft. But we need to be able to detect it when it is used against us, and to have strategies for handling its impact. We should be educating people to recognise disinformation, even if they are unable to discern the ‘real truth’.

The BBC has recently developed a game, iReporter, in which young teenagers take on the role of a journalist and are challenged to decide which sources, claims, pictures and social media comments should be trusted, while Facebook is hosting a game, devised by Nato’s Strategic Communications Centre of Excellence, to teach people how to spot disinformation. These are positive developments, and they indicate the level of concern prevalent in official and commercial circles. But what tools are available to those already involved in policymaking, foreign affairs, or external communications?

One way of looking at this is to consider disinformation in the same way that we consider intelligence (whether from secret or open sources). Like intelligence, disinformation is only ever a small part of the overall picture on which decision-making is based, and it is only significant if detected and made use of in the wider context of policy. We know, for example, that disinformation, or at the least ambiguity about information, can be a tool in a state’s strategic toolbox: witness the Russian hybrid attack on Estonia in 2007, and disinformation campaigns during the war over South Ossetia in 2008 and the annexation of Crimea in 2014. Or to take a different kind of example: the US Department of Defense disseminated stories about UFO sightings in the 1980s to conceal the trials of high-tech weapons whose existence it did not want to reveal. Some of these uses of disinformation are examples of ‘deflection’ or ‘perception management’, when governments or organisations disseminate information that will attract attention and distract from what is actually happening. Sometimes this can be done from apparently benign motives, for example to reassure the public, to prevent unnecessary panic, or to prepare them for unpleasant news. Alternatively, the motive might be to project uncertainty, either to warn against a potential threat or to conceal its nature.

If we are going to think of disinformation as a tool in the strategic toolbox, the first thing is to recognise it as such. This means treating disinformation in the same way as intelligence: interrogating sources, avoiding mirror-imaging and confirmation bias, and trying to look at the situation from the point of view of the source of the disinformation. It means separating capability from intent: just because a particular state or organisation has the capability to use disinformation to interfere in, or affect outcomes in, another state does not mean that it did so. (And even if it can be proved that it did, measuring impact is extremely difficult.) Nevertheless, to take a military approach to the problem, capability plus intent equals a threat, and combined with vulnerability that adds up to risk. The first element in defence against disinformation must be the acceptance of that risk. The former intelligence professional Sir David Omand, in his seminal work Securing the State (2010), argued that the best way for governments and their advisers to reduce risk and protect the state was to ‘sustain a supportive public opinion and a proper understanding of the intelligence community and its constituent parts’, and to develop a ‘modern citizen-centred approach to national security strategy’. The same approach must be adopted for disinformation.
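
Put schematically – and this is a sketch only, since the ‘addition’ here stands for the conjunction of factors rather than arithmetic – the risk heuristic in the paragraph above can be written as:

\[
\text{Threat} = \text{Capability} + \text{Intent}, \qquad \text{Risk} = \text{Threat} + \text{Vulnerability}
\]

Seen this way, assessing a suspected disinformation campaign means establishing each term separately: capability without demonstrated intent, or intent without a corresponding vulnerability, does not add up to the same risk.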

This may be a counsel of perfection, but it is vital, not just for leaders and policymakers but for the wider public as well. In the ancient world, philosophers argued that political authority depends on citizens who think, judge and fact-check for themselves. The same holds true for disinformation. Defence against disinformation means understanding what might happen if information is compromised, collaborating with others to identify the risk and working together to mitigate it. Recognising disinformation, and accepting shared responsibility for the risks it brings, is an essential tool in the box of those seeking to protect themselves against it.

This essay originally appeared in ‘Knowledge and Information – Perspectives from Engelsberg Seminar, 2018’, Bokförlaget Stolpe, in collaboration with the Axel and Margaret Ax:son Johnson Foundation.

Gill Bennett

Gill Bennett was Chief Historian of the Foreign & Commonwealth Office from 1995 to 2005, and Senior Editor of the UK’s official history of British foreign policy, Documents on British Policy Overseas. Since then she has been involved in a number of research and writing projects in Whitehall, including work on the official history of the Secret Intelligence Service. She is an Associate Fellow of the Royal United Services Institute and a Fellow of the Royal Historical Society. Her books include Churchill’s Man of Mystery: Desmond Morton and the World of Intelligence (2006) and Six Moments of Crisis: Inside British Foreign Policy (2013).
