The Spying Game Will Find a Use for AI
- May 22, 2023
- David Omand
Spies have always applied the latest technology to their craft: Artificial Intelligence will be no different.
To be a good intelligence officer is to know how to penetrate the defences of the enemy in order to acquire his secrets – information that the adversary desperately does not want you to have – preferably doing so in ways of which he is unaware. Especially in time of war, intelligence officers have turned to science and technology to help overcome the obstacles of concealment, camouflage, deception, and encryption deliberately placed in their path to obscure capabilities and intentions. Today it is no different, as our intelligence agencies harness the recent revolutionary advances in machine learning and Artificial Intelligence (AI). At the same time, our adversaries can exploit these technologies to try to undermine our way of life and democratic institutions.
The association of secret intelligence with technology may bring to mind the image of James Bond’s quartermaster Q providing hi-tech gadgetry to keep MI6 officers and their agents safe. During the American Civil War and the later Franco-Prussian War, it was the new electric telegraph cables that were the target of intelligence activity. By the end of the First World War, interception and geolocation of radio transmissions were being conducted on all sides. The response was to develop stronger encryption, and that in turn boosted demand for the human skills of the expert codebreaker – people like the renowned chess champion Hugh Alexander at Bletchley Park and the lesser-known but brilliant Emily Anderson, who combined exceptional linguistic skill, focused problem solving, and musicological talent (her multi-volume translations of the letters of Mozart and Beethoven are still standard references). But there are very few such gifted people.
The introduction of electro-mechanical devices to encipher messages, such as the Second World War German Enigma machine, threatened to overwhelm the codebreakers. One effective response was to use machines to replace human effort, along the lines of the mathematician Alan Turing’s pioneering work on Enigma at Bletchley Park. By 1944, Colossus, the world’s first programmable electronic computer, was able to automate cryptanalysis against even harder systems. After his wartime experience, Turing was able to pose the now-famous question ‘Can a machine think?’ and offer the ‘Turing Test’, in which an interrogator would try to distinguish between the text responses of a computer and a human.
The fundamental shift towards the end of the twentieth century from analogue to digital technologies allowed information about the world – text, images, sound, location and much else – to be expressed as strings of numbers. And numbers can be cheaply stored, retrieved, searched and communicated at speed using the networks of the digital Internet. Problems that would be simply infeasible for humans to attempt – such as searching for patterns in large data sets – became tractable. After 9/11, demands increased for intelligence on individuals of interest – terrorists, dictators, criminals and hackers – to establish their location, identities, movements, internet usage, and finances. This coincided with the proliferation of mobile devices and social media providing some of the answers sought. But networks can be compromised and data denied, ransomed, stolen or corrupted by adversaries. The equivalent of a cyber arms race is under way.
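A toy sketch in Python can make the point concrete (the message and pattern below are invented for illustration): once information is digital it is simply numbers, and a machine can search a volume of data in moments that no human analyst could read in a lifetime.

```python
# Illustrative only: text expressed as numbers, then searched mechanically.
text = "Meeting moved to the safe house at 0600."
numbers = list(text.encode("utf-8"))   # the text as a string of numbers
print(numbers[:10])                    # e.g. [77, 101, 101, 116, ...]

# Searching tens of megabytes of such numbers for a pattern is trivial
# for a machine, and effectively instantaneous:
haystack = bytes(numbers) * 1_000_000  # roughly 40 MB of data
print(haystack.find(b"safe house"))    # index of the first match
```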
Comparable developments took place in photo interpretation, which moved from a craft skill – think of the sharp eyes of RAF photo-interpreter Constance Babington Smith spotting the German V1 flying bomb at Peenemünde in 1943 – to the mechanised scrutiny of high-resolution satellite images. Today, machine-learning algorithms can scan thousands of images a minute to detect changes and unusual activity. Such machines are not, in Turing’s term, ‘thinking’ like humans, but are using neural networks trained on annotated data sets to optimise performance. Whether recognising rocket launchers, handwriting or human faces, their speed and accuracy exceed those of any human expert.
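As a hedged illustration of what ‘trained on annotated data’ means in practice – the data and labels below are synthetic, and this single-layer model is far simpler than the deep networks used on real imagery – a classifier can be fitted to labelled examples by gradient descent:

```python
# A minimal sketch of supervised training: a single-layer classifier
# (logistic regression, the simplest neural network) learns to separate
# two classes of synthetic, flattened 8x8 "image" vectors.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "annotated" data: class 1 ("object present") has a
# brighter mean pixel value than class 0 ("object absent").
X0 = rng.normal(0.0, 1.0, size=(200, 64))
X1 = rng.normal(1.0, 1.0, size=(200, 64))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

# Optimise the weights by gradient descent on the mean logistic loss.
w, b = np.zeros(64), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)         # gradient w.r.t. weights
    grad_b = np.mean(p - y)                 # gradient w.r.t. bias
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

accuracy = np.mean(((X @ w + b) > 0) == y)
print(f"training accuracy: {accuracy:.2%}")
```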
The exploitation of digital technology by the State, including bulk data sets, social media and facial recognition, raises ethical concerns requiring legal regulation and oversight. The same will be true for the latest developments in AI with Large Language Models (such as ChatGPT and BARD) and AI Image Generators. These powerful tools are trained on vast data sets scraped from the internet and therefore capable of performances beyond any single individual, however gifted. They are astonishingly good at summarising information from the web, giving the analyst an easy way of keeping in touch with international media and academic writing, and even answering tricky questions that require expert knowledge. They can also very quickly generate coherent text on a given subject, but AI models are known to ‘hallucinate’ (make facts up). They make creation of deep fakes easier, and can express unpleasant biases (dependent on the training data), which make them natural agents of social media subversion. Hostile states have, therefore, considerably more opportunity to interfere covertly in democratic debates and processes, and undermine confidence in the ability to distinguish truth from falsehood. We should not be surprised. Any new technology brings with it risks and creates vulnerabilities, as well as offering opportunities.
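To take one concrete example of the summarisation capability described above – sketched here against the OpenAI chat API as it stood in 2023, with invented placeholders for the key and source text – a few lines of Python suffice, though the caveat about hallucination means the output still needs checking against the original:

```python
# A minimal sketch of LLM-assisted summarisation, using the legacy
# openai Python library (pre-1.0 interface, current as of 2023).
# The API key and article text below are placeholders, not real values.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: a valid key is available

article = "...open-source reporting to be condensed..."  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Summarise the following text in three bullet points."},
        {"role": "user", "content": article},
    ],
    temperature=0,  # a lower temperature favours fidelity over creativity
)

# LLMs can 'hallucinate', so the summary must be checked against the source.
print(response.choices[0].message.content)
```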
There are many lessons to be drawn from all these examples of technology interacting with the worlds of intelligence and security. We can see the speed with which each scientific advance has been turned into usable technologies for espionage and military reconnaissance. There has also been remarkable open-mindedness displayed in otherwise traditionally minded intelligence agencies in harnessing those technologies. That dynamic is unlikely to change. Sometimes it will be a new commercial innovation that the intelligence world adapts for its purposes (such as recent steps to derive intelligence from the use of social media). Sometimes the driver will be new technology, including quantum encryption, AI and LLMs. The only limit on their use lies in the human imagination, and intelligence communities have over the years shown that they have plenty of that. This piece was written without the assistance of ChatGPT, Bard or other LLMs – but it could have been – and if it had been, dear reader, how would you have been able to tell?