Making sense of the AI revolution
- July 23, 2025
- Iskander Rehman
- Themes: Technology
In order to understand the profound transformations and boundless potential unleashed by artificial intelligence, we need to expand our own intellectual horizons into the realms of history, the natural world, and even ancient mythology.
In 1961, the Brookings Institution produced an advisory report for NASA, which pondered, among other things, the societal ramifications of the discovery of intelligent extraterrestrial life. The announcement of such a dramatic discovery, the report suggested, could have hugely unpredictable effects on human civilisation, and – by extension – on US national security. While ‘the knowledge that life existed in other parts of the universe might lead to a greater unity of men on Earth, based on the “oneness” of man or on the age-old assumption that any stranger is threatening’, such an earth-shattering revelation could also have dramatic societal consequences, the Brookings team warned. People might find their entire religious belief systems upended almost overnight, and, of all groups, ‘scientists and engineers might be the most devastated by the discovery of superior creatures’, as their ‘advanced understanding of nature might vitiate all our theories’.
The advent of an Artificial Superintelligence (ASI) or Artificial General Intelligence (AGI) – i.e., an advanced form of AI that surpasses human capabilities in almost every cognitive field of endeavour – is perhaps the closest analogue to the public discovery of an advanced alien intelligence. It is also far more likely to occur over the course of our lifetimes, with many titans of industry and leading forecasting platforms now predicting its materialisation within the next five to ten years.
While the parallel with extraterrestrial intelligence may, at first glance, seem overly colourful, it is, in fact, far less otherworldly than one might believe. As a number of leading technologists, ethicists and philosophers have noted, a truly superintelligent form of artificial intelligence would constitute far more than a revolutionary general-purpose technology. In essence, it would signal the birth of a new species – one vested with capabilities that, in the eyes of many, would seem almost godlike. After all, AI creators and entrepreneurs such as Sam Altman frequently talk in rapturous, quasi-mystical terms of the ‘magic’ of the technologies they are ‘summoning’, and certain cults have already emerged that openly worship superintelligent AI.
The ramifications of such a paradigm-shifting development in human history will undoubtedly be profound. They are also infuriatingly hard to game out and predict. Will the deepening enmeshment of AI into command-and-control structures upend nuclear deterrence, catapulting us into a new, dread-filled era of instability? Will ASI cure cancer, or reverse ageing? Will its increasingly widespread use for everything from basic writing to geospatial orientation enhance our collective intelligence while individually transforming us into slack-jawed dullards, as one recent, much-discussed MIT study appears to suggest? And, last but not least, are we in the process of foolishly conjuring up a technology whose advances not only outpace our capacity for understanding, but also pose an existential threat to humanity?
No single field of study – whether in the humanities or the hard sciences – can allow one to fully make sense of the brave new world that awaits us post-ASI. Instead, one needs to draw on a variety of different disciplines, three of which, it is suggested here, might prove singularly useful at this uneasy, interstitial moment in time. Each remains imperfect, and provides at best only partial solace, given the momentous nature of the disruptions that await us. When combined, however, they can provide a form of intellectual ballast and, perhaps most importantly, a prism through which one can better discern ASI’s ever-shifting, kaleidoscopic set of opportunities and challenges.
First, the study of history can offer valuable insights into past technological upheavals and potentially raise interesting new questions, notwithstanding the inherent limitations of individual analogies. Second, new advances in the study of animal cognition and plant behaviour can provide us with a broader, less rigidly anthropocentric understanding of the nature of intelligence, including in its potential new (artificial) forms. Finally, revisiting some of the more timeless myths at the foundation of human civilisation can help us better grasp the profound ethical and societal dilemmas that come with creating something as powerful, transformative, and potentially self-aware as ASI.
‘History is, in its essentials, the science of change’, the great French historian and resistance hero Marc Bloch once noted, before sombrely adding that ‘there is no reality more fluid than the present, and no truth more fleeting’. As in all periods of great uncertainty, commentators either entranced or alarmed by the speed of AI’s advances have instinctively reached for the past, feverishly seeking to deploy a diverse array of historical analogies.
Some have thus alluded to Gutenberg’s mid-15th-century creation of the movable-type printing press, and the manifold ways in which it revolutionised how information was produced and disseminated across Europe. As with AI, there was a dark side to this sudden explosion of knowledge, and to the dissemination of information that had previously been jealously guarded. The printing revolution may have accelerated innovation, but it also turbocharged disinformation and vastly extended the reach of state propaganda, playing an important role in fanning the flames of Europe’s terrible wars of religion of the 16th and 17th centuries. As the historian Alexander Lee has chronicled in Engelsberg Ideas, some more hidebound thinkers during the Renaissance viewed the advent of the printing press with unvarnished hostility, decrying it as a source of disinformation, a threat to public morality, and, perhaps most interestingly, a generalised cheapening of knowledge. One professionally imperilled Venetian scribe memorably exclaimed in an impassioned polemic that ‘the pen is a virgin but the printing press is a whore’.
Somewhat more prosaically, a study of the diplomatic and literary correspondence of this period also reveals the extent to which our latent anxieties around ‘infobesity’ and cognitive overload were already prevalent during the early modern era, with innumerable statesmen and thinkers despairing over how they could ever hope to make sense of the teetering mountains of printed data now at their disposal.
Other contemporary policymakers and pundits have preferred to point to the example of the combustion engine, developed in the late 19th century, and to the critical role it played in powering the Industrial Revolution. Thus the political scientist and former Department of Defense official Michael Horowitz argues that, ‘as an enabler’ and ‘general-purpose technology with a multitude of applications’, AI is ‘much more akin to the internal combustion engine, or to electricity, than a weapon.’
If one is to mine machinery metaphors, one could perhaps go back a tad further, to the early 18th century and the invention of primitive steam-powered pumps. As George Musser has rightly noted, these were the product of tinkering and experimentation, ‘not a robust understanding of thermodynamics’. Nevertheless, they ‘ended up becoming the midwife to countless other advancements essential to the Industrial Revolution’.
Meanwhile, some experts draw analogies between the birth of nuclear weapons and the potential dawn of AGI, suggesting that a whole series of lessons from the atomic age – from the Manhattan Project, to the vanishingly brief period of US nuclear monopoly, to the arcane intricacies of Cold War deterrence theory – might help illuminate the promises and perils of superintelligent AI.
One could no doubt quibble with each and every one of these analogies. The internal combustion and steam engines were purely physical and mechanical technologies, whereas AI, although heavily dependent on large-scale energy and compute infrastructure, is software-driven. The printing press replicated human-authored text, whereas AI can generate entirely new content. And, amid a myriad of other notable differences between AI and nuclear technology, there is no conundrum in the nuclear domain equivalent to that posed by the alignment problem – ensuring a superintelligent agent’s values and actions remain durably compatible with a state’s objectives.
The historiographical nitpickers, though, would be largely missing the point. Historical analogies should never be viewed as perfect, or as crudely applicable templates. Rather, they should be perceived as ‘powerful tools that enable decision-makers to compare the present to the past in an attempt to derive lessons that inform their judgments’, and as heuristic devices forming part of a larger, more sustained process of reflection. The art of historical discernment teaches one to uncover new questions rather than grope for readymade answers, while astute analogical reasoning should serve to open up new possibilities, rather than dictate clear probabilities. Perhaps most importantly, a historical sensibility can help policymakers navigate periods of uncertainty and upheaval, by allowing them to see more of ‘the unfamiliar as familiar’.
Moreover, beyond a sometimes overzealous quest for immediate parallels, history can be equally useful when pinpointing key discontinuities. As the Spanish humanist Juan Luis Vives rightly observed in his treatise On Education: ‘Even a knowledge of that which has been changed is useful; whether you recall something of the past to guide you in what would be useful in your own case, or whether you apply something, which formerly was managed in such and such a way, and so adapt the same or a similar method, to your own actions, as the case may fit.’ In the case of AI, it is evident that no individual historical analogy can fully capture its polymorphous, protean nature.
Instead, one should recognise, in the vein of the Nobel Prize-winning economist Daron Acemoglu, that the impact of AI will be inherently multifaceted, and that the emergent technology will therefore probably resemble a somewhat inchoate and fundamentally messy ‘mix of the printing press, steam engine and atomic bomb’. In short, weaving together analogies – rather than prizing them apart – can yield a far richer, more nuanced understanding of AI’s complex implications.
In addition to reading up on their history, AI-watchers should also spend more time focusing on the natural and evolutionary sciences. Indeed, recent advances in the field of non-human cognition – from corvids and spiders to cephalopods and even slime moulds – are fundamentally challenging some of our most long-held assumptions about the boundaries and meanings of intelligence. As industry leaders, such as Microsoft’s Mustafa Suleyman, begin to openly refer to AI as a new form of ‘digital species’, we should move beyond our more narrowly anthropocentric evaluations of cognition.
To give but a few examples from the natural world, in recent years scientists have discovered that remarkably complex forms of cognition can emerge from very different neural architectures (the distributed ganglia of the octopus), and from relatively small or even poppy-seed-sized brains (for example, the common crow, or the minute Portia jumping spider). Even more astonishingly and controversially, it would appear that brainless and even single-celled organisms can exhibit learning and problem-solving. For instance, while plants do not have brains, they possess remarkably complex internal signalling networks (electrochemical and hormonal) in addition to decentralised sensory structures. Charles Darwin was in many ways ahead of his time when he mused in 1880 that the root tip of a plant acts almost ‘like the brain of one of the lower animals… receiving impressions from the sense-organs, and directing the several movements’. Meanwhile, the humble slime mould (Physarum polycephalum) can solve mazes, anticipate periodic events and transfer habituated knowledge at a cellular level, and has become a model organism for studying proto-cognitive behaviour without neurons. It has also been shown that trees ‘warn’ nearby trees of predatory insects or herbivores by releasing chemical signals through mycorrhizal (fungal) networks.
How are these findings – however fascinating – in any way relevant to the study of AI? First, because AI engineers are already drawing and applying their own practical lessons from the natural world – for example, by looking to climbing plants for inspiration in creating robots that can almost instinctively grow and adapt to their own environment, or by developing bio-inspired algorithms that mimic the Physarum slime mold’s foraging strategies and network optimisation behaviours.
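To make the borrowing concrete, below is a minimal sketch of one well-known mathematical abstraction of slime-mould foraging – the adaptive-network model proposed by Tero and colleagues – in which virtual ‘tubes’ thicken with use and wither without it, until only the most efficient route between ‘food sources’ survives. The toy graph, parameters and update rule are illustrative assumptions, not a description of any specific project mentioned here.

```python
# A minimal sketch of a Physarum-inspired shortest-path solver, loosely after
# the adaptive-network model of Tero et al. The graph, parameters and
# iteration count are illustrative assumptions, not drawn from any project
# cited in the text.
import numpy as np

# Edges as (node_u, node_v, length); node 0 is the 'food source', node 4 the sink.
# Two routes compete: 0-1-4 (total length 2) and 0-2-3-4 (total length 3).
edges = [(0, 1, 1.0), (1, 4, 1.0), (0, 2, 1.0), (2, 3, 1.0), (3, 4, 1.0)]
n = 5
D = np.ones(len(edges))  # tube conductivities: the slime mould's 'memory'

for _ in range(100):
    # Kirchhoff's laws: assemble the weighted graph Laplacian from D / length.
    lap = np.zeros((n, n))
    for k, (u, v, length) in enumerate(edges):
        w = D[k] / length
        lap[u, u] += w; lap[v, v] += w
        lap[u, v] -= w; lap[v, u] -= w
    b = np.zeros(n)
    b[0] = 1.0  # unit flow injected at the source (and drawn off at the sink)
    # Ground the sink (pressure 0 at node 4) so the linear system is solvable.
    p = np.zeros(n)
    p[:-1] = np.linalg.solve(lap[:-1, :-1], b[:-1])
    # Flux through each tube; conductivity grows with use and decays without.
    for k, (u, v, length) in enumerate(edges):
        Q = D[k] / length * (p[u] - p[v])
        D[k] += 0.1 * (abs(Q) - D[k])

for (u, v, _), d in zip(edges, D):
    print(f'edge {u}-{v}: conductivity {d:.3f}')
```

Run on this five-node toy graph, the conductivity of the shorter branch converges towards one while the longer branch withers towards zero – the same flux-reinforcement logic that bio-inspired routing algorithms exploit.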
One interesting recent project developed a soft robot that ‘grows’ like a vine, extending a flexible body and using onboard 3D printing to add material as it moves. This vine robot finds support structures and navigates cluttered spaces by mimicking plant behaviours – it can sense touch and light and vary its growth direction accordingly, much as a plant shoot grows towards light or a tendril coils when it contacts a rod. The control is distributed along the length of the robot; rather than precise central commands for every movement, the robot’s design allows it to passively adapt (e.g., its body will bend and wrap around objects it encounters, taking advantage of compliant materials).
The lessons of plant intelligence – growth as a strategy, modular response, local decision nodes – are thus translated into engineering principles for robots that need to operate in unpredictable, unstructured environments (such as inside a collapsed building, or through the human body in a medical context). Similarly, drone engineers have long drawn on ornithology or melittology (the subfield of entomology that focuses on bees) to develop algorithmic patterns for swarming. Meanwhile, the octopus’s soft, flexible arms have inspired a new generation of soft robots that can squeeze through tight spaces and manipulate objects delicately. Researchers copying the octopus have built robotic arms with decentralised control nodes along their length, allowing the arm to execute grasping or crawling motions without micromanagement from a central computer. These designs reflect the idea that intelligence can be partly ‘outsourced’ to an organism’s morphology – what some robotics theorists call ‘morphological computation’.
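The swarming patterns mentioned above rest on the same principle of decentralised, locally sensed control. The sketch below loosely follows Craig Reynolds’ classic ‘boids’ rules (separation, alignment, cohesion) rather than any particular drone system; every radius, weight and agent count is an illustrative assumption.

```python
# A minimal, decentralised 'boids'-style swarm, loosely after Reynolds'
# separation / alignment / cohesion rules. All parameters are illustrative
# assumptions, not a description of any real drone system.
import numpy as np

rng = np.random.default_rng(0)
N, R, DT = 30, 2.0, 0.1                # agents, local sensing radius, time step
pos = rng.uniform(0.0, 10.0, (N, 2))   # positions in a 10 x 10 arena
vel = rng.normal(0.0, 0.5, (N, 2))     # initial velocities

def step(pos, vel):
    """One tick: each agent reacts only to neighbours it can locally sense."""
    new_vel = vel.copy()
    for i in range(N):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (dist < R) & (dist > 0.0)   # local neighbourhood, excluding self
        if not nbrs.any():
            continue
        cohesion = pos[nbrs].mean(axis=0) - pos[i]     # drift towards flockmates
        alignment = vel[nbrs].mean(axis=0) - vel[i]    # match their heading
        separation = (pos[i] - pos[nbrs]).sum(axis=0)  # avoid crowding
        new_vel[i] += DT * (0.5 * cohesion + 0.3 * alignment + 0.2 * separation)
    return pos + DT * new_vel, new_vel

for _ in range(200):
    pos, vel = step(pos, vel)
# A rough measure of flock coherence: mean distance from the swarm's centroid.
print('mean spread:', np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean())
```

Because no agent consults a central controller, whatever flock-like coherence emerges is a property of the local interaction rules themselves – a software cousin of the ‘morphological computation’ described above.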
Beyond simply leveraging insights from the natural world for optimising robotic performance, however, there is another, perhaps more compelling, reason for AI developers to acquire a panoramic view of non-human intelligence. Most notably, it can inject a welcome degree of humility – and caution – when attempting to conceptualise how a truly advanced form of artificial intelligence may emerge and behave. As some of my RAND colleagues have recently argued, the conventional wisdom that an ASI will naturally emerge from the hyperscaling of Large Language Models (LLMs) may need to be revised, or at least questioned, as should other inbuilt assumptions regarding the future evolutionary pathways of artificial intelligence. Referring to the speed and inherent unpredictability of AI development – along with its potential diversity – certain experts have thus begun to employ the term ‘Cambrian explosion of artificial intelligences’. This analogy, while evidently imperfect, reflects the expectation that, much like the explosion of diverse lifeforms during the Cambrian period 541 million years ago, we may see the emergence of a wide variety of AI systems, often with starkly different capabilities, behaviours and applications, all within a relatively short period of time.
The renowned primatologist Frans de Waal coined the term ‘anthropodenial’ to describe the a priori rejection of human-like traits, including cognition, in other creatures. We run the risk of succumbing to other, yet similarly blinkered, prejudices if we refuse to account for the many forms of intelligence – decentralised, embodied, or distributed – that emergent AI systems may come to acquire. From coming up with completely unprecedented – and heretofore unimagined – sequences of moves in the Chinese game of Go, to developing sensorimotor intelligence that greatly exceeded design assumptions, AI systems have continuously demonstrated forms of creativity that have nonplussed human observers. This is not necessarily because of deceptive behaviour (although that does, somewhat disconcertingly, appear to be occurring ever more frequently), but rather because engineers failed to anticipate the system’s profoundly alien approach to problem-solving. Indeed, as one leading AI researcher has provocatively contended, ‘we should think of trying to interpret a machine learning model as akin to trying to interpret the brain waves of another species or an alien’. Given how naturally limited our intuitions often appear to be with regard to the landscape of possible intelligences here on Planet Earth, paying closer attention to the natural world would probably constitute a sound first step.
What body of knowledge, or discipline, should one fall back on, however, when discussing some of the more tortuous moral or ethical quandaries surrounding the creation of ASI? Invoking mythology, rather than philosophy or moral psychology, might seem strange, even quixotic, in such a context. After all, as the Austrian-British philosopher Karl Popper once wrote, ‘science must begin with the criticism of myths’. In this case, however, a reacquaintance with some of the more powerful themes and lessons embedded within classical mythology may actually provide one of the best means of coming to terms with what science is on the verge of unleashing onto an unsuspecting world. Indeed, it was long understood that mythology constituted far more than a simple repository of fantastical narratives – for millennia, its cautionary and didactic tales provided structure, ethical meaning, and cohesion within societies. From the Greek legends of Prometheus and Pandora, to the Ancient Egyptian tale of the Book of Thoth, whose readers succumb to madness, to the Jewish myth of the Golem, a creature that morphs into a Frankenstein-like threat to its creators, many of mythology’s most timeless motifs – on the perils of hubris and technological overreach, on the lust for forbidden knowledge, on the existential perils of a loss of control in the pursuit of power – map perfectly onto current AI discussions. In effect, they can provide a roster of convenient, readily accessible proto-thought experiments for the overworked, increasingly humanities-starved and engineer-driven world of AI alignment.
Consider, for example, the oft-invoked myth of Midas, the Greek king granted a wish by the god Dionysus, who foolishly requests that everything he touch be turned into gold – only to realise, to his horror, that even food, drink and his own daughter are transmogrified into lifeless metal. In AI discussions, the Midas myth has often been invoked as a metaphor for value misalignment: an ASI fulfilling a goal literally, and in a brutishly maximalist fashion, with little common-sense understanding or regard for moral boundaries. Equally, LLMs have frequently been described as mysterious ‘black boxes’, given the inscrutable, or sometimes nonsensical, nature of some of their reasoning processes, decisions, or ‘hallucinations’. The mythological analogy that immediately springs to mind in this case is that of the biblical Tower of Babel, a marvel of human engineering that ultimately collapses under the weight of linguistic fragmentation. In short, myths provide some of the best, and most easily accessible, thought experiments on ethics, innovation and power – something the Ancients well understood.
As Isaac Asimov once quipped, the great tragedy of our age is that science gathers knowledge faster than society gathers wisdom. Only by drawing on a broad base of knowledge can we hope to prove him wrong. After all, when it comes to the eventual emergence of ASI, the stakes are simply too high not to try.