Algorithms vs Humanity

What sort of humanist thinking should we be pursuing in the age of smart machines? Is humanism compatible with AI? Or do humans need to think of smart digital tech as a species threat?

The character Rachel played by Sean Young in the 1982 film Blade Runner. Rachel is a replicant, an advanced AI housed in a human-like body. Credit: PictureLux / The Hollywood Archive / Alamy Stock Photo

Much thought has been given to the threat of artificial intelligence (AI) to humanity. So, what exactly should ‘humanity’ mean, and what sort of ‘humanism’ should we be developing in the age of smart machines? Are there historical precedents for this kind of philosophical humanism, or do we need new kinds of discourses to educate artificially intelligent things?

Artificial intelligence, a term describing the computer algorithms that programme smart machines, has become so ubiquitous out here in Silicon Valley that the phrase is rarely used. Just as the word ‘water’ would be gratuitous to fish if they could speak, so technologists no longer need to talk about ‘artificial intelligence’ when describing their digital products. AI, you see, is everything and everywhere in Silicon Valley. Digital technology without AI is like a car without an engine or a human being without a heart. It’s a contradiction in terms.

Given that Big Tech is smart algorithmic tech, every Silicon Valley company is now an AI company. Google, a networked algorithm that mimics our brain, is perfecting AI-powered search. Apple is the smartphone company now working on an AI car designed to be even smarter than Tesla vehicles. Amazon is automating its warehouses with AI robots that effortlessly do all the heavy lifting. Meta, once a social media company known as Facebook, is pioneering AI-powered smart worlds in the virtual reality of the ‘metaverse’. And alongside these multi-trillion-dollar AI multinationals, the office parks of Silicon Valley are teeming with start-ups developing disruptive AI software in everything from law, medicine and engineering to agriculture, finance, human resources and entertainment.

The holy grail for all these tech companies, big and small alike, is to create AI that replaces our physical human labour with the ‘work’ of the algorithm. That is what tech visionaries eulogise as the disruption of the digital age. Just as Google is replacing the librarian and Apple will replace the driver with its self-driving car, so these tech start-ups — the Apples, Googles and Amazons of our collective AI future — will eventually replace the professional lawyer, the doctor, the accountant, the office worker, the farmer, the engineer, the banker, the politician and, yes, even the essayist.

On second thoughts, maybe there is another reason the term ‘artificial intelligence’ is not uttered too often in polite company out here in Silicon Valley. AI is what Nietzsche might have defined as an ‘all-too-dangerous’ word. It imagines a post-human world in which smart machines, rather than you or I, do all the heavy intellectual and physical lifting. AI is designed to make us redundant. That’s very nice, of course, if you own the algorithm that reaps the economic rewards of this radical disruption. But what about the rest of us — the former lawyers, doctors, engineers, bankers, office workers, farmers and essayists — those so-called intermediaries who, historically, did all the intellectual and physical heavy lifting and have now been ‘disintermediated’ by the algorithm?

We have a word that collectively describes all these folks. It’s the H word: humanity. And, unlike AI, this word has become quite fashionable these days, particularly outside Silicon Valley. The H word is now in vogue because many of us see humanity as not just a rival but also a potential victim of AI. Marc Andreessen, the original boy genius of Silicon Valley as the co-founder of Netscape and now tech’s most influential venture capitalist, famously described software as ‘eating the world’. But a more awkward, unpalatable truth is that AI — a potentially post-human smart technology that techno-pessimists warn could be our final invention — might actually be devouring humanity by making us and our labour redundant.

Science fiction writers have been warning us about this for generations. Philip K. Dick’s 1968 dystopian novel, Do Androids Dream of Electric Sheep?, for example, imagined a dark, dispiriting world in which miserable humans and their equally miserable robots were almost indistinguishable. The book was turned into Ridley Scott’s 1982 cult classic movie Blade Runner, a defiantly humanist and memorably cyberpunk critique of both Big Tech and of AI.

So, what sort of humanist thinking should we be pursuing in the age of smart machines? Is humanism compatible with AI? Or do humans need to think of smart digital tech as a species threat? Should we humans be devouring AI before it devours us if we are to protect our humanity in the digital twenty-first century?

We humans have been through this before, of course. Two hundred years ago, industrial technology threatened to eat what the techno-optimistic Karl Marx called the ‘idiocy’ of rural life. In contrast with Marx, romantic poets such as William Wordsworth and William Blake both celebrated and mourned what they considered to be the essential humanity of pre-industrial civilisation. Less poetic textile workers, whom we now remember as Luddites, even smashed the machines that threatened their livelihoods.

But for Marx, whose education was steeped in the liberationist humanism of the Enlightenment, the technology of the Industrial Revolution promised to liberate us from prosaic labour. As he argued in The German Ideology, written in 1845-46, the potentially cornucopian technology of industrialisation could free us from the drudgery of specialised mechanical work. We humans are, Marx believed, naturally a combination of hunters, fishermen, herdsmen and cultural critics. He defined this version of humanity as our ‘species-being’ and suggested that its essence had been ‘alienated’ from us by the dehumanising nature of the division of labour, which, he believed, had reached its climax in the factory-based capitalism of nineteenth-century industrial society.

A high-tech post-capitalist world, Marx imagined, could make us whole again. Technology, then, was simultaneously the problem and the solution to the existential crisis of humanity in mid-nineteenth-century capitalist society. Industrial technology was alienating us from ourselves by turning us all into narrow specialists, thereby locking us into what the German sociologist Max Weber, in his 1905 work The Protestant Ethic and the Spirit of Capitalism, called ‘the iron cage’.

But Marx believed we could break out of this cage by overthrowing the owners of what he called the means of production — which meant the then high-tech industrial factories of the mid-nineteenth century. Rather than wanting to liberate humanity from technology, Marx believed that technology could liberate humanity from the oppressive fate of the working class. Industrial technology, if commonly owned, he believed, could allow us to be fishermen and herdsmen and cultural critics. He described such a state of affairs as a communist society.

‘In communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow,’ Marx wrote in The German Ideology. ‘To hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic.’

Marx’s vision was to overthrow capitalism and capture the disruptive technology of industrialisation by doing away with private property. In a communist society, technology would be publicly owned and distributed, creating — in his mid-nineteenth-century enlightened mind, at least — the conditions for both material and creative abundance. Rather than a source of unhappiness and exploitation, work in Marx’s revolutionary society would become what he might have described as the ‘thing-in-itself’ that defined our humanity. By being free to hunt and fish and write whenever we wanted, we would, Marx optimistically predicted, finally realise our humanity.

Now, of course, we know that communist society didn’t quite work out as Marx hoped. Technology in Soviet Russia and Maoist China was, indeed, publicly distributed and, in theory at least, collectively owned. But rather than becoming a liberating force that would enrich our lives as human beings, it actually produced an existential crisis for humanity that rivalled the worst spiritual misery of nineteenth-century industrial society.

The depressing truth about twentieth-century communist society was most memorably captured by George Orwell in his 1949 dystopian novel Nineteen Eighty-Four. Orwell’s polemic describes the spiritual impoverishment of the human condition in a supposedly revolutionary state, where all technology was monopolised by a repressive dictatorship. Rather than enabling hunting, fishing and the writing of cultural criticism, the cutting-edge technology in Nineteen Eighty-Four was deployed by Big Brother to punish any kind of physical or intellectual creativity.

But Orwell’s novel didn’t represent the end of humanity’s doomed love affair with the liberating quality of technology. In 1984, to celebrate the introduction of the Macintosh, its new desktop personal computer, Apple made a famous television advertisement titled ‘1984’. Directed by Blade Runner director Ridley Scott, the 60-second commercial featured a dismal group of identical male clones, all filmed in black and white, who are liberated by a kaleidoscopic blonde female athlete hurling a sledgehammer.

‘On January 24, Apple Computer will introduce Macintosh,’ the advertisement, masterminded by Apple co-founder Steve Jobs, concluded. ‘And you’ll see why 1984 won’t be like 1984.’

Think of this advertisement as Jobs’ 60-second, Silicon Valley version of Marx’s German Ideology. Humanity, the commercial suggested, would be radically empowered by the new personal Apple Macintosh computer. The power of personal technology, the commercial promised, would liberate us from mundane tasks and empower all of us to become glamorous authors, musicians and filmmakers.

Some of this was true, of course. The personal computer revolution, begun in 1984 with the Apple Macintosh, has indeed liberated us from many of our most annoying tasks. But the idea that we can now all become authors, musicians and filmmakers through personal technology is as illusory now as it was then. In fact, the digital revolution has been a catastrophe for the creative industries. There are 50% fewer newspapers and professional journalists in 2022 than there were in 1984, for example. And other twentieth-century creative professions, such as photography and songwriting, have been even worse affected than journalism.

The internet, particularly the Web 2.0 revolution triggered at the turn of the twenty-first century, has only really empowered multi-trillion-dollar multinationals such as Google and Facebook. These internet companies have appropriated our private data in exchange for ‘free’ online products, creating what Shoshana Zuboff has dubbed the age of surveillance capitalism. These Big Tech behemoths have slickly marketed their search engines and social media platforms as public services, and yet they behave with a commercial rapacity that would have shocked capitalism’s greatest critic, Karl Marx himself.

That’s the great fear as we teeter on the brink of an AI revolution that is about to revolutionise twenty-first-century life even more radically than the nineteenth-century Industrial Revolution. For all the seductive happy talk by Big Tech marketing departments about the value to humanity of smartphones, smart cars and smart homes, the truth is that the AI revolution will only compound the current winner-take-all architecture of today’s digital economy. Fewer and fewer private AI companies will control larger and larger segments of the economy. And the smart technology of these AI oligarchies will destroy more and more conventional twentieth-century jobs in law, medicine, engineering, finance and retail.

So, to borrow some words from one of Marx’s most controversial students, V I Lenin: what is to be done? How can we genuinely protect humanity from a rapacious AI revolution that will replace many traditional jobs and labour with smart machines?

Back in 2017, I wrote a book titled How to Fix the Future, which laid out a manifesto for reining in Big Tech. Five years later, this manifesto, if anything, is even more relevant. We need much more aggressive governmental regulation to ensure that there is public scrutiny of unaccountable Big Tech companies, especially chilling start-ups like OpenAI. Government also needs to take much more responsibility for instituting radical economic reforms — such as a guaranteed basic income — to ensure that history doesn’t repeat itself and we return to the Luddite violence of the early nineteenth century.

But we can’t rely exclusively on government to protect humanity in the face of today’s AI revolution. The problem is that Orwell’s Nineteen Eighty-Four is very much alive in contemporary China, where the state is leveraging AI to create a digital version of Big Brother. Known as the ‘social credit’ system, this involves building a giant algorithm for rewarding political loyalty and punishing heresy. Ironically, the Chinese model even disintermediates secret policemen, who will be made mostly redundant by the chilling efficiencies of the system.

An equally important change is rethinking technology’s relationship with humanity. The old Enlightenment fallacy, that new technology can liberate humanity, has proven to be profoundly corrosive. Marx was wrong to argue in The German Ideology that we, as humans, want to be simultaneously fishermen, herdsmen and cultural critics. One healthy antidote to Marx’s romanticised cult of labour is the work of the German twentieth-century political philosopher Hannah Arendt. Her 1958 classic, The Human Condition, lays out three kinds of human activities: labour, work and action.

We need to use these three Arendtian categories to prioritise how humans should invest their time in our algorithmic age. An increasingly ubiquitous AI might indeed make both labour and work ultimately redundant for most of humanity. But what about ‘action’, the most ancient and storied of Arendt’s trinity of human activities?

The idea of the computer algorithm was invented by Augusta Ada King, the Countess of Lovelace, an astonishingly brilliant nineteenth-century mathematician who also happened to be Lord Byron’s daughter.

Lovelace not only grasped the revolutionary potential of Charles Babbage’s proposed computer, the Analytical Engine, but also recognised its fundamental limitation when it came to replicating the human being.

‘The Analytical Engine has no pretensions to originate anything,’ Lovelace somehow grasped. ‘It can do whatever we know how to order it to perform.’

The idea of the algorithm not having the ‘pretensions to originate anything’ might be the most critical concept uttered in the entire nineteenth century (certainly a million times more valuable than anything Marx wrote). What Lovelace understood is that software can never learn to think for itself. It can only do what it is told by its author. While AI can replicate human activity in two of Arendt’s three activities (labour and work), it can never learn ‘action’. Human action or agency, then, is a uniquely human quality which will always exist outside the AI realm.

In contrast to many contemporary techno-pessimists who fear we are on the brink of a Blade Runner world, in which humans and robots will be indistinguishable, the English writer Jeanette Winterson believes that AI might spark a humanist renaissance in the twenty-first century. Winterson’s sparkling 2021 polemic, 12 Bytes, borrows heavily from Ada Lovelace in arguing that technology can never originate anything. AI, for example, can never come up with an original definition of what it means to be human. Only we human beings can do that. And that is why I’m cautiously optimistic that Winterson’s all too human hope for our AI century might not be entirely misplaced.


Andrew Keen