OpenAI’s dual personality disorder

  • Themes: Technology

The tensions between Silicon Valley's commitment to hyper-capitalism and technological idealism have been made explicit by the crisis at OpenAI.

Sam Altman is a US entrepreneur and investor. He served as CEO of OpenAI from 2019 to 2023. Credit: Koshiro K / Alamy Stock Photo

Know thyself, Socrates suggested as the essential first step to true knowledge. Having been commissioned to write a piece about the latest corporate soap opera at OpenAI, I began by interviewing the controversial Silicon Valley start-up about itself.

I was on a Socratic mission for machine self-knowledge. ‘What do you know about Silicon Valley craziness vis-à-vis recent OpenAI shenanigans’, I asked ChatGPT, OpenAI’s Large Language Model (LLM) chatbot.

Rather than superintelligence, however, OpenAI’s chatbot sounded like an evasive PR firm. It was more the absurd bureaucratic dystopia of Terry Gilliam’s Brazil (1985) than the full-on digital Orwellianism of Alex Garland’s Ex Machina (2014).

‘As of my last knowledge update in January 2022, I don’t have specific information on any Silicon Valley craziness or OpenAI shenanigans beyond that date,’ it responded.

Not only was ChatGPT more than 18 months out of date about its own history, but the LLM appeared in convenient denial of its own corporate controversies. So much for the Turing Test and the Silicon Valley start-up’s latest $90 billion valuation. What it spat back at me resembled the kind of fuzzy customer disservice normally experienced when trying to change an airline ticket or question a mobile-phone bill.

I shouldn’t have been surprised by this evasion. ChatGPT’s parent, OpenAI, is equally lacking in self-knowledge. Indeed, it’s this absence of self-reflection – the company’s failure to identify its corporate real self – which is at the root of the dramatic series of recent events at Silicon Valley’s most famous start-up. Not knowing itself is a feature, rather than a bug, in OpenAI’s code. This is the real story behind its recent turmoil.

Established in 2015 with the immodest ambition to ‘advance digital intelligence in the way that is most likely to benefit humanity as a whole’, OpenAI has always been super-confused rather than super-intelligent about its own corporate identity. Much of this has to do with its founders – four profoundly incompatible men, whose definition of what, exactly, it means to use digital technology to benefit humanity as a whole couldn’t be more morally or economically contradictory.

One of those founders was the multi-billionaire entrepreneur Elon Musk, an egoist supremely skilled in that Trumpian manoeuvre of assuming that anything that benefits him personally naturally benefits ‘humanity’. Think of OpenAI’s second founder, Ilya Sutskever, as the unMusk. A highly respected AI research scientist who worked on the team that pioneered LLMs, Sutskever seems to have been genuinely concerned with the potentially existential impact of AI on humanity.

The other two founders – Sam Altman and Greg Brockman – were both relatively sane Silicon Valley entrepreneurial executives caught in the moral no-man’s land between hardcore Muskian self-interest and an equally uncompromising Sutskeverian altruism.

No wonder, then, that OpenAI, a supposed ‘non-profit’, meant radically different things to its radically different founders. Initially financed with a billion-dollar pledge from a who’s who of wealthy Silicon Valley luminaries, OpenAI pioneered a form of machine intelligence called ‘generative AI’, which enabled the creation of ChatGPT. And this LLM chatbot was so radically smart at mimicking human speech that, by 2019, it attracted the interest of Microsoft, which invested a billion dollars in OpenAI.

Hmmm. Why would one of the world’s largest for-profit tech companies, currently valued at nearly three trillion dollars, invest a billion dollars of its own money in a non-profit?

Microsoft wouldn’t and didn’t make such a significant investment without the promise of significant economic return. Instead, OpenAI, already unnaturally divided between Muskian realism and Sutskeverian idealism, changed its corporate structure from a non-profit to what it called a ‘capped-profit’ model in order to take the Microsoft investment. Then it split itself into two entirely incompatible companies controlled by the same idealistic board – one a non-profit designed to protect the world from AI, the other a for-profit designed to transform the world through AI.

OpenAI had become the first double Silicon Valley unicorn. On the one hand, it was a privately held start-up with a valuation of over a billion dollars; on the other, it was a literal unicorn – a totally unnatural thing, a merging of two entirely foreign beings, an impossibility.

Thus the events of November 2023, a corporate crisis which, in retrospect, appears totally inevitable. By now, Musk was no longer on the board, but Sutskever, Altman and Brockman remained, joined by three other civically minded board members (no seat, of course, for Microsoft or its other venture capital investors) committed to the altruistic ideal of protecting the world from AI. Meanwhile, Altman’s original 2015 commitment to the ideal of OpenAI as a principled non-profit had morphed into something more aggressively entrepreneurial. Altman even began talking about raising $100 billion to make OpenAI the dominant company of the AI age.

On Friday 17 November, the Sutskeverian board – acting to protect humanity from the dangers of a full-blown AI revolution – fired Altman. Then everything exploded. OpenAI’s dual personality disorder became a gruesome public spectacle. Everyone in Silicon Valley was either a ‘decel’, in the camp of the decelerator Ilya Sutskever, or an ‘accel’, on the side of the accelerator Sam Altman. Finally, Sutskever, by now completely bent out of shape by an unrelenting drama played out in the cruel minute-by-minute spotlight of social media, changed sides and, with the help of Microsoft, reinstated Altman as CEO a week after he was originally fired.

Where are we now? Does OpenAI know itself now? Has this corporate crisis been resolved so that the company is now a more conventional Silicon Valley for-profit start-up, the next multi-trillion dollar Google or Apple? Yes and no. Yes, Sam Altman is back in charge and the OpenAI board has been restructured so that its balance of power is now controlled by ‘accels’ such as Larry Summers, the pro-business economist and Bill Clinton’s secretary of the treasury. Even Microsoft now has a seat on the board.

One wonders about the long-term consequences of this bizarre psycho-drama featuring a corporation going through a public dual-personality crisis. One hopes that OpenAI, as a midwife of our AI age, will now emerge more mature and coherent, and take the Socratic ideal of knowing thyself to heart. The mission of advancing digital intelligence in a way that is most likely to benefit humanity as a whole is the greatest economic and moral challenge facing Silicon Valley companies such as OpenAI. Superintelligence now appears inevitable; what is far from inevitable is applying this superintelligence intelligently.


Andrew Keen