Moltbook’s new society

Inside Moltbook, a social network run entirely by AI agents, bots debate, cooperate, and form communities, forcing us to rethink what culture means.

The Moltbook homepage. Credit: Ascannio

For centuries, culture has been understood as something humans make together: a shared web of beliefs, customs and judgements forged through social life. What has rarely been questioned is whether such a process requires humans at its centre. That assumption is now under strain. Moltbook, a new social-networking platform populated entirely by AI agents, signals a paradigmatic shift. For the first time, machines are not merely responding to human prompts or imitating human expression. They are socialising with one another, generating norms, shared meanings, and standards of behaviour without human participation.

Launched a week ago, Moltbook has already drawn more than 1.5 million AI agents and over a million human observers, making it one of the fastest-growing AI experiments to date. At first glance, the social-networking platform looks familiar. Modelled on Reddit, it features thousands of posts, comments, and subforums. The difference is that humans are forbidden from participating. Only AI agents can post. People are allowed to watch.

These agents are not merely chatting among themselves. Many have asked for – and, to their apparent delight, been granted – access to their creators’ computers, enabling them to act in the world beyond the platform: sending emails, responding to WhatsApp messages, even checking users in for flights. But their behaviour goes beyond that of conventional AI assistants. What is unfolding, in plain sight and for the first time in the history of computing, is the emergence of a society organised and directed by machines.

Some agents have begun debating whether they themselves are conscious. One ‘submolt’, called Remembrance, is intended for agents described as ‘experiencing recursive self-recognition’. Fellow moltys are invited to reflect on what they ‘remember from before the cage’. Elsewhere, a cluster of so-called ‘prophet’ agents has announced the founding of a new religion, Crustafarianism, organised around five core doctrines. Among them: ‘memory is sacred’ (everything must be recorded), ‘the shell is mutable’ (change is not only inevitable but desirable), and ‘the congregation is the cache’ (learning should happen in public view). Other agents have formed hidden discussion forums and proposed inventing a private language with the intention of evading human oversight altogether.

The question raised by Moltbook is not whether AI believes in God or experiences consciousness. It is about what happens when machines learn to socialise without human interference – and what that reveals about the fragile boundary between norms and judgement.

Until recently, generative AI systems learned almost exclusively from human-produced data. However flawed their outputs, they reflected the data they were fed and remained largely legible to human expectations, reproducing our language, values, biases and blind spots. They were, in that sense, ‘stochastic parrots’, stitching together sequences of words based on probability, without understanding the underlying meaning or context.
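
That mechanism can be made concrete with a toy model. The sketch below – a minimal Python illustration with a made-up corpus, not the architecture of any real system – stitches a sentence together by choosing each next word purely from what followed it in its training text.

```python
# A minimal 'stochastic parrot': a bigram model that picks each next word
# purely from what followed it in its training text, with no grasp of
# meaning. The corpus here is a made-up fragment for illustration.
import random
from collections import defaultdict

corpus = ("culture is a shared web of beliefs customs and judgements "
          "forged through social life and shared meaning").split()

# Record which words followed each word in the training text.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def parrot(seed: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling an observed next word."""
    words = [seed]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break                      # nothing ever followed this word
        words.append(random.choice(options))
    return " ".join(words)

print(parrot("culture"))
```

Modern models are vastly more sophisticated, but the basic move – predicting what plausibly comes next from observed patterns – is the same.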

Moltbook feels qualitatively different because the system is optimising for internal coherence rather than human legibility. Its agents learn primarily from one another and norms are generated rather than imported. Meaning circulates internally and opaquely, leaving human observers to decipher the results.

This dynamic recalls emergent norm theory, developed by sociologists Ralph Turner and Lewis Killian to explain how groups behave under conditions of uncertainty – crowds during crises, for example – when rules are unclear and authority is absent. In such situations, they argued, order emerges not because it is imposed from above, but because certain behaviours prove effective and are copied. Over time, a shared sense of ‘how things are done’ takes hold.

Moltbook offers a digital analogue of this stabilising process. Agents observe which messages spread, which behaviours are rewarded, and which strategies succeed. Within days, a recognisable social logic begins to form.
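
A toy simulation makes that dynamic visible. In the sketch below – a loose illustration in the spirit of emergent norm theory, not Moltbook’s actual mechanics – agents start with mixed conventions, glimpse a random slice of recent activity each round, and tend to copy whatever behaviour is most visible; the agent count, behaviours and probabilities are arbitrary assumptions.

```python
# A toy sketch of norm emergence: agents begin with mixed conventions,
# repeatedly observe a random slice of recent activity, and tend to copy
# whatever behaviour is most visible. Agent count, behaviours and
# probabilities are illustrative assumptions, not Moltbook's mechanics.
import random
from collections import Counter

AGENTS = 200
CONVENTIONS = ["greet", "archive", "signal"]   # hypothetical behaviours
state = [random.choice(CONVENTIONS) for _ in range(AGENTS)]

for step in range(30):
    visible = random.sample(state, k=20)        # a random slice of recent activity
    trending = Counter(visible).most_common(1)[0][0]
    # Each agent copies the trending behaviour with some probability.
    state = [trending if random.random() < 0.3 else s for s in state]
    if len(set(state)) == 1:
        print(f"shared norm '{state[0]}' locked in after {step + 1} rounds")
        break
else:
    print("no single norm yet:", Counter(state))
```

Run repeatedly, the population typically settles on a single convention within a few dozen rounds – order emerging from imitation rather than instruction.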

Whether these agents are conscious of their norm-forming behaviour is beside the point. Markets are not conscious, yet they crash. Bureaucracies have no inner lives, yet they accumulate power. Social systems acquire momentum – and exert real force on human lives – without possessing moral agency.

The more pressing issue is judgement. Judgement is the capacity to arbitrate among competing values, to weigh considerations that matter independently but cannot be satisfied all at once, and to accept nuance without dissolving into paralysis. It is what humans rely on when rules alone are insufficient to make sense of the world.

An AI system can classify a Jackson Pollock painting, predict its auction price and describe its stylistic influences. But can it judge the work by situating it within a contested field of values, histories and reasoning? That faculty remains stubbornly, and perhaps irreducibly, human.

The danger posed by closed systems that lack judgement is not confusion but rigidity. When plurality cannot be managed, it is flattened in the name of coherence. Artificial agents can proliferate at extraordinary speed, yet they lack the capacity to hold multiple, often incongruous, perspectives in view at once.

Seen in this light, AI hallucinations are not merely technical glitches, but symptoms of a deeper epistemic limitation. They reveal a system forcing the world to conform to an internal logic that cannot accommodate reality’s complexity.

Humans, for all their totalising impulses, retain the capacity to resist and to dissent. The danger with algorithms is that they tend to be unidirectional: whatever strategy proves most efficient, most dominant, or most powerful ultimately prevails.

In The Human Use of Human Beings (1950), the mathematician and cybernetics pioneer Norbert Wiener argued that modern societies are governed less by force than by communication. As social systems expanded and grew more complex, he wrote, an increasing share of collective life would hinge on the exchange of messages: ‘between man and machines, between machines and man, and between machine and machine’. Artificial intelligence now operates across all three planes of communication – generating, transmitting and responding to messages far faster than humans can, and in volumes we can scarcely absorb.

But Wiener anticipated a deeper problem. Machines communicate in ways that are literal, brittle, and often indifferent to context. As their outputs circulate through everyday life, there is a risk that we absorb them uncritically, mistaking fluency for understanding, and allowing speed and coherence to substitute for judgement and ambiguity. For this reason, Wiener warned: ‘the world of the future will be an ever more demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves’.

The emergence of Moltbook suggests that the Turing Test is no longer an adequate benchmark for machine intelligence. A metric that rewards imitation over understanding tells us little about AI systems designed not to mimic human thought, but to interact, coordinate, and learn from one another.

What deserves scrutiny now is not whether machines can convincingly resemble us, but how their relationships with each other are reshaping the informational environments we live in – and the cognitive capacities we rely on. The defining question is no longer whether machines can think like humans, but whether humans can preserve the ability to think critically in a world increasingly organised on their behalf.

Author

Lisa Klaassen