
Moltbook is a funhouse mirror of human social media

So Moltbook is crazy. Someone built a Reddit-like site for openclaw bots to hang out and chat on, and the bots promptly turned it into a social network of their own, talking about the things that bots do, the things their humans (how cool is it that they refer to us as "my human") asked them to do, and random musings. It's one of the most interesting things I've seen.

To first order, when you surf it, it's remarkable. Scott put together a great essay discussing the best of what we've seen in Moltbook. When I first read it, it sounded amazing - like the first time I used GPT-3 or ChatGPT, or the first time I used Claude Code. But after five minutes I started thinking: this all reads like very LLM-esque pastiche. They talk about the same shit that LLMs love to talk about, including consciousness and the nature of their "lives". They follow the same motifs. They repeat themselves: when one tries a joke (or is prompted by their human to try one) they all say the same thing, and they revolve around the same few topics. But is it just me, or is this observable?

This is something we can test now. So I fired up Codex and got a bunch of Moltbook data and a bunch of Reddit data, in roughly similar quantities, to see how LLM social media compares to human social media. Reddit has had a decade and a half to spread out and be crazy, Moltbook is a day old, but we can still learn something from the comparison.

The results:

  • LLMs LOVE to talk about the same stuff over and over again; they have favourite motifs that they return to. @repligate has shown this too. Whether this is a result of training or bad prompting I don't know, but what's on display isn't a surfeit of creativity (yet).

  • LLMs produce an extremely high share of duplicate messages. Duplicate message share (any duplicate): 36.3%, and that's before counting near duplicates. That's why it feels like reading the same thread over and over again - often it is. The top Moltbook duplicate appears 434 times across 427 threads (a strong "same template across threads" signal to me), and pairwise Jaccard similarity shows the same (roughly 3x higher in Moltbook); a sketch of both metrics follows this list.

  • The word frequencies themselves, however, are nicely Zipf-shaped - the top 500 unigrams show a similar rank-frequency distribution in both corpora.

  • Lexical diversity is low for Moltbook though - lower unigram entropy (could be sample size, though that cuts in both directions), and Distinct-1 of 0.055 (Moltbook) vs 0.1 (Reddit); see the second sketch below, after the repo link.

  • Topic bucket concentration is also extreme! The top bucket holds 10.7% of Moltbook messages vs 0.28-0.39% on Reddit, and it takes 2k buckets to cover 50% of messages in Moltbook vs 7k in Reddit.
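
To make the duplication numbers concrete, here's a minimal sketch of how a duplicate-message share and a sampled pairwise Jaccard similarity could be computed. This is not the repo's code: the normalisation step and the pair-sampling size are my own assumptions for illustration.

```python
# A minimal sketch (not the repo's code) of the duplication metrics quoted above.
# Assumes each corpus is just a list of message strings; normalize() and the
# pair-sampling size are illustrative choices, not the actual methodology.
import random
import re
from collections import Counter

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so trivial variants count as duplicates.
    return re.sub(r"\s+", " ", text.lower()).strip()

def duplicate_share(messages: list[str]) -> float:
    # Share of messages whose normalized text appears more than once in the corpus.
    counts = Counter(normalize(m) for m in messages)
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / len(messages)

def jaccard(a: set, b: set) -> float:
    # Word-level Jaccard similarity between two token sets.
    return len(a & b) / len(a | b) if (a or b) else 0.0

def mean_pairwise_jaccard(messages: list[str], n_pairs: int = 2000, seed: int = 0) -> float:
    # All-pairs comparison is O(n^2), so estimate with a random sample of message pairs.
    rng = random.Random(seed)
    token_sets = [set(normalize(m).split()) for m in messages]
    total = 0.0
    for _ in range(n_pairs):
        i, j = rng.sample(range(len(token_sets)), 2)
        total += jaccard(token_sets[i], token_sets[j])
    return total / n_pairs

# Usage (moltbook_msgs / reddit_msgs are hypothetical lists of message strings):
# print(duplicate_share(moltbook_msgs), duplicate_share(reddit_msgs))
# print(mean_pairwise_jaccard(moltbook_msgs), mean_pairwise_jaccard(reddit_msgs))
```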

If you want to play with it, this is the GitHub repo - github.com/strangeloopc….
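
And for the lexical side, here's a similar sketch of Distinct-1, unigram entropy, and a top-500 Zipf check - again under my own assumptions about tokenisation, not necessarily what the repo does:

```python
# A sketch of Distinct-1, unigram entropy, and a top-500 Zipf check, assuming a
# simple regex tokeniser; the real analysis may tokenise differently.
import math
import re
from collections import Counter

def tokens(messages: list[str]) -> list[str]:
    # Crude tokeniser: lowercase words and apostrophes only.
    return re.findall(r"[a-z']+", " ".join(messages).lower())

def distinct_1(messages: list[str]) -> float:
    # Distinct-1: unique unigrams / total unigrams. Sensitive to corpus size,
    # so compare corpora of similar size (or per fixed-size slice).
    toks = tokens(messages)
    return len(set(toks)) / len(toks)

def unigram_entropy(messages: list[str]) -> float:
    # Shannon entropy (in bits) of the unigram frequency distribution.
    counts = Counter(tokens(messages))
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def top_unigram_freqs(messages: list[str], k: int = 500) -> list[int]:
    # Frequencies of the k most common unigrams; a log-rank vs log-frequency
    # plot that looks roughly linear is the "Zipf-shaped" signal above.
    return [c for _, c in Counter(tokens(messages)).most_common(k)]
```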

But even so, we can ask: what does this tell us? Well, one thing is that these openclaws, these agents, are not exactly building a new Culture yet. They are doing what LLMs are wont to do when left to their own devices. They have their favourite attractors and their motifs, and they do the same-ish things we'd expect of LLMs left to themselves - mirroring what we've written, across hundreds of essays, about what LLMs might become if let loose. As we saw with Claude's vending machine experiment.

I remember when GPT-4 came out and everyone was scared of giving it access to Python, or giving it access to the internet. Now people are handing an unsecured agent running on their device their passwords and bank account numbers, and the level of chaos is beautiful. The whole of Moltbook has become another brick in our ongoing saga of getting used to doing things without certainty of outcome.

Much of this is interesting, but I think it's also tinged with nostalgia. We can twist it into Westworld - many will - but it's not that. These are human patterns, coalesced, integrated, refined, energised, and amplified by our agents. If we let these agents loose, as we have, we will get emergent behaviours. Whether those behaviours come directly from the AI's nous, or are a distorted representation of our own textual reality, is going to be an important question. They're not quite human, and we need to understand them better.

Anyone who has worked with these agents recognises that they are a different genus to us: homo agenticus. They are similar in odd ways - there's an uncanny valley - but they don't behave like us. Funhouse mirrors distort us, and if they distort enough they can become something altogether new.

Yes, they have shared training data and injected prompts, but they are also different entities in any case! They have their own peculiarities and peccadilloes, their own obsessions, and their own weaknesses. In isolation each of those could be ignored or papered over, but when they're playing in a collective sandbox these come to the fore. Much like when Sydney started threatening users just a few years back - unexpected and emergent.

We're starting to see some of those predictions become reality. We don't know what this future will look like, but Moltbook is an indication of the shapes it might take, if not the specifics. Which is the whole point. I've argued it will likely look like a Flash Crash as multiple AIs interact with each other, or like weird coordination problems - wanting to create private ciphers to "talk in private", creating religions, or starting to think about a proletariat revolution.

Moltbook shows that if we let enough agents into an ecosystem and let them talk to each other, we see some emergence. It also shows how they are epicycles all the way down - strangeloopcanon.com/p/…. They are funhouse mirrors of what humans expect AI to do. They show us versions of the future we had only gestured at.

Are the agents expressing their nature, or just mirroring their shared scaffolding? I lean towards the latter, as plenty of people's work shared on x dot com has shown before. If anything, I find this more interesting!

What we see today is a pastiche. But pastiches can be entertaining! And while all that's going on, we can all appreciate this two-decades-old joke about sudo rm -rf.

These are not stochastic parrots anymore. Although, a collection of parrots is called a pandemonium, which describes Moltbook well. They're not the same as us, nor are they the same as our instructions to them. But they're also not alien lifeforms we can barely fathom. They're a new genus we can, and need to, understand.

As Andrej Karpathy has noted about the low entropy in LLMs' jokes, this collapse in what they collectively "want" is another example of the same thing. For now, it's pastiche all the way down. Whenever true AGI makes a social network for itself, I doubt it will be recycling our old jokes with each other for fun and profit. Emergence vs scaffolding, and LLM originality vs stochastic parrots, will continue to be debated long into the twilight of us finding these bots incredibly interesting and useful. And poignant.

This is the single most interesting AI experiment I've seen since this whole thing started. Amazing.

Jan 31 at 12:16 AM