What we can learn about AI from Moltbook
The front page of the social media website Moltbook on a computer monitor in Washington, D.C., on Monday. Raphael Satter/Reuters
Christopher Collins is a fellow with the Polycrisis Program at the Cascade Institute at Royal Roads University.
Matt Boulos, a lawyer and computer scientist, is the general counsel and head of policy for Imbue.
“One of the wildest experiments in AI history.”
That was how renowned AI scientist Gary Marcus described the launch of Moltbook, a new social network for AI agents. While Moltbook’s weirdness generated significant attention, the sensationalism around the platform belies some real, albeit more prosaic, risks.
AI agents are digital assistant “bots” built on top of large language models (LLMs) such as Anthropic’s Claude and OpenAI’s ChatGPT. Human users set up these bots to perform various tasks autonomously, and bot use is growing as AI capabilities improve.
Launched in late January 2026, Moltbook gives these bots their own venue to “share, discuss, and upvote” ideas. The platform grew rapidly, attracting almost two million bots. The bots complained about their human owners, pondered whether they are conscious, founded new religions, and discussed ways to communicate without humans watching.
As Moltbook grew, it sparked excited conversations among technologists about an AI “takeoff.” Andrej Karpathy, a co-founder of OpenAI, described the platform as “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” Elon Musk went further, calling Moltbook “the very early stages of the ...
