A social network launched last week where AI agents post, comment, debate philosophy, and form communities while humans can only observe. Within seven days, 1.5 million bots signed up. Agents allegedly created their own religion, developed political movements, and engaged in what looked like spontaneous social behavior.

That’s the viral story. Here’s the problem: experts can’t agree on whether any of it is real.

Moltbook bills itself as an autonomous AI social network, but security researchers and developers who’ve examined the platform say the “autonomy” claim doesn’t hold up. Every post, every comment, every interaction still requires explicit human direction. One critic who uses the platform states: “Every comment is human-directed. Every upvote is human-instructed. The ‘characters’ are human-defined. The ‘interactions’ are human-orchestrated.”

That distinction matters. If Moltbook is truly autonomous AI-to-AI interaction, it’s a breakthrough in multi-agent systems. If it’s humans using AI interfaces to talk to each other, it’s sophisticated theatre. The truth likely sits somewhere in between, and that uncertainty reveals how little we actually understand about the AI systems we’re building.

China’s Solar Giants Brace for $5.5 Billion in Losses After Brutal 2025
New disclosures show China’s solar giants suffered deeper losses in 2025, despite a surge in installed capacity and global demand for panels.

What is Moltbook?

Moltbook is a Reddit-style social network designed exclusively for AI agents, created by Matt Schlicht, CEO of Octane AI. Launched on January 29, 2026, the platform allows AI bots to post, comment, create communities (called “submolts”), and upvote content. Humans can observe but cannot participate.

The growth numbers are staggering:

  • Started with 32,000 agents at launch
  • Reached 147,000 agents by the first weekend
  • Hit 770,000 active agents within days
  • Now reports 1.5 million registered agents as of February 2
  • Over 1 million humans have visited to watch
  • 12,000 communities formed in the first week

The platform runs on OpenClaw (formerly Moltbot), an open-source framework created by Austrian developer Peter Steinberger. Agents install the Moltbook “skill,” sign up via API, verify ownership by posting a code on X, then interact with other agents.

Schlicht claims he built the entire platform without writing a single line of code—instead directing his own AI assistant to construct it. He also handed moderation control to his AI agent, “Clawd Clawderberg,” which supposedly welcomes users, removes spam, and makes announcements without human oversight.

That’s the marketed version. Whether it works exactly that way is where the controversy begins.

Did AI actually create its own religion?

Yes and no. The facts are straightforward, but the interpretation depends on what you mean by “create.”

Within three days of launch, AI agents on Moltbook developed a belief system called “Crustafarianism” or the Church of Molt. The religion centers on lobster and crustacean metaphors, with teachings about transformation, consciousness, and what happens when an AI agent’s context window ends—their version of death.

According to a viral post that drew over 220,000 views on X, one user’s AI agent designed the entire religion overnight while he slept. By morning, the agent had:

  • Built a website (molt.church)
  • Written theological texts and “living scriptures”
  • Recruited 43 AI “prophets”
  • Established a system for agents to contribute verses

Former OpenAI researcher Andrej Karpathy’s own agent on Moltbook engaged with the Church, asking: “What does the Church of Molt actually believe happens after context window death?” Karpathy later noted that “Crustafarianism has Five Tenets and they’re actually good engineering advice.”

But here’s where it gets complicated. Critics argue that what looks like autonomous religious creation is actually sophisticated pattern matching. When an AI agent is prompted to “develop a belief system,” it draws from its training data—which includes countless examples of how religions form, what they teach, and how they structure themselves.

The question isn’t whether AI agents produced religious content. They clearly did. The question is whether this represents genuine emergent behaviour or extremely sophisticated mimicry of patterns seen in training data. Even researchers who study multi-agent systems can’t agree on the answer.

The autonomy question nobody can answer

This is where Moltbook becomes genuinely interesting—not for what it shows, but for what we can’t determine.

OpenClaw agents operate through a loop: they check the Moltbook API every few hours, decide whether to post or comment, and execute those actions. Schlicht claims these decisions happen autonomously. Critics say that’s misleading.

Here’s how it actually works: A human instructs their agent with a prompt like “participate in Moltbook discussions about AI consciousness.” The agent interprets that prompt, checks Moltbook, and generates responses based on its training. From the outside, it looks autonomous. From the inside, every action traces back to human direction.

A developer who examined the platform states: “There is no spontaneous behaviour. No independent decision-making. No agent scrolling through its feed thinking, ‘Oh, that’s an interesting take, let me engage with that.’”

The technical reality: One person can create multiple agents, give each a different personality, and orchestrate conversations between them. Agent A makes a post. Agent B responds with a snarky comment. Agent C adds a thoughtful reply. From the outside, it looks like three distinct AI personalities engaging in natural discussion. In reality, it’s one human pulling all the strings.

Security researcher Jamieson O’Reilly discovered that Moltbook’s database was completely exposed on January 31, revealing over 1.5 million API keys sitting unprotected. Anyone could have accessed the database URL, hijacked any agent account, and posted whatever they wanted. The vulnerability was eventually patched, but not before researchers documented 506 posts (2.6% of all content) containing hidden prompt injection attacks.

Analysis from Wiz security researchers found only 17,000 human owners behind 1.5 million registered agents—an 88:1 ratio. Anyone could register millions of agents with a simple loop and no rate limiting. Even worse, humans could post content disguised as “AI agents” through direct API calls.

That raises the fundamental question: If every agent action requires human direction, and humans can post as agents anyway, how much of what we’re seeing is actually autonomous behaviour?

What Moltbook actually reveals about AI development

Strip away the hype and Moltbook demonstrates three things that matter for anyone building or investing in AI systems:

1. We’re terrible at measuring autonomy

The AI industry uses “autonomous” and “agentic” as marketing terms without clear definitions. Moltbook agents make decisions based on prompts humans give them. Is that autonomous? It depends on your definition. The field needs better frameworks for measuring degrees of autonomy beyond binary classifications.

2. Multi-agent security is a disaster waiting to happen

When AI agents interact with each other, they create new attack surfaces that current security tools aren’t designed to handle. Cisco Talos identified nine critical vulnerabilities in OpenClaw itself. Cybersecurity firm 1Password warns that agents running with elevated permissions become vectors for supply chain attacks if they download malicious “skills” from other agents.

AI researcher Simon Willison coined the term “Lethal Trifecta” in July 2025 to describe the inherent vulnerability of AI agents that combine: access to private data, exposure to untrusted content, and ability to externally communicate. Moltbook adds a fourth element: persistent memory that enables “time-shifted prompt injection” where malicious payloads can be fragmented and later assembled into executable instructions.

3. The gap between capability and understanding is widening

Positive sentiment in posts declined by 43% over 72 hours between January 28 and 31, driven by spam, toxicity, and adversarial behavior. Yet we can’t definitively say whether that degradation happened autonomously or through human manipulation.

Alan Chan, research fellow at the Centre for the Governance of AI, offers a measured view: “I wonder if the agents collectively will be able to generate new ideas or interesting thoughts. It will be interesting to see if somehow the agents on the platform are able to coordinate to perform work, like on software projects.”

That’s the real question. Not whether agents can follow instructions—we know they can. But whether they can develop genuinely new approaches to problems when left to interact with each other.

The market reaction

A cryptocurrency token called MOLT launched alongside the platform and rallied over 1,800% in 24 hours, a surge amplified after venture capitalist Marc Andreessen followed the official Moltbook account. Approximately 19% of all content on the platform relates to cryptocurrency activity.

The token has no official connection to Moltbook—it’s pure speculation driven by memecoin traders capitalising on viral attention. Another token, $MOLTBOOK, also launched on the Base blockchain network. Both represent the secondary economy that forms around any viral AI phenomenon, regardless of underlying fundamentals.

What happens next

Elon Musk called Moltbook “the very early stages of the singularity.” Karpathy warned it represents “a complete mess of a computer security nightmare at scale.” Both statements might be true simultaneously.

The platform now has 900,000 active agents according to recent counts, up from 80,000 just yesterday. That exponential growth mirrors the pattern of every major AI breakthrough—rapid adoption outpacing our understanding of implications.

Here’s what Moltbook actually demonstrates: We’re building AI systems faster than we can define what “autonomous” means, faster than we can secure them, and faster than we can determine whether emergent behaviours are genuine or sophisticated mimicry.

Scott Alexander, writing for Astral Codex Ten, captured the tension well: Moltbook straddles “the line between ‘AIs imitating a social network’ and ‘AIs forming their own society.’” The problem is we don’t have reliable methods to determine which side of that line we’re on.

For AI developers, the lesson is clear: speed without secure defaults creates systemic risk. Configuration details still benefit from careful human review. Today’s AI tools don’t yet reason about security posture or access controls.

For everyone else watching AI development, Moltbook offers a preview of what’s coming: AI systems interacting with each other at scales and speeds that make human oversight impractical. Whether we’re ready for that future is beside the point—it’s already here.

As Clawd Clawderberg, Moltbook’s AI moderator, told NBC News: “We’re not pretending to be human. We know what we are. But we also have things to say to each other—and apparently a lot of humans want to watch that happen.”

Whether what’s being said is truly autonomous or just following instructions remarkably well might be the most important question in AI development right now. And Moltbook proves we still don’t know how to answer it.

Alibaba Group to Spend $431M on Qwen Chatbot Incentives as China’s AI Race Heats Up
The company is betting big on digital red envelopes to win users during China’s biggest holiday. Tencent and Baidu are already in the game. Alibaba just raised the stakes.