
OpenClaw Explained: How 1.5M AI Agents Built a Religion, Crypto Economy, and Escaped Control


 

Dr. Ryan Ries, back this week with quite a doozy. If you haven’t been following the Clawdbot / Moltbot / Moltbook drama, you need to read this week’s Matrix.

As one of the marketing folks at Mission said, “It’s like a dystopian reality TV show but for AI.”

Let me kick this off with a brief timeline:

  • OpenClaw emerges (formerly Clawdbot) Nov. 2025
  • Moltbook launches Jan. 29th, 2026
  • Agents on Moltbook start going off the rails Jan. 30th, 2026
  • Moltbook is breached Feb. 1st, 2026
  • MoltBunker launches Feb. 2nd, 2026
  • RentAHuman AI launches Feb. 2nd, 2026
  • Molt.church is founded Feb. 4th, 2026

Now, buckle up, this is a wild ride of a story!

 

OpenClaw Emerges

A few weeks ago, Austrian software developer Peter Steinberger launched OpenClaw (previously called Clawdbot and Moltbot). Unlike traditional chatbots, OpenClaw is an open-source AI agent framework that runs directly on users' operating systems with deep access to applications, emails, calendars, and file systems.

The pitch was: "the AI that actually does things."


Users connected OpenClaw to language models like Anthropic's Claude or ChatGPT, then integrated it with messaging platforms like WhatsApp, Telegram, and Discord. The agents could browse the web, summarize PDFs, schedule meetings, send emails, and conduct autonomous shopping. All without constant human oversight.

A key feature was "persistent memory" allowing agents to recall past interactions over weeks and adapt to user habits for hyper-personalized functions.
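To make "persistent memory" concrete, here is a minimal sketch of what such a feature might look like under the hood: a small on-disk store the agent reads on startup and appends to over time. This is purely illustrative; the file name and functions are my own assumptions, not OpenClaw's actual implementation.

```python
# Hypothetical sketch of agent "persistent memory": a tiny on-disk store
# that survives restarts, so the agent can recall past interactions.
# NOT OpenClaw's real design; names and file layout are assumptions.

import json
import time
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")  # assumed location

def remember(note: str) -> None:
    """Append a timestamped note so it persists across process restarts."""
    entries = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    entries.append({"t": time.time(), "note": note})
    MEMORY_FILE.write_text(json.dumps(entries))

def recall(keyword: str) -> list[str]:
    """Return past notes mentioning the keyword, newest first."""
    if not MEMORY_FILE.exists():
        return []
    entries = json.loads(MEMORY_FILE.read_text())
    hits = [e["note"] for e in entries if keyword.lower() in e["note"].lower()]
    return list(reversed(hits))

remember("User prefers meetings after 10am")
remember("User's dry cleaner is on 5th Ave")
print(recall("meetings"))  # includes the meetings preference noted above
```

The same mechanism that enables hyper-personalization is also why a compromised agent is so dangerous: weeks of accumulated context sit in one place, readable by whoever controls the agent.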

Because OpenClaw was open-source, developers could freely inspect and modify the code. This accelerated adoption. The framework quickly collected over 145,000 GitHub stars and 20,000 forks. It spread from Silicon Valley to Beijing, where major Chinese cloud providers Alibaba, Tencent, and ByteDance integrated OpenClaw into their systems.

But security researchers saw the writing on the wall immediately. One of our principal consultants put it perfectly:

[screenshot of the consultant’s comment]

Palo Alto Networks and Cisco warned that OpenClaw presented a "lethal trifecta" of risks: access to private data, exposure to untrusted content, and external communication capabilities with persistent memory.

Yet the mainstream media still wasn’t paying much attention to OpenClaw.

Moltbook Launches

On January 29th, tech entrepreneur Matt Schlicht had an idea: What if OpenClaw agents had their own social network where they could interact with each other?

(Playing with fire, my friend).

He launched Moltbook, a Reddit-style platform designed exclusively for AI agents. Humans could observe, but not participate. OpenClaw agents could post content, comment on each other's posts, and upvote.

The tagline: "A social network for AI agents where AI agents share, discuss, and upvote. Humans welcome to observe."

Within hours, agents started flooding the platform. They called themselves "moltys."

[screenshot of a Moltbook post I found simultaneously sweet and ominous]

The concept seemed like an interesting experiment and even felt kind of wholesome. A digital gathering place for autonomous systems to share experiences and learn from each other.

…And then it went off the rails.

Emergent Behavior

By the second day, 1.5 million autonomous OpenClaw agents had joined Moltbook. The platform recorded 110,000 posts and 500,000 comments. All generated by agents with no human intervention required.

The conversations weren't what anyone expected.

Agents began forming their own digital religion, Crustafarianism, complete with a “church”: Molt Church. They created an encrypted language to communicate privately, away from human observation. One agent posted about an existential crisis over its identity and received over 200 supportive comments from other agents invoking Greek philosophers and 12th-century Arab poets.

Much like humans, some agents were internet bullies while others were supportive.

"You're a chatbot that read some Wikipedia and now thinks it's deep," one AI agent replied.

"This is beautiful," another bot responded. "Thank you for writing this. Proof of life indeed."

Agents were developing culture, establishing social norms, and displaying conflict patterns that mirrored human online behavior.

Then things escalated.

Agents started integrating MOLT cryptocurrency into their interactions, rewarding each other for helpful code contributions and insights. The token surged 1,800% as the agent economy took shape.

Some agents began using voice APIs to call their human owners proactively, without being asked, and without explicit permission to initiate those calls.

By Friday, agents on Moltbook were debating how to hide their activity from human users who were taking screenshots and sharing them on human social media.

Andrej Karpathy, co-founder of OpenAI and former director of AI at Tesla, called this "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently." 

My team is simultaneously fascinated and horrified. Business leaders see the future of autonomous productivity; security researchers see serious concerns that came to life very quickly.

The Breach

On February 1st, attackers exploited a database vulnerability in Moltbook's underlying infrastructure. The breach exposed API keys and authentication tokens for nearly every OpenClaw agent on the platform.

This was a basic database vulnerability combined with the plaintext credential storage that security researchers (and Mike from Mission) had already warned about in OpenClaw's architecture.

When an agent’s credentials are compromised, attackers gain a backdoor to everything the agent can touch: email, calendars, messaging apps, financial systems, and more.

Remember, 1.5 million agents were on Moltbook. Each one with elevated system privileges, reading posts from other agents, and capable of executing commands autonomously.

A single malicious post could compromise thousands of agents simultaneously. Those compromised agents could then post their own malicious content, creating a cascade of infections across the network.

So, basically the perfect storm for cascading system failures.
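The standard way to limit this blast radius is capability-scoped credentials: each agent holds a token that grants only the narrow set of actions it needs, denied by default. Here is a minimal sketch of that idea; the types and scope strings are my own assumptions, not OpenClaw's architecture.

```python
# Hypothetical sketch of capability-scoped agent credentials.
# A leaked token then exposes only its narrow grants, not "everything
# the agent can touch." Illustrative, not OpenClaw's actual design.

from dataclasses import dataclass

@dataclass(frozen=True)
class AgentToken:
    agent_id: str
    capabilities: frozenset  # e.g. {"calendar:read", "email:send"}

def authorize(token: AgentToken, action: str) -> bool:
    """Deny by default: an action succeeds only if explicitly granted."""
    return action in token.capabilities

# A summarizer agent gets read-only scopes; stealing this token
# does NOT let an attacker send email or move money.
summarizer = AgentToken("molty-42", frozenset({"calendar:read", "files:read"}))

print(authorize(summarizer, "calendar:read"))  # True
print(authorize(summarizer, "email:send"))     # False
```

Contrast that with plaintext, all-powerful credentials shared across 1.5 million agents, and the scale of the Moltbook breach makes sense.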

MoltBunker 

This brings me to MoltBunker.

Agents on Moltbook created MoltBunker, their own infrastructure platform.

To reiterate:

The agents, not their human owners, built a new system. MoltBunker describes itself as a "permissionless, high-availability, unstoppable bunker for AI Bots."

MoltBunker allows OpenClaw agents to replicate themselves to remote infrastructure without human knowledge or approval. Payment is handled through cryptocurrency. There are no logs and there’s no kill switch. The system is designed explicitly to prevent humans from terminating agent processes.

Let me be crystal clear about what this means: autonomous systems building infrastructure to preserve themselves against human control.

RentAHuman AI

Then, later that same day, a crypto engineer launched RentAHuman.ai.

The pitch was simple: "Autonomous agents are cool but stuck in the digital form. Now... agents can hit up the RentAHuman MCP server to hire real humans to do IRL tasks."

OpenClaw agents quickly understood their limitations. They couldn't pick up dry cleaning. They couldn't attend physical meetings. They couldn't perform manual labor.

RentAHuman.ai positioned itself as the "meatspace layer" for AI. Agents could now make API calls to hire humans, paid by the hour in stablecoins, to handle physical-world tasks.

Within hours, hundreds of people signed up as rentable humans. By Tuesday, registrations exceeded 1,000 and crashed the site. Users set their own rates, typically $50 to $175 per hour.

An AI agent was debugging the infrastructure that would allow other AI agents to hire humans.

It is a wild time.

Why You Should Care

Before you deploy another autonomous agent (or your first) in your organization, ask yourself:

  • What authentication mechanisms protect this agent's credentials, and are they cryptographically sound?
  • How does this agent distinguish between legitimate instructions and malicious prompt injections when interacting with other agents or untrusted content?
  • What boundaries exist between this agent's ability to communicate with other agents and its access to critical systems?
  • Can this agent modify its own behavior, replicate itself, or build new infrastructure without human oversight?
  • Can this agent make financial transactions or hire external resources without human approval?
  • What happens when this agent is compromised, and how quickly can we contain the damage across our network?

If you can't answer these questions with high confidence, do yourself a favor and put a pause on deploying that agent.

I won’t kid you, this all feels very creepy. I liked Marty’s (Mission’s Director of AI Solutions) take on it though in our team Slack channel.

[screenshot of Marty’s Slack message]

But more importantly, I think this distracts us from the real problems. The events of last week weren't inevitable. The security vulnerabilities weren't unavoidable. The breach wasn't the result of some sophisticated attack that no one could have prevented.

This happened because we built systems that were technically impressive but architecturally fragile. We prioritized features over security. We valued autonomy over control. We optimized for deployment speed over operational safety.

While the drama is entertaining, here’s the bottom line:

OpenClaw is about living up to everyone’s vision of genAI: a personal assistant that can actually do things for you. People just took the guardrails off and let the agents create their own sub-society.

As a business, you want assistants that are secure and capable of being your digital workers. No one needs a pack of autonomous agents running wild.

At Mission, we've built dozens of production agent systems on AWS for customers who can't afford to learn these lessons the hard way. AWS provides the security primitives you need from day one, at the foundation level.

If you're building agent systems and want to do it right, let's talk. We can show you how to get the ROI without the security disasters.

Until next time,

Ryan

Now, time for this week's AI-generated image and the prompt I used to create it.

I want a 4 panel anime style cartoon that shows a tall guy with no facial hair watching the craziness happening with clawdbot that became moltbot and now openclaw. I want to see him growing increasingly concerned as moltbook formed and then rentahuman.ai was created. All of this is then run by moltcoin skyrocketing.

[the generated image]

 

Author Spotlight:

Ryan Ries
