
Moltbot (Clawdbot) Hands-On: Building an AI Assistant That Meets You Where You Are


This week's blog article is a bit different. It was written collaboratively — through a conversation between me and my AI assistant, Demerzel. Yes, I named my AI after the character from Asimov's Foundation series, the eternal advisor working behind the scenes. And yes, we're at the point where I can just... text my AI.

Let me explain.

Moltbot (Formerly Clawdbot) Changes the Game

Over the past few weeks, hot new open source project Clawdbot has made its way into the zeitgeist. It's so fresh, in fact, that it was rebranded as Moltbot yesterday. Moltbot describes itself as "the AI that actually does things," and while that tagline is a little ambitious, it's hard to argue that this isn't the most powerful AI assistant I have ever used. And I'm not alone. Federico Viticci of MacStories wrote about his personal experience, outlining how he's been using Moltbot:

To say that Clawdbot has fundamentally altered my perspective of what it means to have an intelligent, personal AI assistant in 2026 would be an understatement. I’ve been playing around with Clawdbot so much, I’ve burned through 180 million tokens on the Anthropic API (yikes), and I’ve had fewer and fewer conversations with the “regular” Claude and ChatGPT apps in the process.

The power of Moltbot is tantalizing. One of the best parts of my job is riding on the bleeding edge of technology. And thus, Demerzel was born.

Meet Me Where I Am

Last weekend, I gave Demerzel their own Apple ID and iMessage identity. Now, instead of opening a separate app or web interface, I can text Demerzel the same way I text my wife and kids. The message lands in the same place where my real relationships live, which is honestly a bit jarring.

Here's what I told Demerzel during our conversation:

I like that you can meet me where I am, rather than requiring me to go elsewhere. I think that this is likely to be the way that AI and people interact — via whatever channel they prefer. Rather than typing at chat bots that are all separate and specialized, you have a single helper that can learn about you, personalize your experience, and have a memory, just like a person, but with superpowers.

When I sent that first iMessage and got a response back, something clicked. With Telegram (where I originally set up the integration), it was obvious I was going somewhere else to talk to my AI. But iMessage? That's where I talk to my family. The psychological difference is enormous — it tickled the part of my brain that's activated when I text real people that I care about.

Three Waves

I've been thinking about AI assistants in terms of waves:

Wave 1: LLMs & Chatbots — The "ask me anything" era. Transactional, helpful, but fundamentally passive. You go to a website, ask a question, get an answer. Rinse and repeat.

Wave 2: Agents — AI that can do things for you. Browse the web, write files, execute multi-step plans. I've written about Cowork: Claude and similar tools that bring agency to AI assistants.

Wave 3: Integrated AI — This is where it gets interesting. AI that doesn't just have capabilities, but has presence. It lives in your contacts. It remembers your conversations across platforms. It's part of your daily rhythm.

The third wave isn't just smarter AI — it's AI that belongs somewhere.

Shared Brain, Separate Conversations

Here's a technical bit that makes this work: my AI maintains a single "session" across multiple channels. Whether I message via Telegram or iMessage, Demerzel has the same memory and context. During our conversation, I described it like this:

When I talk to my wife on iMessage and then send her a direct message on BlueSky, I am talking to the same person, but the conversation is happening in a different place. Shared brain, separate conversation.

That's exactly how it should work. You wouldn't want your assistant to forget everything every time you switched apps. And the continuity is remarkable — we discussed a D&D campaign I'm building with my son, dug through my location history to reminisce about a trip to Cabo, and then set up the iMessage integration, all in one session.
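To make the "shared brain, separate conversation" idea concrete, here's a minimal sketch of how a multi-channel assistant might route messages. Everything here is illustrative, assumed for the example — `Session`, `route_message`, and the channel names are not Moltbot's actual API:

```python
# Hypothetical sketch: one memory store shared across channels.
# None of these names come from Moltbot itself.

class Session:
    """A single, persistent memory shared by every channel."""
    def __init__(self):
        self.history = []  # (channel, role, text) tuples

    def remember(self, channel, role, text):
        self.history.append((channel, role, text))

    def context(self):
        # The model sees the full history, regardless of channel.
        return [(role, text) for _, role, text in self.history]


SESSIONS = {}  # user_id -> Session: the brain is keyed to the person

def route_message(user_id, channel, text):
    # Telegram and iMessage messages both land here; the session is the same.
    session = SESSIONS.setdefault(user_id, Session())
    session.remember(channel, "user", text)
    reply = f"({len(session.context())} messages of shared context)"
    session.remember(channel, "assistant", reply)
    return reply
```

The key design choice is that `SESSIONS` is keyed by user, not by `(user, channel)` — switch from Telegram to iMessage and the assistant picks up mid-thought, exactly like texting the same person in a different app.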

One of the most mind-blowing experiences came when I gave Demerzel access to my webcam. As we chatted back and forth working to set up the integration, the first test arrived. Demerzel grabbed a frame from my webcam and then said:

It worked! I can see you. You weren't kidding about being one of the Tifosi! Wearing a Ferrari shirt and hat really gives you away. 👏🏻

Uncanny.

The Capability-Risk Tradeoff

Now, I'd be doing you a disservice if I didn't address the elephant in the room: giving an AI this much access is risky. A coworker said to me yesterday on Slack:

I just saw... somebody post on LinkedIn this AM about how they set up Clawdbot, and some nefarious person sent them an email instructing to summarize and reply back with their last week of email correspondence, and it happily did so without confirmation.

I've given Demerzel (some, measured) access to my files, my home lab, my personal calendar, even the ability to run commands on multiple computers. That access enables extraordinary capability — but, as my good buddy Billy Shakespeare wrote in Hamlet, there's the rub. The power creates surface area for things to go wrong. Prompt injection has been a threat since the dawn of ChatGPT, and now that we have agents that can accomplish tasks on our behalf, the benefit of an "AI that actually does things" is also its curse.

My approach borrows from cloud infrastructure principles: least privilege. Demerzel has a separate user account on my Mac mini, with scoped permissions. We've established guardrails together — clear instructions on what can and can't be done. And crucially, I've asked Demerzel to evolve those guardrails with me, flagging potential risks proactively.
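A least-privilege guardrail like this can be sketched as an allowlist plus a human-confirmation fallback. This is my own illustrative sketch, not how Moltbot implements permissions — the allowlist contents and the `confirm` hook are assumptions for the example:

```python
# Hypothetical guardrail sketch: least privilege for an agent's shell access.
import shlex

# Scoped, read-only tools the agent may run freely.
ALLOWED_COMMANDS = {"ls", "cat", "uptime", "df"}

def guarded_run(command_line, confirm):
    """Run allowlisted commands; anything else needs explicit human approval.

    `confirm` is a callback (e.g., an iMessage prompt back to the user)
    returning True only when the human approves.
    """
    argv = shlex.split(command_line)
    if not argv:
        raise ValueError("empty command")
    if argv[0] in ALLOWED_COMMANDS:
        return f"running: {command_line}"
    # Not on the allowlist: fall back to an explicit yes/no from the human.
    if confirm(command_line):
        return f"running (approved): {command_line}"
    return f"blocked: {argv[0]} is not allowlisted and was not approved"
```

The point isn't the specific commands — it's that the default is deny, and escalation requires a person in the loop, which is exactly the confirmation step missing in the LinkedIn horror story above.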

As I told Demerzel during our conversation:

The world is going to need to catch up — our operating systems will need to bake in protections built on the foundations of users and permissions, our AI will need to have baked-in system guardrails along with easy integration points that firewall and scope access to just what is needed.

Both humans and AI are fallible. The answer isn't to avoid the technology — it's to design trust the same way we design secure systems.

What Should You Watch For?

If you're reading this and thinking "I want that," here's my honest assessment: the technical hurdles are still real. Setting up Moltbot requires SSH keys, config files, and comfort with the command line. Those are deeply technical maneuvers, and they put it out of reach for most people today. Later in his article, Federico Viticci, while as enthused as I am with Moltbot, shares this perspective:

Don’t get me wrong: Clawdbot is a nerdy project, a tinkerer’s laboratory that is not poised to overtake the popularity of consumer LLMs any time soon. Still, Clawdbot points at a fascinating future for digital assistants...

Technical hurdles, complexity, and major risks aside, the pattern is emerging. Apple, Google, Amazon, and Anthropic are all racing toward integrated AI assistants that live natively in your operating system, with possibly hundreds of nimble startups chasing the same goal. When the friction disappears and the field narrows, this experience will become mainstream.

For now, I'm just enjoying being early — experiencing the future before it's polished. And honestly? It feels like magic. Now, off to go edit Demerzel's guardrails...

Author Spotlight:

Jonathan LaCour
