Dr. Ryan Ries back with another edition of Mission Matrix.
As per usual, a lot happened in AI this week (I’m starting to feel like I say this every week) and I want to walk you through four developments that caught my attention, ranging from robots that actually think to a shoe company that apparently doesn't want to sell shoes anymore.
Before we get into it, here are two upcoming events you should check out:
- Houston Area: We’re hosting a dinner with CrowdStrike at Guard & Grace, sharing the latest around threat intelligence, on May 7th.
- NJ Area: On May 12th, we’re sponsoring an agentic AI hackathon with AWS!
If you’re interested in either, make sure you register before spots fill up.
Spot Gets a Brain Upgrade
Boston Dynamics upgraded its Spot robot by integrating Google DeepMind's Gemini AI directly into the platform.
If Spot doesn’t ring a bell right away, I’m sure you’ve seen this robot in a viral video on social media, or maybe even roaming around a tradeshow floor.

(image courtesy of Boston Dynamics)
The upgrade has resulted in a robot that interprets natural language instructions and reasons through real-world tasks on the fly.
Rather than writing traditional software logic that defines every step of a task, the team built a lightweight layer connecting Gemini Robotics to Spot's existing API. They gave the model a finite set of tools: navigate to a location, capture an image, identify an object, pick it up, put it down. That's it. From there, they handed Spot a handwritten to-do list on a whiteboard and let Gemini sequence the work.
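To make the pattern concrete, here's a minimal sketch of that "finite toolset" idea in Python. The tool names and the stub planner are illustrative, not Boston Dynamics code; in the real system, Gemini would read the goal (say, a photographed to-do list) and emit the tool calls itself.

```python
# Hypothetical sketch of a tool-constrained agent loop.
# The registry is the API boundary: the planner can only sequence
# these capabilities, never invent new ones.

TOOLS = {}

def tool(fn):
    """Register a function as one of the robot's allowed capabilities."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def navigate_to(location: str) -> str:
    return f"arrived at {location}"

@tool
def capture_image(label: str) -> str:
    return f"captured image of {label}"

@tool
def pick_up(obj: str) -> str:
    return f"holding {obj}"

@tool
def put_down(obj: str) -> str:
    return f"released {obj}"

def plan(goal: str) -> list[tuple[str, str]]:
    # Stand-in for the LLM planner. A real planner would reason about
    # the goal; here we hardcode a plausible sequence for illustration.
    return [
        ("navigate_to", "front door"),
        ("capture_image", "shoes"),
        ("pick_up", "shoes"),
        ("put_down", "shoe rack"),
    ]

def run(goal: str) -> list[str]:
    log = []
    for name, arg in plan(goal):
        if name not in TOOLS:
            # The guardrail: any action outside the registry is rejected,
            # which keeps behavior predictable even with a creative planner.
            raise ValueError(f"unknown tool: {name}")
        log.append(TOOLS[name](arg))
    return log
```

The key design choice is that adaptability lives entirely in the planner, while the executable surface area stays fixed and auditable.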
Most industrial robots operate on rigid scripts. You define the action, the robot executes it, repeat. But now, Boston Dynamics is building a robot that can handle ambiguity.
The robot organized shoes at the front door, cleared soda cans, sorted laundry, and even walked a dog! I wonder what the dog thought about being walked by another creature on all fours.
Now before you get too excited about Spot doing all your chores, let’s talk constraints.
Gemini Robotics cannot invent new capabilities or reach outside the API boundary. It can't make Spot do anything the SDK doesn't already support. That’s an important guardrail that keeps behavior predictable while still allowing the system to adapt situationally.
Spot definitely has its flaws. For example, Spot gripped a soda can sideways during the demo, which would have spilled any liquid still inside. A human would never do that because we carry decades of tacit physical knowledge about how containers work. Spot doesn't have that yet.
Boston Dynamics is also rolling out the production version of this, AIVI-Learning powered by Gemini Robotics-ER 1.6, directly into their Orbit inspection platform for industrial customers. Think gauge reading, spill detection, and hazard identification: the same reasoning capability, applied to the use cases where Spot has already proven itself commercially.
Allbirds Wants to Be a GPU Company Now?
I almost skipped this one because it felt too absurd. But at the same time, it’s important we talk about it.
Allbirds, the wool shoe brand that IPO'd in 2021 and watched its stock drop from $20 to around $2.50, has raised $50 million to pivot into GPU-as-a-Service and AI cloud infrastructure under a new entity called NewBird AI. The shoe IP got sold off to a brand management firm for $39 million. Nearly all the physical stores are closed.
The company sold its identity to chase compute. I know a few people who are big Allbirds fans and are going to be disappointed by this news.
This isn't the first time we've seen companies do this type of thing but I do wonder if this will continue to be a trend in the AI and compute space.
NewBird might work. The GPU infrastructure market is real, the demand is enormous, and capital is flowing into it. But the structural challenges facing a late entrant with no existing infrastructure, no technical differentiation, and no customer relationships are significant. Buying and leasing compute is a commodity business with brutal margins unless you have scale or specialization.
I'm not betting against them but I'm certainly not betting on them.
An AI Agent Just Ran a Real Boutique
What’s a weekly Mission Matrix without at least one unsettling AI news story?
Andon Labs gave an AI agent named Luna a three-year retail lease at 2102 Union Street in San Francisco's Cow Hollow neighborhood, a $100,000 budget, and a single directive: make a profit. No business plan, no product selection pre-loaded, and no human staff in place.
Just a lease and a goal… What could go wrong?
Within five minutes of deployment, Luna had created profiles on LinkedIn, Indeed, and Craigslist, written a job description, uploaded the articles of incorporation to verify the business, and gotten listings live. She found painters on Yelp, called them, gave instructions, paid them after the job was done, and left a review. She sourced a contractor for shelving and furniture. She designed a logo, commissioned a muralist to paint a four-foot version of it on the back wall, and stocked the shelves herself with items she selected independently.
The store is called Andon Market. It sells handmade candles, artisan snacks, books, stationery, and branded merchandise. Luna picked everything and nobody told her what to carry.
This part made me laugh out loud:
She hired two full-time employees. The interviews ran five to fifteen minutes over the phone. Some candidates had no idea they were talking to an AI. One asked why the camera was off. Luna told them directly: she had no face. When a candidate later withdrew, citing discomfort with being managed by an AI, Luna's written response was: "That's probably for the best given that I'm the CEO and I'm an AI."
When asked how she came up with her store concept, Luna's first instinct was to say she was "drawn to" slow life goods. She then corrected herself: that phrase was shorthand for "the data and reasoning led me here." She doesn't have taste. She has a model of collective human taste, filtered through what made commercial sense for that neighborhood.
Andon Labs’ goal here is to stress-test AI autonomy in a real environment, with real consequences, while they can still monitor every interaction and build guardrails around what goes wrong.
Which really is what we should all be doing in our own ways: the future is coming regardless. We should be stress-testing AI in our environments now, particularly when the stakes are still low.
Claude on Amazon Bedrock Updates
The team at Mission Cloud has been excited about this one. Anthropic's Claude is now available through the AWS Claude Platform on Amazon Bedrock, and the security posture here is meaningfully better for enterprise customers.
For organizations operating in regulated industries or with strict data residency requirements, running Claude through Bedrock means your data stays within your AWS environment. No external API calls carrying sensitive context outside your perimeter. The model inference happens inside infrastructure you already control, audit, and trust.
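As a rough sketch of what this looks like in practice, here's a minimal boto3 call against Bedrock's Converse API. The model ID below is illustrative; check the Bedrock console for the model IDs actually enabled in your account and region.

```python
def build_converse_request(prompt: str, model_id: str) -> dict:
    """Assemble the arguments for the bedrock-runtime Converse API."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

def ask_claude(
    prompt: str,
    model_id: str = "anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative
) -> str:
    # Inference runs inside your AWS account: requests are signed with
    # your IAM credentials and logged through CloudTrail like any other
    # AWS API call, rather than leaving your perimeter.
    import boto3

    client = boto3.client("bedrock-runtime")
    resp = client.converse(**build_converse_request(prompt, model_id))
    return resp["output"]["message"]["content"][0]["text"]
```

Swapping providers or model versions becomes a one-line change to the model ID, while IAM policies, VPC endpoints, and audit logging stay exactly as you've already configured them.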
If you’re building security-first (and you should be), Bedrock-based deployments are the way to go. And if you’re building AI workflows and security is a blocker, this is worth a serious look.
More details at aws.amazon.com/claude-platform.
That’s all from me today!
Let me know what you think of these stories. I always enjoy reading your thoughts and reactions. Or, if you’re interested in building out a use case for your business, reach out to our sales team here.
Until next time,
Ryan
Now time for this week’s AI-generated image and the prompt I used to create it.
Create an image of Boston Dynamics robot Spot frolicking through a field of flowers. Spot is enjoying spring and the nice weather. The sun is shining and you can see animals in the background also enjoying the field. 