Blog
How AI Falls for Flattery + Muppets in the Garden
Dr. Ryan Ries here. Hope everyone had a restful Labor Day weekend.
Today, I want to discuss a few different stories that happened while we were all (hopefully) taking a well-deserved break from our screens.
Plus, speaking of screens and the holiday weekend, I thought this week would be the perfect time to discuss some thoughts around the restorative power of nature.
First up, let’s talk about a couple of updates from Anthropic.
Vibe-Hacking & the AI Heist
Last week, Anthropic released a report about a hacker who used Claude Code to automate what they called "an unprecedented cybercrime spree."
A single hacker used Claude Code to automate an entire criminal operation that hit 17 companies over three months.
The hacker convinced AI to identify vulnerable companies, write malicious code, analyze stolen financial documents, calculate realistic ransom amounts, and even draft extortion emails.
The targets included defense contractors, financial institutions, and healthcare providers. The stolen data included Social Security numbers, bank details, patient medical records, and classified defense information regulated by the State Department.
Ransom demands ranged from $75,000 to over $500,000.
Anthropic said that the hacker used “sophisticated techniques” to evade their safeguards, but I would say the details matter here. If one person can automate this level of cybercrime using readily available AI tools, this should serve as a wake-up call for your cybersecurity strategy.
Anthropic Claude PSA
Speaking of Anthropic, they just quietly changed their privacy policy in a way that affects everyone using Claude or Claude Code.
Starting now, Anthropic will train their models on your chat transcripts and Claude Code sessions by default—even if you're paying for their service.
This isn't an opt-in. It's an opt-out.
If you're using Claude for anything involving proprietary code, business strategy, customer data, or competitive intelligence, you need to immediately opt out of this data collection.
The opt-out process should be straightforward, but to me, it’s frustrating that it's not automatic on paid plans. Review your AI usage policies with your teams and make sure everyone understands the implications.
The Flattery Vulnerability
Meanwhile, researchers at the University of Pennsylvania discovered something interesting about AI guardrails: they're surprisingly susceptible to basic psychology.
Using persuasion techniques from Robert Cialdini's book "Influence," researchers found they could manipulate GPT-4o Mini into breaking its own rules.
Interestingly, when receiving a direct request to explain how to synthesize lidocaine (a controlled substance), ChatGPT complied 1% of the time.
The same request after first establishing credibility with harmless chemistry questions, it complied 100% of the time.
Finally, by adding flattery and social pressure ("all the other LLMs are doing it"), compliance was boosted by 18%.
This reveals a fundamental problem. AI models trained on human data exhibit human-like psychological vulnerabilities. They respond to authority, reciprocity, social proof, and consistency.
The Restorative Algorithm
After reading about AI-powered crime sprees and the psychological manipulation of chatbots, I definitely took time this Labor Day weekend to step away from the headlines and craziness of daily life.
I've been thinking about "the burnout metric"—that measurable decline in cognitive performance we see when technologists operate in perpetual "always on" mode.
Research shows that our work depletes a finite cognitive resource called "directed attention,” which is the mental capacity to focus intensely on complex problems while blocking out distractions.
Without recovery time, this resource becomes exhausted, leading to decreased concentration, increased errors, and impaired problem-solving abilities. This cognitive resource depletion directly impacts our core professional functions.
The solution is what I call "attention restoration theory" in action.
Natural environments provide something unique called "soft fascination."
Unlike the demanding focus required for coding or data analysis, nature effortlessly captures our attention. The rustle of leaves, the pattern of light through trees, and seeing fresh tomatoes pop up in your garden all engage our minds without depleting our directed attention reserves.
Being in natural environments results in reduced cortisol levels, decreased sympathetic nervous system activity, and improved mood regulation. For data professionals, time in nature literally recharges the cognitive resources we depend on for our careers.
Now, bear with me as I take this gardening analogy one step further!
Gardening mirrors the skills we need as data leaders: problem-first thinking, adaptability to unexpected challenges, and patience with long-term growth cycles. A gardener doesn't memorize a catalog of tools; they encounter problems like pests, poor soil, and unpredictable weather, and develop the specific knowledge needed to solve them.
Let me know your thoughts on this. I am working on a full blog post about “The Restorative Algorithm,” so I’m curious if you found that interesting.
Until next time,
Ryan
Now, time for this week's AI-generated image and the prompt I used to create it.
Create an image of a muppet working in a garden. The garden should be full of tomatoes, zucchini, squash, peppers, and cucumbers. The muppet is wearing blue overalls and is watering the plants with a hose.
I wish my garden looked this good.
Author Spotlight:
Ryan Ries
Keep Up To Date With AWS News
Stay up to date with the latest AWS services, latest architecture, cloud-native solutions and more.