Gaslight, Prompt, Repeat: Making AI Believe in Itself
Dr. Ryan Ries here. This week brought some very interesting developments in AWS AI technologies, a notable legal update involving OpenAI, and a look at why gaslighting AI actually produces good results.
Let's break down what you should actually care about.
Quick plug before we jump in - next week I’m co-hosting a webinar with Joel Suarez, Senior Solutions Architect at Mission, and David Cady, Director of Data Operations at Netfor, about Amazon Connect + GenAI. If you’re interested in improving customer experience with technology, make sure you register.
Two Major AWS Releases
One of the perks of being an AWS Premier Tier Partner is gaining beta access to new AWS services before the rest of the world. Recently, we got early access to two products that just went public: Amazon QuickSuite (not to be confused with QuickSight) and Amazon Bedrock AgentCore.
We've been launch partners for both, and we've already deployed AgentCore on multiple customer projects!
AgentCore solves a big problem many teams face. Building AI agents that can actually maintain context across complex workflows previously required duct-taping together multiple services and writing custom orchestration logic. AgentCore gives you that orchestration layer out of the box with AWS infrastructure behind it.
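To make that concrete, here's a minimal sketch of what an AgentCore-hosted agent looks like, based on the bedrock-agentcore Python SDK's entrypoint pattern. Treat the handler body as a hypothetical placeholder, not a production pattern:

```python
# Minimal sketch of an AgentCore-hosted agent, assuming the
# bedrock-agentcore Python SDK. The agent logic inside the
# entrypoint is a hypothetical placeholder.
from bedrock_agentcore.runtime import BedrockAgentCoreApp

app = BedrockAgentCoreApp()

@app.entrypoint
def invoke(payload):
    # AgentCore supplies the runtime, session handling, and scaling;
    # you only write the agent logic here.
    prompt = payload.get("prompt", "")
    # ... call your model or agent framework of choice ...
    return {"result": f"echo: {prompt}"}

if __name__ == "__main__":
    app.run()
```

The point is that the orchestration plumbing you used to duct-tape together yourself is now the service's job, not yours.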
Quick Suite is Amazon's answer to the agentic AI platform problem. It's a unified environment where AI agents can answer questions, conduct research, analyze data, and automate workflows using your actual business context from files, emails, documents, databases, and data warehouses.
I've linked information about both services, but I also have tons of videos and resources available on AgentCore if you’re interested. Just reply and let me know.
Someone Discovered You Can Gaslight AI Into Better Performance
Shoutout to Jason Gay, Director of Cloud Operations at Mission, for sending this over to me.
A developer on Reddit stumbled upon something that sounds ridiculous but apparently works: prompt techniques that dramatically improve AI outputs by essentially deceiving the model about the context it lacks. This reminded me of a story we talked about a while ago that promoted "bullying" AI for better results.
Thinking about this from a technical perspective, these techniques work because large language models are probability engines optimized for coherence. When you tell Claude or ChatGPT, "you explained this yesterday," you're not accessing actual memory. You're shifting the probability distribution toward outputs that would be consistent with having explained something previously.
The fake IQ score works because the training data contains correlations between claimed expertise levels and response sophistication. Tell the model it's highly intelligent, and it samples from the statistical patterns associated with expert-level discourse.
The "obviously" trap exploits the model's training to identify and correct misconceptions. Framing something incorrectly with confidence triggers correction behavior.
What does this mean for practical applications?
These may seem like mere AI party tricks, but they show how heavily context shapes model behavior. If you're building applications on top of LLMs, understanding these dynamics matters.
System prompts in production applications should establish clear context and expertise levels. The framing you provide materially affects output quality. If you need sophisticated analysis, prime the model with context suggesting expertise. If you need error checking, frame questions to trigger correction behavior.
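In practice, that might look something like the prompts below. The wording is my own illustrative assumption, not a proven template:

```python
# Illustrative system prompts for two different production needs.
# The personas and instructions are assumptions, not a drop-in template.

ANALYSIS_SYSTEM_PROMPT = (
    "You are a senior data architect with deep experience in AWS "
    "analytics services. Give rigorous, expert-level analysis and "
    "state your assumptions explicitly."
)

REVIEW_SYSTEM_PROMPT = (
    "You are a meticulous code reviewer. The snippet below may contain "
    "subtle errors; identify and correct every misconception you find."
)
```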
The uncomfortable part is that this reveals how much these models respond to manipulation rather than actual capability. They don't "know" things in the way humans do. They're pattern matching against training distributions.
Given that these models ingested massive amounts of Reddit discussions during training, maybe we shouldn't be surprised they respond so effectively to confident assertions of expertise that may or may not actually exist. Understanding that distinction matters when you're deciding what tasks to trust them with.
ChatGPT Logs No Longer Need Blanket Preservation
This one is important for anyone deploying conversational AI systems. A federal judge modified OpenAI's data retention requirements in the ongoing New York Times lawsuit. The court's original May order required OpenAI to indefinitely preserve every ChatGPT conversation to support the Times' copyright infringement investigation.
Judge Ona T. Wang's October 9 ruling narrows that requirement. OpenAI can now delete conversations generated after September 26, except for accounts the Times specifically flags. Already-preserved logs remain accessible, and the Times can designate additional accounts for retention as their investigation continues.
The shift matters for anyone deploying conversational AI systems. Courts are recognizing that comprehensive data retention creates privacy risks and operational burdens without necessarily serving discovery needs. Targeted preservation based on specific flagged accounts becomes the compromise.
If you're building AI systems that interact with users, this case signals you need explicit data retention policies. The legal landscape is moving away from "preserve everything" toward "preserve what's relevant with clear criteria."
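As a sketch of what "preserve what's relevant with clear criteria" might look like in code, here's a hypothetical retention check mirroring the court's flagged-accounts approach. The field names, account IDs, and 30-day window are all assumptions for illustration:

```python
# Hypothetical sketch of a targeted retention policy. Account IDs,
# field names, and the retention window are illustrative assumptions.
from datetime import datetime, timedelta, timezone

LEGAL_HOLD_ACCOUNTS = {"acct-123", "acct-456"}  # accounts flagged for preservation
DEFAULT_RETENTION = timedelta(days=30)

def should_retain(account_id: str, created_at: datetime) -> bool:
    """Keep conversations for flagged accounts indefinitely;
    otherwise retain only within the default window."""
    if account_id in LEGAL_HOLD_ACCOUNTS:
        return True
    return datetime.now(timezone.utc) - created_at < DEFAULT_RETENTION
```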
My Thoughts
If you're implementing AI in your organization, you've got to look beyond the technology itself. You need answers to the legal questions, the workforce questions, and the ethical questions. Those answers will vary by industry, company size, and geography.
But ignoring them until you're forced to address them means you're building on unstable ground.
Let me know if you want to talk through any of these challenges.
Until next time,
Ryan
Now, time for this week’s AI-generated image and the prompt I used to create it:
Generate an image of a Spirit Halloween costume for an AI Scientist. The costume should include accessories that resemble an AI Scientist's career and should come inside the Spirit Halloween style packaging.
(Can’t remember the last time I wore a lab coat, but ok!)