How to Detect Deepfakes in 2026: Signs AI-Generated Videos Can't Hide
Dr. Ryan Ries here. Someone posted deepfake videos of the Stranger Things cast on X that were frighteningly realistic.
AI-generated videos like these easily fool most people scrolling through their feed.
This got me thinking: with new deepfake models being released all the time, how do you protect yourself and verify that what you're seeing is real?
Quick commercial break before we dive in:
On February 3rd, I’m co-hosting an “Ask Us Anything: Agentic AI” Q&A live event with a couple other folks on my team. Come with all your agentic AI questions, and we’ll discuss!
The Technology Has Escaped
Remember when this was the best deepfake AI could create of Will Smith eating spaghetti?

Image credit: Vice
Creating convincing deepfakes used to require specialized equipment, technical expertise, and significant computing power. Not anymore.
Models like LTX-2 are now open source. Anyone can download them and run them on consumer hardware. A decent gaming PC with an RTX 4090 can generate 4K deepfakes at 50 frames per second with synchronized audio.
The barrier to entry has completely collapsed.
Right now, someone with basic technical skills can make you appear to say anything. They can create videos of your CEO authorizing wire transfers. They can clone your voice from a three-second audio clip harvested from your Instagram story.
Creating a deepfake takes seconds and costs pennies. Proving it's fake can require hours of forensic analysis and specialized expertise.
What You're Actually Looking For
Detection is getting harder, not easier.
The old advice doesn't work anymore. "Look for weird teeth" or "check if the lighting is off" made sense when deepfakes were crude. But models like LTX-2 have solved most of those obvious problems.
So what do you look for when the technology has gotten this good?
The answer is subtle. Modern deepfakes fail at the edges of human behavior and physics. They struggle with the tiny, unconscious things we do without thinking. The micro-movements, the biological quirks, and the physical interactions that are computationally expensive to render correctly are now our telltale signs.
Here’s what I recommend if you’re trying to detect a deepfake:
- Watch the eyes. Real humans blink spontaneously every 2-10 seconds. AI-generated faces often stare without blinking for unnaturally long periods. When they do blink, it looks mechanical, lacking the subtle muscle movements around the eyes that accompany genuine blinks. (If you want to measure this rather than eyeball it, see the sketch after this list.)
- Ask them to turn their head, or if it’s a video, pay attention to any head movements. Most deepfake models train primarily on front-facing data. When a synthetic face rotates to a full profile, the rendering breaks down. The ear might blur. The jawline detaches from the neck. Glasses melt into skin.
- Listen for the breath. Human speech includes natural breathing patterns. AI audio often inserts breath sounds at syntactically wrong moments or loops identical breath sounds. If someone is supposedly speaking outdoors in wind but the audio sounds studio-clean, that's your signal.
- Check the details. Jewelry morphs or disappears as the head moves. Hair moves as a solid mass rather than individual strands. Teeth look like a single white block without natural separation. Skin appears waxy and overly polished, lacking the pores and fine textures visible in real 4K footage.
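For the blink test specifically, you don't have to rely on your eyes alone. Here's a minimal Python sketch that counts blinks by tracking the eye aspect ratio (EAR) over MediaPipe Face Mesh landmarks. Treat it as a rough screening tool, not forensics: the input filename is a placeholder, and the 0.2 threshold and landmark indices are common defaults you'd want to tune per video.

```python
# A rough screening sketch, not a forensic tool: estimate blink rate in a
# clip via the eye aspect ratio (EAR). Assumes: pip install opencv-python mediapipe
import cv2
import mediapipe as mp

# MediaPipe Face Mesh indices around the left eye (outer corner, two top
# points, inner corner, two bottom points) -- a commonly used EAR set.
LEFT_EYE = [33, 160, 158, 133, 153, 144]
EAR_THRESHOLD = 0.2  # below this the eye is treated as closed (tune per video)

def ear(landmarks, idx):
    """Eye aspect ratio: sum of vertical gaps over 2x the horizontal width."""
    p = [(landmarks[i].x, landmarks[i].y) for i in idx]
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2 * dist(p[0], p[3]))

cap = cv2.VideoCapture("suspect_clip.mp4")  # placeholder input file
fps = cap.get(cv2.CAP_PROP_FPS) or 30
blinks, closed, frames = 0, False, 0

with mp.solutions.face_mesh.FaceMesh(refine_landmarks=True) as mesh:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            continue  # no face found in this frame
        value = ear(result.multi_face_landmarks[0].landmark, LEFT_EYE)
        if value < EAR_THRESHOLD and not closed:
            closed = True                        # eye just closed
        elif value >= EAR_THRESHOLD and closed:
            closed, blinks = False, blinks + 1   # eye reopened: one blink

cap.release()
seconds = max(frames, 1) / fps
print(f"{blinks} blinks in {seconds:.1f}s "
      f"(~{blinks / seconds * 60:.1f}/min; humans average roughly 15-20/min)")
```

A clip where the subject talks for a full minute without a single detected blink, or blinks on a metronome-regular interval, deserves a much closer look.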
Detection Software Isn’t Keeping Up
Detection software exists, but it's locked in an arms race it's losing. McAfee's Deepfake Detector, browser extensions like Digimarc's C2PA validator, and mobile security apps like Trend Micro's ScamCheck can help flag suspicious content, but none of them are foolproof.
New generative models are specifically trained to defeat existing detection algorithms. A "90% Real" score from a detection tool doesn't guarantee authenticity.
The industry's pushing something called C2PA (Content Credentials) as a long-term solution. The concept is to cryptographically sign digital content at the moment of capture, creating a tamper-evident chain of custody. Adobe, Sony, Leica, and others have implemented it.
But here's the catch: platforms like X strip metadata to reduce file size, effectively deleting the C2PA manifest. There's no way to verify the Stranger Things deepfake through X itself. You'd have to download it and check it with external tools that most people don't even know exist.
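You can see that stripping for yourself. C2PA manifests ride inside a JPEG's APP11 segments as JUMBF boxes, so a few lines of standard-library Python can tell you whether a manifest is still present at all. This is a crude presence check, not a verifier; real signature validation is what tools like the Content Authenticity Initiative's open-source c2patool are for. Try it on a photo straight from a C2PA-capable camera, then on the same photo after a round trip through a social platform.

```python
# Crude heuristic, not a verifier: does this JPEG still carry an embedded
# C2PA/JUMBF manifest (APP11 segments)? Detects presence only; it performs
# no cryptographic validation of the manifest.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):        # not a JPEG (no SOI marker)
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                     # lost sync with segment markers
            break
        marker = data[i + 1]
        if marker == 0xDA:                      # start of scan: metadata is over
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i + 4:i + 2 + length]
        # C2PA embeds JUMBF boxes in APP11 (0xEB) segments labeled "c2pa"
        if marker == 0xEB and (b"c2pa" in segment or b"jumb" in segment):
            return True
        i += 2 + length                         # jump to the next segment
    return False

if __name__ == "__main__":
    path = sys.argv[1]                          # e.g. a downloaded image
    print("C2PA manifest present" if has_c2pa_manifest(path)
          else "No C2PA manifest (stripped, or never signed)")
```

A missing manifest doesn't prove the content is fake; it only proves the chain of custody is gone, which is exactly the problem.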
What Seems to Be Working
Technical solutions will always lag behind the threat. Your best defense is procedural.
My first, and easiest, tip is to set up a safe word with your family. Not your pet's name or a PIN. Something random and never shared online. "Purple Octopus" or "Lego Teapot." If someone calls claiming to be your daughter in an emergency, you ask for the safe word immediately. No exceptions. If they can't provide it or the line cuts out, you hang up and call them back on their known number.
Voice spoofing depends on you staying on the line. Breaking the connection breaks the attack.
Next, lock down your biometrics. Enable "Identity Check" on Android or "Stolen Device Protection" on iOS. These features require stricter authentication when your phone detects it's outside trusted locations. Learn how to instantly disable biometrics: hold Power + Volume Up for three seconds on most phones. This forces a PIN requirement, blocking deepfake-based presentation attacks.
Audit your digital footprint. Videos of you speaking directly to camera are training data for voice clones. Consider archiving old vlogs or restricting them to friends-only visibility. Scammers can harvest three seconds of clear audio from social media and clone your voice with 95% accuracy. Of course, given how much of our lives is already online, a full scrub isn't realistic, but do what you can.
For businesses, implement multi-channel verification. If your CFO requests a wire transfer via video call, verify through a secondary channel. Send an encrypted message through Signal. Call their desk extension. Ask them to perform a physical action on camera, like waving their hand in front of their face. Current real-time deepfakes struggle when hands occlude the face.
Never authorize financial transactions based solely on video or voice.
The Bigger Picture
We've entered what researchers call the "Zero Trust Media" era. Every digital artifact must be presumed synthetic until proven authentic through cryptographic or forensic means.
"Seeing is believing" is dead.
This creates problems beyond fraud and misinformation. There's something called the "Liar's Dividend." When deepfakes become ubiquitous, people with genuine evidence of misconduct can dismiss it as AI-generated. Real becomes indistinguishable from fake, and truth gets lost in the noise.
The era of passive trust is over, and verification isn't optional.
My Take
Many may disagree, but I'm not a doom-and-gloom person (ok, I’ll admit, I can get pessimistic at times). AI has incredible potential to solve complex problems, but we need to be clear-eyed about the risks.
The democratization of deepfake technology is happening whether we like it or not. You can't regulate away open-source models. You can't un-invent this technology.
What you can do is adapt. Build verification protocols into your personal and professional life. Train your team to recognize the signs. Question what you see and hear, especially if it triggers a strong emotional response.
Stay skeptical. Stay safe.
Until next time,
Ryan
Bonus: Download our Deepfake Detection Guide - Click here to get our infographic with examples of common deepfake tells and a step-by-step verification checklist.
Now, time for this week's AI-generated image and the prompt I used to create it.
Create an image of me and Will Smith eating spaghetti together. We each have our own bowl and we are sitting at a table side by side. Use the attached image of me as a reference photo.
