Discernment in the Age of Deepfakes: The Skill Nobody’s Teaching You
Two weeks ago, the internet decided Benjamin Netanyahu was dead.
It started during the second week of the Iran war. Iran claimed its missiles had struck Netanyahu’s office in Jerusalem. An AI-generated image circulated showing a man resembling the prime minister lying injured on rubble. Claims spread that his home had been bombed and his brother killed. Then Netanyahu went quiet. For days, no video appeared on his official channels. Only text statements. Tasnim News Agency, run by Iran's Islamic Revolutionary Guard Corps, built its case on the gaps: no video for days, tightened security around his residence, and diplomatic visits that got quietly postponed. The conclusion was obvious (to the internet, at least): he was dead.
When Netanyahu finally resurfaced in a televised address, it should have ended the speculation. Instead, it made everything worse. A freeze-frame from the video appeared to show six fingers on his right hand, a common tell of AI-generated video. Within hours, the theory shifted: Israel was faking his appearances using deepfakes.
He posted a video of himself getting coffee at a cafe in Jerusalem, joking about the rumors, spreading both hands to show five fingers on each. People analyzed the coffee level in his cup, scrutinized his ring, and claimed the receipt in the video was dated 2024. Elon Musk’s AI chatbot Grok was asked whether the video was real and declared it “100% deepfake,” calling it “a classic example of fake image manipulation.”
The video was real. GetReal Security, a deepfake-detection company co-founded by UC Berkeley professor Hany Farid, performed a multimodal analysis of the audio, video, and facial and vocal biometrics. They found no signs of synthetic content or manipulation. The cafe confirmed he was there, and the receipt people claimed was from 2024 simply looked that way in low-quality footage.
A real video of a living person, verified by the cafe’s own security cameras, was declared fake by the AI tool built to catch fakes. And millions of people had already made up their minds based on screenshots and speculation before anyone verified anything. This is the world we are currently living in.
The line between real and fabricated has collapsed. And nobody, not even the AI tools built to police that line, can reliably tell you what you’re looking at. It’s gotten insane out there.
Your Eyes Are No Longer Enough
This isn’t a political story.
What your eyes see on a screen can no longer be trusted as evidence of reality.
We have spent our entire lives operating on the assumption that video is proof. That if you watch someone say something on camera, they said it. That assumption is gone.
Pro-regime accounts have circulated AI-generated clips showing missile strikes flattening cities that never happened. People are cloning synthetic voices from a few minutes of sample audio. Entire social media personas are being constructed around people who don’t exist, with AI-generated faces, fabricated backstories, and real followings. One disinformation network generated material that received 145 million views in just two weeks, almost all of it fabricated.
And the detection tools? The same AI chatbot that called Netanyahu’s real video “100% deepfake” turned around and told users that obviously fake footage of missile strikes was real. It got it wrong both ways, on the same platform, in the same week.
It Gets Me Too
I get caught ALL the time.
I’ve been studying AI obsessively for three years, using it every day in my business and teaching other people how to use it. And I still find myself looking at something online and genuinely not knowing if it’s real.
My actual process at this point looks like this: I see something that seems off, or seems too perfect, or triggers a reaction that feels engineered. I screenshot it and bring it to Claude with the question, “Is this real? Can you verify this?” Sometimes that works and Claude can identify the source, check the claim, and point me toward the original.
And sometimes it can’t: the AI doesn’t know either, or the tools flat-out contradict each other (exactly like Grok calling a verified real video “100% deepfake”). So then I send it to my husband. He has great discernment and a completely different lens than I do, which means he catches things I miss.
When that’s still not enough, when my eyes can’t tell me and the tools can’t tell me and the people I trust are unsure, it goes to the Holy Spirit. For real. That is my actual workflow. I start with what I can see, move to the tools, bring in the people I trust, and when all of that fails (which it does more often than people realize), I go to the one source of discernment that doesn’t depend on any of it.
I’m telling you this because I don’t want you to think the answer is just “get better at spotting fakes.” I’m good at spotting fakes and I still miss them.
The Content We’re Creating
There’s another side of this that most people skip entirely. We talk about being deceived BY AI content, but almost nobody is asking the harder question: what about the content we’re putting out ourselves? If you’re using AI to write your posts, script your videos, or draft your emails, there’s a version of the same problem happening in your own workflow. Did you actually think it through, or did you just accept what the machine generated because it sounded good and credible and close enough?
I’ve had to confront this in my own work. There are times when AI produces something so clean that I almost let it go. It sounds smart and credible but when I read it again, I realize it isn’t really saying anything at all. People are putting out AI-generated content every day and passing it off as their own thinking. Their audience trusts them and has no idea the person behind it isn’t really behind it anymore.
Proverbs puts it plainly: “The simple believes everything, but the prudent gives thought to his steps” (Proverbs 14:15). That applies to what we consume AND what we create. Whether it’s a deepfake video you’re about to repost or a draft AI just wrote for you that you’re about to put your name on.
Discernment Is Older Than AI
The secular approach to this problem has value. Media literacy and fact-checking sites and detection software all have a place.
But they’re reactive. They respond to deception after it’s been created.
Discernment works differently. Discernment is a posture you carry into everything you consume and create. It’s the habit of testing what you encounter BEFORE you accept it.
John wrote to the early church: “Beloved, do not believe every spirit, but test the spirits to see whether they are from God, for many false prophets have gone out into the world” (1 John 4:1). That instruction wasn’t about AI. But the principle underneath it is exactly what this moment demands. Test what you’re seeing, reading, or listening to, and don’t assume something is true because it looks credible or sounds familiar.
The world has always been full of things that sound true and aren’t. False prophets in the first century didn’t have deepfake software, but they did have charisma, confidence, and proximity to truth. They sounded close enough to the truth that people followed them. The false prophets of today still have charisma, confidence, and proximity to truth, but they are more dangerous because AI and technology have also given them the tools to go viral and reach millions of people.
Living in It
I’m not going to hand you a five-step framework for spotting deepfakes. Anything I gave you today would be outdated in six months.
I’ve slowed my consumption waaaaaay down. Not because I’m disciplined (LOL) but because I’ve been burned enough times to know that when something provokes a strong emotional reaction, that’s exactly when I’m most likely to get fooled. The Netanyahu story spread as fast as it did because it was emotionally charged and arrived during a war. People reacted before they verified. I’ve caught myself doing the same thing, and the only thing that stopped me was the two-second pause where I thought, “Wait. Let me check this first.”
That pause is everything. And it’s not a sophisticated skill...just the willingness to not react IMMEDIATELY.
I’ve also learned that verification isn’t final. I go to the original source and look for statements on actual platforms, verified accounts, and official websites. If something only exists as a clip being shared out of context, I treat it as suspect. But I also know my verification tools can be wrong. So even after I check, I hold it loosely. That’s a weird place to live, but it’s the honest one.
And I protect my own output. Every piece of content I create with AI goes through my own review. Not just for accuracy, but for the deeper question: “Is this actually what I think, or is this just what sounded good?” My name goes on it and my credibility is attached to it, so that part doesn’t get delegated.
What Still Works
What AI cannot do is give you the ability to recognize what’s true. That comes from somewhere else entirely.
The people who have spent years in Scripture, learning to test what they hear against what God has actually said, have been training for exactly this moment without knowing it. They’ve been practicing a discipline that’s thousands of years old, and it’s never been more relevant than it is right now.
And that’s the good news in all of this. You don’t need a degree in computer science or a subscription to every detection tool on the market. You need the willingness to slow down, check what you’re seeing before you spread it, and protect the integrity of what you put out into the world. When everything else fails, go to God. He’s been the source of discernment long before any of this technology existed, and He’s not going anywhere.
I intend to keep practicing that. Imperfectly. Every day. I’d love for you to join me.
If this kind of thinking matters to you, subscribe. I write about AI, discernment, and building with integrity twice a week.