A video of a president declaring war. A photo of an explosion at the Pentagon. A voice message from your boss asking for an urgent bank transfer. All of these have really circulated. All of them were fake. In 2025, the question is no longer "is it true?" but "how do you know it's true?"
What has changed
Five years ago, creating a fake video required Hollywood-level production. Today, with a laptop and a few free tools, you can:
- generate a photorealistic image from a one-line text prompt;
- swap one person's face onto another's body in a video;
- clone a voice from a few seconds of audio;
- mass-produce plausible-looking "news" articles.
The problem: Our brains haven't evolved. We continue to believe what we see. And what we see can now be entirely fabricated.
March 2023: An image of Pope Francis in a Balenciaga puffer jacket goes viral. Millions of people believe it. It was a Midjourney image generated in minutes. (Source: AFP Factuel, BuzzFeed News)
May 2023: A fake photo of an explosion at the Pentagon causes the American stock market to dip for several minutes — billions of dollars in fluctuation for an image generated in a few clicks. (Source: Reuters, Bloomberg)
The 4 types of fakes we encounter daily
1. Generated images
Midjourney, DALL-E and Stable Diffusion produce images that can be indistinguishable from real photos at first glance. Landscapes, portraits, news scenes: anything is possible.
2. Video deepfakes
One person's face on another's body. Or worse: an entirely synthetic person speaking. "Fake CEO" video call scams already exist.
3. Voice cloning
With just a few seconds of your voice (a WhatsApp voice message can suffice), tools like ElevenLabs or Resemble.AI can generate a convincing imitation. The FBI has documented cases of families receiving calls from "kidnappers" using cloned voices of their loved ones. Quality varies, but the element of surprise makes detection difficult in the heat of the moment.
4. Automated text content
Content farms generate thousands of "news" articles per day. Fluently written, but often false or misleading, and impossible to distinguish from a real article at first glance.
How to detect fakes — the reflexes to develop
For images:
- Run a reverse image search (Google Images, TinEye) to find the earliest source.
- Zoom in on the details that betray generation: hands, eyes, reflections, text in the background.
- Check the metadata when it is available: camera photos usually carry EXIF data, generated files often don't.
For videos:
- Watch for unnatural blinking, lip movements out of sync with the audio, and lighting that doesn't match the scene.
- Trace the source: who published it first, and do reputable outlets carry it?
For personal messages (voice, video):
- Treat urgency as a red flag: pressure to act immediately is the scammer's main tool.
- Verify through another channel: hang up and call the person back on a number you already know.
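Metadata is one of the details that can betray a fake (the takeaways at the end of this article list it alongside hands, eyes and reflections). One concrete, if limited, check is whether an image file contains any EXIF data at all. Here is a minimal Python sketch using only the standard library; `has_exif` is a hypothetical helper name, and the absence of EXIF is only a hint, never proof, since social platforms routinely strip metadata from genuine photos too:

```python
def has_exif(path):
    """Return True if a JPEG file carries an EXIF (APP1) segment.

    Heuristic only: camera photos usually embed EXIF metadata, while
    many AI-generated images do not. Absence of EXIF is a hint, not
    proof: platforms often strip EXIF from real photos as well.
    """
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":            # missing SOI marker: not a JPEG
            return False
        while True:
            marker = f.read(2)
            if len(marker) < 2 or marker[0] != 0xFF:
                return False                    # truncated or malformed file
            if marker[1] in (0xD9, 0xDA):       # EOI or start-of-scan: no EXIF found
                return False
            length = int.from_bytes(f.read(2), "big")
            if marker[1] == 0xE1:               # APP1 segment: where EXIF lives
                return f.read(6).startswith(b"Exif")
            f.seek(length - 2, 1)               # skip this segment's payload
```

A real check would go further (read the actual tags, look at the "software" field), but even this presence test separates camera-style files from many freshly generated ones.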
The psychological traps
Fakes don't work through technology alone. They exploit our cognitive biases — mental shortcuts we all use:
Confirmation bias
We more readily believe what confirms what we already think. Example: an MIT study published in Science (2018) found that false news spreads roughly six times faster on Twitter than verified information, and COVID-era fake news thrived for the same reason: it confirmed the existing fears or beliefs of each side.
The repetition effect (illusion of truth)
The more we see something, the more we believe it — even if we know it's false. This is documented in cognitive psychology. Content farms exploit this: repeating a lie across 50 different sites makes it "true" in our memory.
Apparent authority
A verified account, a site that looks like a known media outlet, a "Dr" before a name. In 2024, fake "journalist" accounts on X/Twitter spread disinformation for weeks before being detected. The appearance of authority disables our critical thinking.
Emotion as a short-circuit
When we're angry, scared or outraged, the prefrontal cortex (reflection) gives way to the amygdala (reaction). Viral content is optimised for this: triggering a strong emotion BEFORE we think. If a piece of news makes you furious, that's the moment to slow down.
What we can do in practice
In daily life:
- Before sharing, ask "who published this?" and look for a second source.
- If a post triggers a strong emotion, slow down before reacting: that is exactly what it is optimised for.
To protect yourself personally:
- Confirm any unusual request for money or credentials through a second channel.
- Remember that a few seconds of public audio can be enough to clone a voice: be deliberate about what you post.
For society:
- Support independent fact-checking (AFP Factuel and Reuters, cited above, debunked the examples in this article).
- Push for media literacy: relearning how to doubt is a skill that can be taught.
Our position
We explore AI every day. We see what it can do — the best and the worst.
What we think:
Technology isn't the problem. It's a tool. The problem is that we haven't adapted our reflexes to this new reality.
For millennia, "seeing was believing". That rule is obsolete. We need to relearn how to doubt — not everything, but what hasn't been verified.
The good news: Once you know that fakes exist, you become much harder to fool. Awareness is the first defence.
Key takeaways
- Images, videos, voices: everything can be convincingly generated in 2025.
- Reverse image search is your best friend.
- Details betray fakes: hands, eyes, reflections, metadata.
- Urgency and strong emotion are warning signs.
- "I don't know if it's true" is an intelligent response, not an admission of weakness.
- Awareness of the problem is already a form of protection.