
Synthetic Words, Synthetic Worlds: Generative AI, Deepfakes, and the Next 'Trust Recession' (2022 – Today)

  • Writer: VeroVeri
  • Jun 18
  • 2 min read

Part 4 of The Evolution of Misinformation

Deepfakes and the New Trust Problem: GenAI Misinformation Outruns the Facts

[Image: Timeline of GenAI misinformation: Midjourney, DALL·E, deepfake detection, VeroVeri content verification shield]
March 2023: Twitter timelines filled with lifelike photos of former President Donald Trump being dragged away by police. None of it had happened. The pictures were made in Midjourney, shared thousands of times, and picked up by newsrooms before many editors realized they were synthetic. The lesson was clear: a picture no longer proves anything.

Photoreal Synthesis for Everyone (2022)

In April 2022, OpenAI released DALL·E 2. In August, Stable Diffusion followed with open-source code. Anyone with a browser could type “fireworks over the Eiffel Tower” and get magazine-quality art in seconds. Speed and realism became normal; surprise faded, and trust took the hit.


A Double-Edged Promise

Generative AI is not all risk.

  • AlphaFold 2 predicted the 3-D shapes of 200 million proteins, a database Nature called a “revolution” for drug discovery.

  • A Stanford–MIT field study found office workers using GPT-4 finished writing tasks 37% faster while quality scores rose.

  • Social scientists using GPT-4 to code open-ended survey answers cut analysis time in half with equal accuracy.

Yet the same systems can hallucinate.

  • Stanford’s Hallucination Benchmark shows GPT-4 gives factually wrong answers 17% of the time across tasks, and 58% on law questions.

Desensitization: We now expect brilliance and blunders in the same paragraph, and our trust recalibrates downward accordingly.

Hallucination in Print

May 2025: The Philadelphia Inquirer and the Chicago Sun-Times published a “Summer Reading List” drafted with AI. More than half the titles, like The Dragon’s Student Loan and Shadow of the Blue Garden, don’t exist. Both papers retracted the lists within hours, but screenshots live on.

Desensitization: If even professional editors can miss obvious fakes, why should readers extend trust?

Election Cycle Deepfakes

Images against Trump (2023)

The Trump arrest image series was a blatant fake, made with Midjourney, yet it racked up millions of views before disclaimers caught up.

Video against Biden (2023)

An AI-generated ad, released shortly after then-President Biden announced his 2024 re-election bid, shows dystopian cityscapes, boarded-up stores, and shadowy migrants. None of it is real footage; every frame is synthetic imagery.

Desensitization: When both sides can fabricate persuasive media on demand, viewers doubt every clip—regardless of politics.

Detection Lags Behind

Meta’s Deepfake Detection Challenge concluded with the best model achieving roughly 65% accuracy on unseen data, only modestly better than a coin flip. Google’s SynthID watermark can degrade once an image is heavily cropped, blurred, or re-compressed. Researchers warn that no public tool can yet guarantee authenticity.


What the Generative-AI Era Means

Print moved ideas faster than letters. Broadcast moved emotions faster than print. Social media moved attention faster than broadcast. Generative AI moves reality itself.

When "reality" can be created with a prompt, disbelief-by-default is the outcome.


Breaking the Loop

Built on our VALID™ Framework, the VeroVeri information audit acts as a proactive noise gate, screening out synthetic or weakly sourced claims before they reach your stakeholders. Verified material moves forward; the rest stops at the gate - no drama, no delay.


Next Up

From Noise Gate to Signal Boost - practical steps for teams that want to publish with confidence in an AI-saturated world.

