
Fact check: How to spot AI-generated newscasts

Monir Ghaedi | Adnan Sidibe
July 2, 2025

AI-generated newscasts are getting harder to spot — and they're flooding your feed. Here's how to avoid falling for the fakes.

A TikTok screenshot showing a male reporter standing in front of a red British post box with UK flags in the background. He holds a microphone and appears to interview a woman about the upcoming election.
This AI-generated interview on TikTok looks like a real street poll, complete with a red post box and British flags. But the scene is entirely fake. Image: TikTok

In one TikTok video, a reporter stands in front of a traditional red Royal Mail pillar box, with British flags fluttering in the background and a microphone in hand. He asks a female passerby who she plans to vote for in the upcoming election. "Reform," the woman replies. "I just want to feel British again, innit."

A user comments below: "I wonder how much they paid her to say that."

But this scene never happened. The interview is entirely fake. The reporter doesn't exist — he was generated by artificial intelligence. And if you look closely, there's a subtle clue: a faint watermark in the corner bearing the word "Veo," the signature of Google DeepMind's powerful new video-generation tool.

This 8-second video isn't an isolated case. From TikTok to Telegram, synthetic newscasts — AI-generated videos that mimic the look and feel of real news segments — are flooding social feeds. They borrow the visual language of journalism: field reporting, on-screen graphics, authoritative delivery. However, they're often completely fabricated, designed to provoke outrage, manipulate opinion or simply go viral.

"If you're scrolling fast on social media, it looks like news. It sounds like news," Hany Farid, a professor at the University of California, Berkeley who specializes in digital forensics, told DW. "And that's the danger."

Real-world risks

Many synthetic videos blur the line between satire and reality, or are simply misleading.

In another example, also an 8-second clip, a reporter appears to describe an "unprecedented military convoy" moving through central London. She stands in front of a tank as a crowd looks on. Yet the video refers to no specific event, time, or context.

An AI-generated fake newscast shows a synthetic reporter standing in front of tanks in central London

DW Fact check has repeatedly observed how such clips resurface during times of crisis — like riots or major news events — repurposed to sow confusion or falsely suggest dramatic escalations.

During the latest conflict escalation between Israel and Iran, TikTok and other platforms were swarmed with AI-generated content about the war, including fake newscasts making false claims — such as Russia allegedly joining the war, Iran attacking the US, or Iran shooting down US B-2 bombers used in strikes on Iran's nuclear facilities.

DW also observed a surge in synthetic news clips following the outbreak of protests against the US immigration authority ICE in Los Angeles in June.

The consequences extend far beyond social media.

In 2024, Taiwanese researchers flagged AI-generated newscasts on local platforms that falsely accused pro-sovereignty politicians of corruption. The clips didn't just spread misinformation — they seeded distrust, undermining the credibility of all news outlets ahead of the country's elections.

But some users turn to AI newscasts for parody or comic effect. One viral TikTok shows a synthetic anchor reporting in front of a pothole so deep that motorcycles vanish into it. Another has an avatar declaring, "I'm currently at the border but there is no war. Mom, Dad, this looks real — but it's all AI."

Fact check: How AI fakes are distorting the Israel-Iran war


How to spot a fake newscast

So, how can you tell what's real?

Start with watermarks. Tools like Veo, Synthesia, and others often brand their videos, though the labels are sometimes faint, cropped out, or ignored. Even clearly marked clips are frequently flooded with comments asking, "Is this real?"

Fake newscasts are among the most polished AI-generated content. Because they often depict static news studio environments, typical AI giveaways — like awkward hand movements or inconsistent backgrounds — are harder to spot. But subtle clues remain.

Watch the eyes and mouth. Synthetic avatars often blink unnaturally or struggle with realistic lip-syncing. Teeth may appear too smooth or unnaturally shiny. Their shape might even shift mid-sentence. Gestures and facial movements tend to be overly uniform, lacking the natural variation of real humans.

Text can also be a giveaway. On-screen captions or banners often contain nonsensical phrases or typographical errors. In one example, a supposed "breaking news" chyron read: "Iriay, Tord for the coteat aiphaiteie the tha moerily kit to molivaty instutuive in Isey." The reporter's microphone was labeled "The INFO Misisery."

This AI-generated "newscast" may look real at first glance, but check the text. Nonsense words like "Iriay Tord" and "aiphaiteie" are red flags. Image: TikTok

As Farid explained, the challenge of spotting synthetic content is a moving target.

"Whatever I tell you today about how to detect AI fakes might not be relevant in six months," he said.

So what can you do? Stick with trusted sources.

"If you don't want to be lied to," Farid said, "go to reliable news organizations."

How cheap AI is making money

The concept of AI presenters isn't new. In 2018, China's state-run Xinhua News Agency introduced a stilted, robotic AI anchor. At the time, it was more curiosity than threat.

But the technology has evolved dramatically. Tools like Veo now let virtually anyone — with no media training — create polished, broadcast-style videos for just a few hundred euros a month. The avatars speak fluidly, move realistically, and can be dropped into almost any scene with a few typed prompts.

"The barrier to entry is practically gone," said Farid. "You don't need a studio. You don't even need facts."

Most of these clips are engineered for maximum engagement. They tap into highly polarizing topics such as immigration, the war in Gaza, Ukraine, and Donald Trump to provoke strong emotional reactions and encourage sharing.

Social media platforms often reward this content. Meta, for instance, recently adjusted its algorithm to surface more posts from accounts users don't follow, making it easier for synthetic videos to reach broad, unsuspecting audiences. Monetization programs further incentivize creators: The more views a video racks up, the more money it can generate.

This environment has given rise to a new breed of "AI slop" creators: users who churn out low-quality synthetic content tied to trending topics just to grab views.

Accounts like this one, which has about 44,000 followers, often jump on breaking news before journalists can confirm the facts. During a recent airline crash, dozens of TikTok videos featured AI avatars dressed as CNN or BBC reporters, broadcasting fake casualty numbers and fabricated eyewitness accounts. Many remained online for hours before being taken down.

In moments of breaking news — when users are actively seeking information — realistic-looking AI content becomes an especially effective way to attract clicks and cash in on public attention.

"Platforms have moved away from content moderation," Farid told DW. "It's a perfect storm: I can generate the content, I can distribute it, and there are audiences willing to believe it."

Fact check: How do I spot AI images?


Joscha Weber contributed to this article

Edited by: Tetyana Klug, Rachel Baig
