
Fact check: How fake images from Iran misled media outlets

Rachel Baig | Björn Kietzmann
March 19, 2026

AI‑generated Iran war images fooled international media outlets, including DW, exposing how easily manipulated images can infiltrate global newsrooms. Here is what we learned and how you can spot the fake images.

Photo of protests in Iran, with DW markings showing it is fake and AI-generated
After journalists suspected that several images provided by the news agency SalamPix were AI-generated, media houses combed through their content and found many examples. Image: SalamPix

Manipulated or recycled photos and videos frequently circulate during wars and crises — sometimes to deceive, sometimes simply to generate clicks.

During the ongoing war between the US, Israel and Iran, the problem has reached a new level. Photo agencies themselves were supplied with manipulated or fake images, which then ended up in newsrooms across Europe.

Some of the images appear to have been generated with artificial intelligence, while others were digitally altered by humans. Here is the story, what we learned, and how you can identify AI fakes. 

The SalamPix saga 

In early March, Dutch media reported that ANP (Algemeen Nederlands Persbureau), the country's largest news agency, removed roughly 1,000 Iran‑related photos from its database after suspecting that some had been manipulated with AI. 

Two days later, the Dutch branch of the media network RTL reported that its news service, RTL Nieuws, had unknowingly used three of these images on its website and in its app. After ANP alerted RTL that the photos were AI‑generated, RTL removed them and published a detailed explanation identifying which images had been taken down and why. 

When RTL was alerted that photos it had used were AI-generated, it removed them and published an explanation identifying which images had been taken down and why. Image: RTL

Shortly afterward, German weekly news magazine Der Spiegel acknowledged that it had also used an AI‑manipulated image in its coverage before realizing it was fake. 

In both cases, reputable news agencies had supplied the images. RTL received them through ANP, while Der Spiegel obtained them via dpa Picture Alliance, ddp and Imago Images, all of which had sourced the material from the French agency Abaca Press.

The images were ultimately traced back to an Iranian agency, SalamPix. According to Der Spiegel, SalamPix provided the photos to Abaca Press, which then distributed them to the databases of multiple international agencies and, ultimately, to newsrooms. 

News agencies normally collect media content, including pictures, from around the world and then sell and distribute the information to newspapers, TV and radio stations, which in turn use it as part of their own coverage.

In response to the revelations of AI-manipulated images, many photo agencies have blocked SalamPix or issued so-called "kill notices," instructing their media clients to remove SalamPix images from publication.

How the photo agencies were fooled 

In Germany, the "agency privilege" is a legal concept that generally allows media outlets to rely on the authenticity of verified text, image and video material delivered by news agencies with whom they work. Even a global broadcaster like Deutsche Welle (DW) routinely relies on external agencies to cover global events. 

But as AI‑generated and AI‑manipulated content becomes more sophisticated and prolific, distinguishing real images from fabricated ones is increasingly difficult. The challenge is not only technological but also logistical. During fast-moving breaking news situations, journalists and agencies must sift through massive volumes of visuals at high speed. Since the start of 2026, DW has received an average of 140,000 images per day from agencies.  

"Transparency is one of our highest priorities. Whenever we show AI‑generated content, it must be clearly and unmistakably identifiable as such," explains DW Editor‑in‑Chief Mathias Stamm. "And if we make a mistake — as in the case of using images from the agency SalamPix — we acknowledge it and remain transparent." 

Examples used and identified by DW 

After the first reports about SalamPix, DW reviewed its own coverage and found that it, too, had used SalamPix images. DW subsequently removed all SalamPix images from its publications and added correction notes under each affected article explaining the change.

This image from February, allegedly showing the aftermath of a missile impact in Tehran, is actually AI-generated. Image: SalamPix/ABACA

One example is a seemingly realistic street scene depicting an alleged missile strike in Tehran, showing yellow cars (possibly taxis) in the foreground, buildings in the background and smoke rising behind them. It was shared through the agencies in 2026. 

On closer inspection, however, several unmistakable AI glitches emerge.

When you zoom in, it becomes obvious that the text is gibberish, a typical AI glitch. Image: SalamPix/ABACA

Most notably, the writing on a wall and on a car looks like text — until examined more closely. Upon zooming in, it clearly does not resemble Farsi or Arabic script. In fact, it is not any language at all, but nonsensical pseudo‑text, a common glitch in AI‑generated imagery. 

Another typical AI glitch: usually straight surfaces, such as walls and windows, appear to bulge out like a bubble, as seen here. Image: SalamPix/ABACA

Another typical flaw in AI-generated images is oddly shaped buildings and objects that defy logic. Here, we also found an oddly shaped car window, as well as bulging walls and windows on the building above the yellow car in the center of the picture.

Zooming in on cars and buses can also be revealing, as AI systems frequently produce vehicles that don't exist or contain unrealistic, illogical details. Image: SalamPix/ABACA

If you zoom in on the car and bus in the lower left corner of the image, you'll also see that they are oddly shaped and do not correspond to any existing models.

Another image circulated in January that allegedly showed security forces shooting at protesters; distributed by SalamPix, it also turned out to be AI-generated. Image: SalamPix/ABACA

This is another example of an AI-generated image. It portrays a man dressed all in black with a weapon in his hand. The caption provided read: "Tehran Armed Iranian security forces opening fire to suppress protesters during demonstrations on January 8, 2026." 

If you zoom in on the man's feet, you can see that they differ not only in size but also in shape. Image: SalamPix/ABACA

Again, we see AI glitches: the person appears to have two completely different feet, in mismatched shoes. The person's shadow, especially that of the right hand above the gun, does not correspond to the actual body part.

Look closely at the right hand: a chunk between the thumb and fingers seems to be missing. Image: SalamPix/ABACA

The same hand also appears anatomically incorrect, with a chunk seemingly missing between the thumb and fingers.  

DW Fact check also looked through older images provided by SalamPix and, like the Süddeutsche Zeitung, one of Germany's largest daily newspapers, found irregularities in previously published images.

One image DW analyzed is the one at the top of the article, allegedly showing Iranian protesters clashing with security forces during a protest against the Iranian regime in Mahabad city in November 2022. 

The image shows typical AI errors of that time: the hands, especially the fingers, of several people in the image are deformed or look like they are made of wood. The windows of the building on the left don't sit at consistent angles. And the face of the person at the far right of the image is distorted.

How to spot AI‑generated or manipulated images 

As AI tools improve, fabricated visuals are becoming harder to distinguish from real ones. This means that both everyday users and professional journalists are at risk of being misled — as this case demonstrates. 

Media organizations, including DW, are investing heavily in training staff to detect and debunk AI manipulation. DW's Fact check team also produces media literacy content to help audiences understand how to recognize misleading images and videos. 
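The checks described in this article are visual and manual. As a loose complement, one weak automated signal, sketched below purely for illustration and not described by DW as part of its workflow, is whether a JPEG file carries camera EXIF metadata at all. Many AI image generators write files without an EXIF segment, whereas camera originals usually include one. Absence proves nothing on its own, since real photos are routinely stripped of metadata during editing and distribution, but it can flag an image for closer human review.

```python
def has_exif_segment(jpeg_bytes: bytes) -> bool:
    """Scan a JPEG byte stream for an APP1 segment carrying EXIF data.

    A missing EXIF block is only a weak hint that an image may not be a
    camera original; it is never proof of AI generation on its own.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # malformed stream; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # start-of-scan: metadata segments are behind us
            break
        # Segment length is big-endian and includes its own two length bytes
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True  # APP1 segment with the EXIF identifier
        i += 2 + length  # jump past marker + segment to the next marker
    return False
```

Such a heuristic could at best pre-sort large volumes of incoming agency material for human reviewers; actual verification still depends on provenance checks with the supplying agency and the kind of close visual inspection shown above.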

Edited by: Thomas Sparrow, Joscha Weber, Cristina Burack
