
How extremist groups like 'Islamic State' are using AI

July 10, 2024

Groups like the "Islamic State" and al-Qaeda are urging followers to use the latest digital tools to spread their extremist message, avoid censorship and recruit.

Extremist groups are early adopters when it comes to harnessing emerging technologies for their malicious purposes. Image: Tim Goode/empics/picture alliance

The video shows an older episode of Family Guy, one of the world's best-known animated comedy shows, in which the main character, Peter Griffin, is forced at gunpoint to drive a van containing a bomb across a bridge. But the audio has been changed.

"Our weapons are heavy, our ranks are many, but the soldiers of Allah are more than ready," Griffin sings in a tune meant to encourage followers of the extremist "Islamist State" group (IS).

The so-called 'Islamic State' used an AI-generated voice of Peter Griffin (right) over a clip from this episode of 'Family Guy' to try to recruit new followers. Image: 20th Century Fox/Courtesy Everett Collection/IMAGO

The animation is just one illustration of how extremist groups are using advanced computing or artificial intelligence (AI) to create content for their followers.

The term AI covers a wide range of digital technologies and can mean anything from the faster processing of large amounts of digital data for analysis to what's known as "generative AI," which "generates" new text or visuals based on huge amounts of data. That's how this Peter Griffin song was created.

"The rapid democratization of generative AI technology in recent years … is having a profound impact on how extremist organizations engage in influence operations online," writes Daniel Siegel, a US researcher who analyzed how AI is used for malicious purposes in an article for the Global Network on Extremism and Technology in which he highlighted the Peter Griffin video.

Increase in extremists using AI 

Over the past year, observers from a variety of monitoring organizations have reported how IS and other extremist groups are encouraging followers to make use of new digital tools.

In February, a group affiliated with al-Qaeda announced it would start holding AI workshops online, The Washington Post reported. Later, the same group released a guide on using AI chatbots.

In March, after a branch of IS killed over 135 people in a terror attack on a Moscow theater, one of the group's followers created a fake news broadcast about the event, publishing it four days after the attack.

After the first AI news video about the Moscow attack, at least four similar clips followed, "reporting" on IS activities in Africa and the Middle East. Image: Alexander Patrin/dpa/TASS/picture alliance

And earlier this month, officers from Spain's Ministry of the Interior arrested nine young people around the country who had been sharing propaganda celebrating the IS group, including one man described as being focused on "extremist multimedia content, using specialized editing applications supported by artificial intelligence."

"What we know about AI use today is that it works as a complement to official propaganda by both al-Qaeda and IS," says Moustafa Ayad, executive director for Africa, the Middle East and Asia at the London-based Institute for Strategic Dialogue (ISD), which investigates extremism of all kinds. "It allows supporters and unofficial support groups to create emotive content specifically used to galvanize the base of supporters around core concept."

The way this kind of content looks also means it may slip past content moderators on popular social media platforms.

In fact, Ayad told DW that even the more ridiculous and unrealistic IS content is often enough of a novelty for followers to share it among themselves. 

Back in 2021, IS followers used AI to make US presidents, including Barack Obama and George Bush, appear to sing IS war songs. Image: Jane Barlow/PA Wire/dpa/picture alliance

Extremists are 'early adopters'

None of this is surprising to longtime observers of the IS group. When the extremist group first came to prominence around 2014, it was already making propaganda videos with fairly high production values to intimidate enemies and recruit followers.

"All this speaks to something the ISD has continually noted," Ayad explained. "Terrorist groups and their supporters continue to be early adopters of technology to serve their interests." 

But how dangerous is this sort of content really? After all, the fake news broadcast about the Moscow attack looks fake, and the Peter Griffin song isn't hurting anybody. Or is it?

At the height of its power, the IS group had its own media department producing propaganda films. Image: ZUMAPRESS.com/picture alliance

Monitoring groups have listed a variety of ways in which extremist groups could use AI. Besides propaganda, they could also use chatbots from large language models, like ChatGPT, to converse with potential new recruits, experts suggest. Once the chatbot has aroused interest, a human recruiter might take over, they say.

AI models like ChatGPT also have safeguards built into their systems that prevent them from helping users with things like, for example, getting away with murder. However, these safeguards have proven unreliable in the past, and would-be terrorists might be able to circumvent them to obtain dangerous information.

There are also fears that extremists could use AI tools to undertake digital or cyberattacks or to help them plan terror attacks in real life.

Deep fakes vs. real bombs

Experts argue that while AI has worrying potential in the hands of extremists, real life is still more dangerous.

In a 2019 paper in the journal Perspectives on Terrorism, researchers examined the connection between how much propaganda the IS group put out and its actual physical attacks. They found "no strong and predictable correlation."

"It's similar to the discussion we were having about cyberweapons and cyber bombs around 10 years ago," says Lilly Pijnenburg Muller, a research associate and expert on cybersecurity at the Department of War Studies at King's College London.

Today even rumors and old videos can have a destabilizing impact and lead to a flurry of disinformation on social media, she told DW. "And states have conventional bombs that can be dropped, if that is their intention."

"I don't know if, at this stage, the use of AI by foreign terrorist organizations and their supporters is more dangerous than their very real and graphic propaganda involving the wanton murders of civilians and attacks on security forces," the ISD's Ayad says.

"Right now, the bigger threat is from these groups actually conducting attacks, inspiring lone actors or successfully recruiting new members because of their responses to the geopolitical landscape, namely the Israeli war on Gaza in response to October 7," he continued. "They are using the civilian deaths and Israel's actions as a rhetorical device for recruitment and to build out campaigns."

Edited by: Davis VanOpdorp
