AI disinformation could threaten Africa's elections
February 23, 2025
"Deepfakes, videos, audios — it's so easy to produce disinformation at home for free with the help of artificial intelligence. What might happen in five or 10 years is really scary," said Hendrik Sittig, director of the Konrad Adenauer Foundation's media program in sub-Saharan Africa.
The World Economic Forum's 2024 Global Risks Report also ranked AI-driven disinformation as the No. 1 global threat, and according to Sittig, it often aims to undermine democratic principles and divide societies.
In cooperation with the Konrad Adenauer Foundation, Karen Allen from the Institute for Security Studies in South Africa and Christopher Nehring from the Cyberintelligence Institute in Germany documented AI disinformation surrounding national elections in Africa and Europe.
They found AI disinformation campaigns mostly aim to undermine electoral authorities and processes. The study acknowledged that beyond elections, AI disinformation has hardly been investigated in Africa.
Variety of actors use AI disinformation to drive propaganda
Allen and Nehring found that Europe and Africa face similar challenges.
"We could identify many culprits which use the same AI tools," Nehring told DW, adding that using artificial intelligence for election campaigns and disinformation is particularly prevalent among far-right political parties.
While the study found that Russia appears to have made AI-generated disinformation an instrument of its international policy, other actors from China and the Gulf states are also targeting Africa with propaganda to spread their own narratives.
According to Nehring, state-linked groups, cybercriminals, and terrorist and Islamist organizations use AI in online disinformation campaigns primarily to generate content. This tracks with earlier studies by other researchers.
But in many African nations, access to the internet and social media networks is often prohibitively expensive, and in some areas not available at all. As a result, Nehring and Allen noticed comparatively little dissemination of deepfakes — realistic-looking videos in which the face of a prominent person is swapped out, or the voice is faked using AI — compared with cheaper, easier-to-recognize "cheap fakes."
For Allen, access to "clear, verifiable and truthful information on which people can make their own political judgments" is crucial.
Disinformation influencing public opinion in Congo conflict
The current fighting in the eastern Democratic Republic of Congo between Congolese government forces and the Rwanda-backed M23 has been fertile ground for disinformation and hate speech.
Amid the escalating conflict, images and text content linked to Rwandan accounts have influenced public opinion, said Allen.
"This reinforces the suspicion that certain political figures are exacerbating tensions between the two sides," she said.
She added that in Rwanda's case, AI has been used to flood social media platforms and drown out dissenting voices, a tactic used especially in conflict zones but also during elections.
More fake news in unstable regions
Studies show AI-generated content is increasingly influential in Africa, but there has also been growth in accredited fact-checking organizations.
In South Africa, the Real411 platform allows voters to report concerns about online political content, including the possible use of AI.
With around 26 million social media users, South Africa offered a wide audience for AI-supported information manipulation during the 2024 parliamentary election. The newly founded Umkhonto we Sizwe party, led by ex-President Jacob Zuma, disseminated a deepfake video of US President Donald Trump, in which he allegedly announced his support for the party. According to Nehring and Allen's study, this was the most-shared AI content during the election, but not the first instance of deepfakes around political power transitions.
Avatars influence voting in Burkina Faso
In Burkina Faso, videos circulating after the September 2022 coup, in which Captain Ibrahim Traore took control, urged citizens to support the military junta.
The fake videos, first discovered on Facebook and later circulated in WhatsApp groups and on other social media, show people calling themselves Pan-Africanists. A suspected connection to the Russian mercenary group Wagner, which was still active in the region at the time, could not be proven.
"It turned out these were all avatars. They were not real people. They [had been] created to support a political narrative, in this particular case, supporting the coup," said Allen, adding that the AI-tool Synthesia was used to create the content.
"We've seen the same platform being used by other actors, Chinese-based actors as well, to create similar kinds of distortions," she added.
She said African citizens could protect themselves against online manipulation by getting their news from a "variety of sources." But this has become more difficult since big social media platforms like X and Facebook have moved to scrap fact-checking, essentially passing the responsibility of content verification on to users.
While Africa lags behind Europe in strict data protection regulations, Allen said increased exchange on fact-checking practices or platforms on which people can denounce suspected disinformation is a good start. The study also indicates that Africa's evolving regulatory environment means it can learn from mistakes made elsewhere to better deal with AI-driven disinformation.
This article was originally written in German and adapted by Cai Nebe.