Can social media inspire violence?

Morgan Meaker
November 11, 2018

A new report has reignited concerns that social media users can be manipulated with misinformation, enabling a variety of actors to trigger violence between different groups. Morgan Meaker reports.

The eye of a woman showing the Facebook logo
Image: picture-alliance/dpa/J.W.Alker

When a Buddhist woman in Myanmar went to her local police station to report that she had been raped by two Muslim colleagues, a local monk shared the details on Facebook, causing outrage that escalated into real-life violence.

For two days, the central city of Mandalay was gripped by clashes between Myanmar's majority Buddhists and minority Muslims. Two people were killed and 19 injured. Authorities were able to restore peace only when Mandalay's access to Facebook was temporarily blocked.

But the story that spread across Facebook, mobilizing Buddhist rioters, wasn't true. Five people have since been convicted of spreading false rumors, including the alleged victim — who later said she had been paid to file the police report.

Read more: Inciting hatred against Rohingya on social media

This 2014 incident was an early red flag for Facebook, showing how fake rumors spread on the platform could explode into real life. In Myanmar — where around 700,000 Muslim Rohingya have fled since last year — Facebook has provided the backdrop to an upsurge in ethnic violence.

Earlier this week, an independent report into the country's use of Facebook found that a minority of users were exploiting the platform to "incite offline violence."

Facebook has blacklisted several Buddhist hard-liners in Myanmar for hate speech against Rohingya Muslims
Image: Getty Images/AFP/Y. A. Thu

Misinformation and violence

Published by the non-profit Business for Social Responsibility, the report looks into the idea that social media users can be manipulated by misinformation, which in turn can lead to violence between different groups. While social media might not invent these divisions, the way platforms promote emotive, divisive content can escalate existing prejudices.

The Facebook-commissioned study points to Myanmar's political atmosphere as part of the problem. But social media-inspired violence is not unique to countries that are new to the internet after years of censorship. Misinformation spread online has been linked to violence in Sri Lanka, Indonesia, India, Mexico, the United States and Germany.

Earlier this year, two researchers at the UK's University of Warwick released a study that examined every anti-refugee attack in Germany between 2015 and 2017. It found that crimes against refugees were more likely to take place in areas where Facebook use was high and at times when the far-right, populist Alternative for Germany (AfD) party was sharing anti-refugee posts on its page.

Academics are generally careful to say that the vast majority of people who post, share or consume extreme content will never commit a hate crime. But there is growing evidence of an invisible tipping point: some people become so obsessed with the distorted version of reality they see online that they feel compelled to act.

Bharath Ganesh, a researcher at the Oxford Internet Institute, sees this distortion of reality among the online alt-right.

A culture of hate

"The networks and spaces they create online are entirely geared towards creating a culture in which hate, denigration and dehumanization of other people is made acceptable," he told DW. "Making this kind of language credible is really about creating an atmosphere where hatred and violence against particular communities are legitimized."

In Chemnitz, false rumors spread on Twitter and Facebook became rallying cries, helping draw 6,000 people to a far-right street demonstration in August. In Charlottesville, Virginia, a year earlier, far-right protesters were heard chanting slogans linked to conspiracy theories circulating on social media.

Read more: Chemnitz and Kandel: How hashtags shape German politics

Researchers believe the way social media companies operate can promote these conspiracy theories and spread extreme viewpoints.

To keep people on their sites as long as possible, companies like Facebook and YouTube use algorithms to recommend posts a person is likely to agree with. But that can leave users stuck in ideological bubbles, seeing a version of the world in which their views are never challenged.
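
How such a feedback loop can narrow a feed can be sketched in a few lines of code. The toy model below is an illustration, not any platform's actual system: posts and the user profile are random vectors over hypothetical "topics," and the only assumptions are that the ranker favors similarity and that the profile learns from whatever it was just shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# 500 candidate posts, each a vector over 8 hypothetical topics;
# the user profile is a running average of what they engaged with.
posts = rng.normal(size=(500, 8))
profile = rng.normal(size=8)

previous = set()
for round_no in range(1, 11):
    # Rank posts by predicted agreement: similarity to the profile.
    scores = posts @ profile
    top = np.argsort(scores)[-10:]  # show the user the top 10

    # Assume the user engages with what is shown, pulling the profile
    # further toward content they already agree with.
    profile = 0.8 * profile + 0.2 * posts[top].mean(axis=0)

    overlap = len(previous & set(top))
    print(f"round {round_no}: feed repeats {overlap}/10 posts from last round")
    previous = set(top)
```

Run it and the overlap climbs toward 10 out of 10: after a few rounds the feed locks onto the same self-reinforcing set of posts, which is the "bubble" in miniature.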

Linda Schlegel, a counter-terrorism consultant with the Konrad-Adenauer Foundation, says these bubbles can create "digitized terrorists." "Filter bubbles affect all users," she told DW, "but they can promote radicalization if the filter bubble contains extremist attitudes and certain perspectives are repeatedly communicated as the truth."

Guillaume Chaslot, who spent three years working on YouTube's algorithm as an engineer, believes filter bubbles also have a negative effect on public discourse. So, in 2016, he set up Algotransparency, a website that tries to demystify YouTube's algorithm, showing which videos it promotes over others.  

Experts say YouTube hasn't gone far enough in changing its algorithms to discourage certain viewing habits
Image: picture-alliance/dpa/N. Armer

Demystifying YouTube's algorithm

While YouTube has made changes since he left the company in 2013, they are not enough, he told DW: "They didn't address the cause of the problem, that the algorithm is mainly designed to maximize watch time," and therefore advertising revenue.

Videos that keep people watching are generally divisive or controversial. Chaslot compares them to a fight going on in the street — people either join in, try to break it up, or they stop and stare.

If the algorithm predicts a video will have that stop-and-stare effect, it recommends it more. The video gets more views, more people spend more time on the site watching more ads, and YouTube makes more money. But it also means the site is actively spreading controversial videos, such as conspiracy theories.
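
Chaslot's point about the incentive can be made concrete with a toy ranker. Everything in the sketch below is hypothetical: the video titles, the numbers and the one-line objective. Real recommendation systems are far more complex, but the outcome is the same whenever expected watch time is the quantity being maximized.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    p_click: float      # chance a user clicks when shown the video
    avg_minutes: float  # minutes watched after a click

    def expected_watch_time(self) -> float:
        # The only quantity this toy ranker optimizes.
        return self.p_click * self.avg_minutes

candidates = [
    Video("calm explainer", p_click=0.10, avg_minutes=4.0),
    Video("divisive conspiracy clip", p_click=0.09, avg_minutes=11.0),
    Video("news summary", p_click=0.12, avg_minutes=3.0),
]

# Recommend in order of expected watch time: the divisive clip tops the
# list because it keeps viewers watching, not because it is accurate.
for v in sorted(candidates, key=Video.expected_watch_time, reverse=True):
    print(f"{v.expected_watch_time():.2f}  {v.title}")
```

Note that the conspiracy clip wins despite having the lowest click rate: nothing in the objective measures accuracy or social harm, only minutes watched.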

Chaslot gives an example. On the day before the Pittsburgh synagogue shooting last month, Algotransparency recorded YouTube's algorithm favoring a video in which conspiracy theorist David Icke accuses George Soros, the Jewish billionaire philanthropist and favorite target of the alt-right, of "manipulating political events in the world."

The video had fewer than 1,000 views when the algorithm started promoting it. Now it has almost 70,000.

Read more: Racist YouTube videos get a refugee pre-roll 'ad' in Germany

More and more people are raising concerns over the way tech companies curate information, because what increases advertising revenue can also unleash violent forces between communities. "I think it's very bad for society in the long term," said Chaslot, "but the algorithm is not able to measure that."

Video: When online hate goes offline (01:28)
