Social media firms and the European Commission have been pushing for self-regulation in the fight against hate speech. The latest figures suggest the approach is working.
Social media sites drastically improved their response time for online hate content in 2017, new European Commission figures showed on Friday.
Content flagged as hate speech was reviewed within 24 hours in 81 percent of cases, up from 51 percent in May.
About 70 percent of flagged content was removed, up from 59.2 percent.
Facebook swiftly reviewed complaints in 89.3 percent of cases, YouTube in 62.7 percent and Twitter in 80.2 percent.
Almost half of flagged content was found on Facebook, while 24 percent was on YouTube and 26 percent was on Twitter.
Most allegations of hate content centered on ethnic origin, followed by anti-Muslim hatred and xenophobia, including expressions of hatred against migrants and refugees.
"These latest results and the success of the code of conduct are further evidence that the Commission's current self-regulatory approach is effective and the correct path forward," said Stephen Turner, Twitter's head of public policy.
EU Justice Commissioner Vera Jourova said the results made her less likely to push for legislation on the removal of illegal hate speech, but warned tech companies not to rest on their laurels.
"I would expect similar commitment from IT companies when it comes to other important issues such as terrorist content or unfair terms and conditions for users," Jourova told dpa.
Fighting for the internet: Social media, governments and tech companies
Germany passed a new law on social media in 2017, despite complaints from social media companies worried about the impact on their business. But how far is too far? DW examines the trends.
Free speech or illegal content?
Whether it is hate speech, propaganda or activism, governments across the globe have stepped up efforts to keep content they deem illegal from circulating on social networks. From drawn-out court cases to blanket bans, DW examines how some countries try to block illicit content outright while others attempt to regulate social media.
Social media law
After a public debate in Germany, a new law on social media came into effect in October. The legislation imposes heavy fines on social media companies, such as Facebook, for failing to take down posts containing hate speech. Facebook and other social media companies have complained about the law, saying that harsh rules might lead to unnecessary censorship.
Right to be forgotten
In 2014, the European Court of Justice ruled that European citizens had the right to request that search engines, such as Google and Bing, remove "inaccurate, inadequate, irrelevant or excessive" search results linked to their name. Although Google has complied with the ruling, it has done so reluctantly, warning that it could make the internet as "free as the world's least free place."
Blanket ban
In May 2017, Ukraine imposed sanctions on Russian social media platforms and web services. The blanket ban affected millions of Ukrainian citizens, many of whom were anxious about their data. The move prompted young Ukrainians to protest on the streets, calling for the government to reinstate access to platforms that included VKontakte (VK), Russia's largest social network.
Safe Harbor
In 2015, the European Court of Justice ruled that Safe Harbor, a 15-year-old pact between the US and EU that allowed the transfer of personal data without prior approval, was effectively invalid. Austrian law student Max Schrems launched the legal proceedings against Facebook in response to revelations made by former US National Security Agency (NSA) contractor, Edward Snowden.
Regulation
In China, the use of social media is highly regulated by the government. Beijing has effectively blocked access to thousands of websites and platforms, including Facebook, Twitter, Instagram and Pinterest. Instead, China offers its citizens access to local social media platforms, such as Weibo and WeChat, which boast hundreds of millions of monthly users.
Twitter bans Russia-linked accounts
Many politicians and media outlets blame Russia's influence for Donald Trump's election victory in 2016. Moscow reportedly used Facebook, Twitter, Google, and Instagram to shape public opinion on key issues. In October 2017, Twitter suspended over 2,750 accounts due to alleged Russian propaganda. The platform also banned ads from RT (formerly Russia Today) and the Sputnik news agency.
Facebook announces propaganda-linked tool
With social media under pressure for allowing alleged Russian meddling, Facebook announced a new project in November 2017 to combat such efforts. The upcoming page will give users a chance to check if they "liked" or followed an alleged propaganda account on Facebook or Instagram. Meanwhile, Facebook has come under fire for not protecting user data in the wake of the Cambridge Analytica scandal.
Why are social media companies voluntarily policing content? Several European governments had been pushing social media companies to tackle extremist content online. In a bid to avoid legislation, Microsoft, Twitter, Facebook and YouTube signed a code of conduct with the European Union in May 2016, committing to review most complaints within a 24-hour time frame. Instagram and Google+ will soon join them.
Why is Brussels pushing for self-regulation? The Commission fears a patchwork of rules in the EU and the possibility that governments could abuse such laws to restrict freedom of expression.
What is happening in Germany? Starting this year, social media companies face fines of up to €50 million ($61.2 million) in Germany if they fail to remove hate content in a timely manner. The law has been criticized for encouraging social media firms to err on the side of censorship.
What happens next? The Commission will likely issue recommendations at the end of February on how companies should take down extremist content related to militant groups, an EU official told Reuters news agency.