Africa's content moderators want compensation for job trauma
May 1, 2025
The vast majority of Facebook and Instagram users never see disturbing content depicting scenes of torture, child abuse or murder on these platforms. That's thanks to content moderators, who constantly review content and delete inappropriate images and videos. Often, these content moderators are employed by subcontractors in countries of the Global South. Their line of work is extremely stressful and has drawn criticism for years.
Meta, which owns Facebook and Instagram, and its African subcontractors have faced numerous lawsuits over these practices. According to British daily The Guardian, lawyers are currently preparing a lawsuit against Majorel, a company contracted by Meta to carry out content moderation.
Lack of psychological support?
Content moderators working for Majorel in Ghana's capital, Accra, told The Guardian that they have suffered from depression, anxiety, insomnia, and substance abuse. They believe this is a direct result of their work as content moderators. They also claim that the psychological support offered to help them process disturbing social media content was inadequate.
Teleperformance, which owns Majorel, reportedly denies these accusations. According to The Guardian, the company employs its own mental health professionals, who are registered with the local supervisory authority. DW asked Majorel for comment but received no reply. British NGO Foxglove is now preparing a lawsuit.
This is not the first lawsuit of its kind, however. Around two years ago, former Facebook moderators in Kenya's capital, Nairobi, filed a lawsuit against Meta and its subcontractors Sama (Samasource) and Majorel. They had been involved in union action to improve their working conditions and say they were unfairly dismissed and not rehired by Majorel, Reuters news agency reports.
In September 2024, a court confirmed that Meta can be sued in Kenya, leading to further court cases and mediation proceedings. In the US, a class action lawsuit brought by former Facebook moderators ended in 2020 with a settlement of approximately $52 million (€45.7 million) in compensation for psychological damage.
Disturbing images for hours on end
Mercy Mutemi, a lawyer who represented the plaintiffs in the Kenya trial, told Reuters such lawsuits are a "wake-up call" for tech companies to take a stronger stance against human rights violations.
Some former content moderators are speaking out about their experiences at work, although employees often sign confidentiality agreements. News platform netzpolitik.org has reported that content moderators employed by one subcontractor spent eight to ten hours a day reviewing disturbing content, some of which showed scenes of animal cruelty and executions.
A former content moderator working for a subcontractor told The Guardian "the system doesn't allow us to skip it … we have to look at it for 15 seconds, at least." The man also said he felt himself gradually becoming "out of humanity." The Guardian report was produced together with the Bureau of Investigative Journalism.
Another former employee, who is now involved in the African Tech Workers Rising coalition, told The Guardian about working under intense time pressure: "You could not stop if you saw something traumatic. You could not stop for your mental health."
The former content moderator said she believes the situation has since deteriorated further, with employees forced to watch videos at two to three times the normal speed, on several screens at once.
Foxglove also reports that some content moderators had to ask for permission to leave their computer screens and that these requests were sometimes denied. The organization says content moderation is particularly stressful for people from conflict regions, who live in constant fear of recognizing relatives and friends in the images they are asked to review. DW was, however, unable to verify these allegations.
A communications agency commissioned by Meta told DW that Meta had entered into contractual agreements with its content moderator companies. It said these agreements are intended to ensure that moderators receive professional training as well as access to professional support and medical care around the clock. The communications agency also said content moderator companies are obliged to pay above-average salaries and respect the right to organize.
Kenya and the Philippines attract outsourced business processes
It is no coincidence that many subcontractors offering content moderation services for large social media platforms are based in developing countries, where wages are generally significantly lower than in Western countries, occupational safety standards are poorer, and youth unemployment is high. In addition, different time zones ensure that content moderation can be provided around the clock.
"Large tech companies like to outsource important but burdensome work to Africa. They do this in a colonial and exploitative manner," Mercy Mutemi told netzpolitik.org.
In late 2023, she told DW that "Facebook and Sama are luring young, talented but vulnerable, unsuspecting young people from Kenya and other African countries."
The countries themselves benefit from business process outsourcing (BPO), which is a lucrative business. In spring 2024, Kenyan President William Ruto announced plans to expand infrastructure to attract further BPO, aiming to create one million new BPO jobs over the next five years.
Kenya's capital Nairobi is already considered an up-and-coming IT hub aptly nicknamed "Silicon Savannah," boasting a young and well-educated population with good English language skills. Countries such as Rwanda and Ghana are also keen to grow their tech sectors. The Philippines and India, meanwhile, are also big players in the tech world.
Will AI replace human content moderators?
It is still difficult to predict what working conditions will look like in the future. Even though artificial intelligence (AI) is already widely used, experts believe that it will not be able to completely replace human labor. For example, algorithms will not always reliably recognize images of torture and abuse. Linguistic and cultural differences also remain a problem for AI.
The only real solution, therefore, is to invest in human content moderators already doing this work, Foxglove director Martha Dark told The Guardian.
This article was originally written in German.