Social media moderators search for distressing or illegal photos and videos and then delete them.
Over the past few months, the BBC has explored a dark, hidden world – a world where the worst, most horrific, distressing and, in many cases, illegal content ends up online.
Beheadings, massacres, child abuse, hate speech: it all ends up in the inboxes of a global army of content moderators.
You don't see or hear about them often, but they are the people whose job it is to review and then, if necessary, remove content that has either been flagged by other users or detected by automated tools.
The issue of online safety has become increasingly prominent, with technology companies under growing pressure to remove harmful content quickly.
And despite much research and investment in technological solutions to help, ultimately, for now, it's still largely human moderators who have the final say.
Moderators are often employed by third-party companies, but they work on content posted directly to major social networks like Instagram, TikTok and Facebook.
They are based all over the world. The people I spoke to while making our series The Moderators for Radio 4 and BBC Sounds mostly lived in East Africa and had since left the industry.
Their stories were heartbreaking. Some of what we recorded was too brutal to broadcast. Sometimes my producer Tom Woolfenden and I would finish a recording and sit in silence.
“If you pick up your phone and then go to TikTok, you'll see a lot of activity, dancing, you know, happy things,” says Mojez, a former Nairobi-based moderator who worked on TikTok content. “But in the background, I was personally moderating hundreds of horrifying and traumatic videos.
“I took it upon myself. Let my sanity take the hit so that general users can continue to go about their business on the platform.”
There are currently numerous ongoing legal claims alleging that the work destroyed the mental health of these moderators. Some former East African workers have banded together to form a union.
“Realistically, the only thing stopping me from logging on to a social media platform and witnessing a beheading is someone sitting in an office somewhere, watching that content for me and reviewing it so I don't have to,” says Martha Dark, who runs Foxglove, a campaign group supporting the legal action.
Mojez, who removed harmful content on TikTok, says his mental health was affected
In 2020, Meta, then known as Facebook, agreed to pay $52 million (£40 million) in compensation to moderators who developed mental health problems as a result of their work.
The lawsuit was initiated by a former moderator in the United States, Selena Scola. She described moderators as the “guardians of souls”, due to the amount of footage they see containing the final moments of people's lives.
The former moderators I spoke to all used the word “trauma” to describe the impact the job had on them. Some had difficulty sleeping and eating.
One described how hearing a baby cry sent a colleague into a panic. Another said he found it difficult to interact with his wife and children because of the child abuse he had witnessed.
I expected them to say that this job was so emotionally and mentally draining that no human should have to do it. I thought they would fully support the automation of the entire industry, with AI tools evolving to take on the task.
But they didn't.
What came across strongly was the moderators' immense pride in the role they played in protecting the world from online dangers.
They saw themselves as a vital emergency service. One said he wanted a uniform and a badge, comparing himself to a paramedic or a firefighter.
“Not even a second was lost,” says someone we called David. He asked to remain anonymous, but he had worked on content used to train the viral AI chatbot ChatGPT, so that it would be programmed not to regurgitate gruesome material.
“I’m proud of the people who shaped this model to be what it is today.”
Martha Dark campaigns in support of social media moderators
But the very tool David had helped to train may one day compete with him.
Dave Willner is the former head of trust and safety at OpenAI, the creator of ChatGPT. He says his team built a rudimentary moderation tool, based on chatbot technology, which identified harmful content with an accuracy rate of around 90%.
“When I kind of fully realized, 'oh, this is going to work,' honestly, I choked up a little bit,” he says. “[AI tools] don't get bored. And they don't get tired or shocked… they are tireless.”
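For readers curious what an automated check like this looks like in practice, here is a minimal sketch, not the internal tool Willner describes. It assumes OpenAI's publicly documented Moderation endpoint, accessed through the official openai Python package; the model name and the escalate-to-a-human logic are illustrative assumptions.

# Illustrative sketch only: not the internal tool described above.
# Assumes the public `openai` Python package (v1+) and an OPENAI_API_KEY
# environment variable; the model name below may change over time.
from openai import OpenAI

client = OpenAI()

def needs_human_review(text: str) -> bool:
    """Ask the public Moderation endpoint whether a post looks harmful.

    Returns True when the classifier flags the text, i.e. when a human
    moderator (or a blocking rule) should take a closer look.
    """
    response = client.moderations.create(
        model="omni-moderation-latest",  # assumed current model name
        input=text,
    )
    result = response.results[0]
    if result.flagged:
        # The endpoint also reports per-category results (violence, hate, etc.)
        print("Flagged categories:", result.categories)
    return result.flagged

if __name__ == "__main__":
    post = "an example user post"
    print("Escalate to a human" if needs_human_review(post) else "No action needed")

In a real pipeline, a flagged post would typically be queued for exactly the kind of human review described in this article, rather than being deleted automatically.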
However, not everyone is convinced that AI is the silver bullet for the struggling moderation industry.
“I think it’s problematic,” says Dr Paul Reilly, lecturer in media and democracy at the University of Glasgow. “It’s clear that AI can be a pretty brutal and binary way of moderating content.
“This can lead to excessive blocking of free speech issues, and of course, it can miss nuances that human moderators would be able to identify. Human moderation is essential for platforms,” he adds.
“The problem is there aren’t enough of them and this work is incredibly damaging to those who do it.”
We also reached out to the tech companies mentioned in the series.
A TikTok spokesperson said the company knows that content moderation is not an easy task and that it strives to promote a caring work environment for its employees. This includes offering clinical support and creating programs that support moderator well-being.
They add that videos are initially reviewed by automated technology, which they say removes a large volume of harmful content.
Meanwhile, OpenAI – the company behind ChatGPT – says it is grateful for the important and sometimes difficult work done by human workers to train the AI to spot such photos and videos. A spokesperson adds that, along with its partners, OpenAI enforces policies aimed at protecting the well-being of these teams.
And Meta – which owns Instagram and Facebook – says it requires all companies it works with to provide 24-hour on-site support with trained professionals. It adds that moderators can customize their review tools to blur graphic content.
The Moderators is on BBC Radio 4 at 13:45 GMT, Monday 11 November to Friday 15 November, and on BBC Sounds.