Facebook, YouTube and Twitter are relying more heavily on automated systems to flag content that violates their rules, as tech workers are sent home to slow the spread of the coronavirus.
But that shift could mean more mistakes — some posts or videos that should be taken down might stay up, and others might be incorrectly removed. It comes at a time when the volume of content the platforms have to review is skyrocketing, as they clamp down on misinformation about the pandemic.
Tech companies have been saying for years that they want computers to take on more of the work of keeping misinformation, violence and other objectionable content off their platforms. Now the coronavirus outbreak is accelerating their use of algorithms rather than human reviewers.
"We're seeing that play out in real time at a scale that I think a lot of the companies probably didn't expect at all," said Graham Brookie, director and managing editor of the Atlantic Council's Digital Forensic Research Lab.
Facebook CEO Mark Zuckerberg told reporters that automated review of some content means "we may be a little less effective in the near term while we're adjusting to this."
Twitter and YouTube are also urging caution about the shift to automated moderation.
"While we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes," Twitter said in a blog post. It added that no accounts will be permanently suspended based only on the actions of the automated systems.
YouTube said its automated systems "are not always as accurate or granular in their analysis of content as human reviewers." It warned that more content may be removed, "including some videos that may not violate policies." And, it added, it will take longer to review appeals of removed videos.
Facebook, YouTube and Twitter rely on tens of thousands of content moderators to monitor their sites and apps for material that breaks their rules, from spam and nudity to hate speech and violence. Many moderators are not full-time employees of the companies, but contractors who work for staffing firms.
Now those workers are being sent home. But some content moderation cannot be done outside the office, for privacy and security reasons.
For the most sensitive categories, including suicide, self-injury, child exploitation and terrorism, Facebook says it's shifting work from contractors to full-time employees — and is ramping up the number of people working on those areas.
There are also increased demands for moderation as a result of the pandemic. Facebook says use of its apps, including WhatsApp and Instagram, is surging. The platforms are under pressure to keep false information, including dangerous fake health claims, from spreading.
The World Health Organization calls the situation an "infodemic," in which an overabundance of information, both true and false, makes it hard to find trustworthy guidance.
The tech companies "are dealing with more information with less staff," Brookie said. "Which is why you've seen these decisions to move to more automated systems. Because frankly, there's not enough people to look at the amount of information that's ongoing."
That makes the platforms' decisions right now even more important, he said. "I think that we should all rely on more moderation rather than less moderation, in order to make sure that the vast majority of people are connecting with objective, science-based facts."
Some Facebook users raised alarm that automated review was already causing problems.
When they tried to post links to mainstream news sources like The Atlantic and BuzzFeed, they got notifications that Facebook thought the posts were spam.
Facebook said the posts were erroneously flagged as spam due to a glitch in its automated spam filter.
Zuckerberg denied the problem was related to shifting content moderation from humans to computers.
"This is a completely separate system on spam," he said. "This is not about any kind of near-term change, this was just a technical error."