YouTube is banking on tech to clean up controversial content, as moderators stay home

CNN/Stylemagazine.com Newswire | 8/25/2020, 1:26 p.m.
YouTube is increasingly relying on technology to moderate content, resulting in a sharp rise in removed videos, including some that didn't violate its policies./Credit: Shutterstock

By Kaya Yurieff, CNN Business

(CNN) -- YouTube said Tuesday that it is increasingly relying on technology to moderate content, resulting in a sharp rise in removed videos, including some that didn't violate its policies.

The Google-owned company said that between April and June it removed more than 11.4 million videos for violating its policies. That's more than double what it took down in the previous three months.

The majority of the videos were related to spam, nudity and child safety, which YouTube defines as behavior that is harmful or exploitative to children, such as abuse or dares and challenges that could endanger minors.

The increase in removed content coincided with YouTube's decision to rely on technology rather than human reviewers, who were sent home due to the pandemic, to crack down on harmful content. As a result, the company said it "over-enforced" policies. About 325,000 of the removed videos were appealed, and nearly half of those were later reinstated after YouTube determined they didn't in fact break its rules.

"When reckoning with greatly reduced human review capacity due to Covid-19, we were forced to make a choice between potential under-enforcement or potential over-enforcement," YouTube said in a blog post.

A YouTube spokesperson also said that more people are at home uploading videos, as well as flagging controversial ones, which also contributed to the increased number of removals. However, YouTube declined to say how much uploaded content increased for the quarter. The company has also never released stats showing what percentage of total content is removed, making it difficult to gauge the true scale of its enforcement efforts.

In a similar report earlier this month, Facebook said that sending moderators home limited its ability to moderate some content, such as that related to suicide and self-injury. "With fewer content reviewers, who are essential in our continued efforts to increase enforcement in such sensitive areas, the amount of content we took action on decreased in Q2 from pre-Covid-19 levels," Facebook said.

The YouTube spokesperson also said the company has deleted tens of thousands of QAnon-related videos and hundreds of channels since updating its hate speech policy in June 2019.

Facebook has also taken action against the group recently. QAnon originated three years ago and claims, among other unfounded conspiracy theories, that dozens of politicians and A-list celebrities work in tandem with governments around the world to engage in child sex abuse. Hundreds of QAnon groups, pages and advertisements were removed from Facebook as part of the effort.

Last month, Twitter became the first major social network to ban accounts that shared QAnon content.

However, creating new rules, and announcing them to positive press, is often easier than enforcing them. Like other social platforms, YouTube has been inconsistent in its enforcement. For example, after the company announced a ban on white supremacist content last year, some of the most prominent purveyors of hate remained on YouTube for another year.