
The COVID-19 crisis has transformed the way the world works. With the high risk of spreading the deadly disease in enclosed spaces, telecommuting became the norm across the globe. Video-streaming giant YouTube complied with work-from-home orders, sending its moderators home to help slow the spread of the virus.
This led to an expansion of its automated moderation system — and an approach to content moderation that differed vastly from Facebook’s.
YouTube’s Choice
YouTube reported that Q2 2020 was the first full quarter the platform operated on its “modified enforcement structure.” Typically, the automated system flags potentially violating content and routes it to a human moderator for review; those human decisions, in turn, help improve the automation. Under the expanded automation, YouTube accepted a lower threshold for what counted as a likely violation, and the machine flagged more videos.
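To make the threshold change concrete, here is a minimal, purely illustrative sketch in Python. The names, numbers, and logic are assumptions for demonstration only and do not reflect YouTube’s actual system; the point is simply that lowering a confidence threshold causes an automated classifier to remove more videos up front.

```python
from dataclasses import dataclass

# Hypothetical thresholds, chosen only to illustrate the idea.
NORMAL_THRESHOLD = 0.90    # confidence needed to auto-remove under normal operations
LOWERED_THRESHOLD = 0.70   # lower bar used when fewer human reviewers are available

@dataclass
class Video:
    video_id: str
    violation_score: float  # model's confidence (0.0-1.0) that the video violates policy

def triage(videos, threshold):
    """Split videos into auto-removed and kept, at a given confidence threshold."""
    removed = [v for v in videos if v.violation_score >= threshold]
    kept = [v for v in videos if v.violation_score < threshold]
    return removed, kept

videos = [Video("a", 0.95), Video("b", 0.78), Video("c", 0.40)]

# Normal operations: only high-confidence cases are removed automatically;
# borderline cases would go to human reviewers.
removed_normal, _ = triage(videos, NORMAL_THRESHOLD)

# Reduced human capacity: the threshold drops, so more videos are removed up front.
# Creators can appeal, and a strike is only issued after a human confirms the violation.
removed_expanded, _ = triage(videos, LOWERED_THRESHOLD)

print(len(removed_normal), len(removed_expanded))  # 1 vs. 2: more removals at the lower threshold
```

In this toy example, lowering the threshold sweeps in the borderline video alongside the clear violation, which mirrors why removals, and subsequently appeals, rose so sharply.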
In a press release titled “Responsible Policy Enforcement During COVID-19,” the streaming platform disclosed: “Because of choices we made to prioritize the safety of the community, we removed the most videos we’ve ever removed in a single quarter from YouTube.”
What Happened Next?
Though the system succeeded in flagging videos, many noted it was too blunt an instrument. It had a large margin of error, flagging many videos that did not actually violate the guidelines.
As a result, both appeals for reinstatement and the number of reinstated videos doubled quarter over quarter. To give all creators a fair shot, content creators were not issued a strike on their account for videos flagged by the system unless the wrongdoing was confirmed by human review.
Despite the high error rate, the company has expressed satisfaction with its decision, stating it believed over-enforcement was the safer choice under the circumstances.
YouTube vs. Facebook and Instagram
YouTube’s abundance of caution stands in stark contrast to the approach taken by Facebook and Facebook-owned Instagram. Like YouTube, Facebook sent its moderators home when it became clear that offices could facilitate super-spreader events. Protocol reported that, unlike YouTube, the company did not expand its use of automated moderation. This choice led to a sharp decrease in moderation, possibly exposing audiences to dangerous content.
The Future of the Moderation Filter on YouTube
While there is no report suggesting YouTube still relies as heavily on the moderation filter as it did on its human team, the company clearly believes the technology produced a favorable outcome. COVID-19 changed the way work happens, and it has also fostered the expansion of automation. There is a long way to go in perfecting the technology, but it seems that even with teams held back physically, the industry is still making strides toward the future.