COVID-19 and Social Media Content Moderation

Google, Facebook, Twitter, and more commit to battling coronavirus misinformation

In response to the coronavirus outbreak, the social networking giant's content moderators were sent home, forcing the company to rely more on technology to police misinformation in the interim.

Facebook announced it would rely on automated tools more than usual as a result, and warned that this would mean more moderation errors, which would be rectified along the way.

According to the company's update, full-time employees would be trained to devote "extra attention" to highly sensitive content, such as material involving suicide, child exploitation, and terrorism.
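
To make that triage concrete, here is a minimal sketch, assuming a hypothetical `route_report` helper and made-up category labels; it is illustrative only, not Facebook's actual workflow:

```python
# Illustrative sketch only: hypothetical triage routing, not Facebook's actual system.
HIGH_PRIORITY = {"suicide_self_harm", "child_exploitation", "terrorism"}

def route_report(report: dict) -> str:
    """Send highly sensitive reports to human reviewers; automate the rest."""
    if report.get("category") in HIGH_PRIORITY:
        return "human_review_queue"  # full-time staff give this "extra attention"
    return "automated_review"        # classifiers handle it, possibly more slowly

print(route_report({"id": 1, "category": "terrorism"}))  # human_review_queue
print(route_report({"id": 2, "category": "spam"}))       # automated_review
```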

Acknowledging the decision on a media call covered by The Washington Post, Mark Zuckerberg said the shift could produce "false positives," that is, removals of content that should not be taken down.
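
A toy example can show why leaning on automation raises the false-positive risk Zuckerberg described. The keyword weights, threshold values, and `moderate` function below are all invented for illustration; no real platform moderates this crudely:

```python
# Illustrative sketch only: a hypothetical keyword-score moderator,
# not any real platform's system.
BANNED_PHRASES = {"miracle cure": 0.9, "drink bleach": 1.0, "5g causes covid": 1.0}

def misinformation_score(text: str) -> float:
    """Crude score: highest weight of any banned phrase found in the text."""
    lowered = text.lower()
    return max((w for p, w in BANNED_PHRASES.items() if p in lowered), default=0.0)

def moderate(text: str, threshold: float) -> str:
    """Remove the post when its score meets the removal threshold."""
    return "remove" if misinformation_score(text) >= threshold else "keep"

post = "Debunked: no, a 'miracle cure' for COVID-19 does not exist."
print(moderate(post, threshold=0.95))  # keep   (stricter threshold)
print(moderate(post, threshold=0.80))  # remove (false positive at looser threshold)
```

Lowering the threshold catches more genuine misinformation, but as the second call shows, it also sweeps up a debunking post: exactly the kind of false positive Zuckerberg warned about.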

"This means some reports will not be reviewed as quickly as they used to be, and we will not get to some reports at all," the company said.

YouTube's announcement struck the same note: "We will temporarily start relying more on technology to help with some of the work normally done by reviewers. This means automated systems will start removing some content without human review, so we can continue to act quickly to remove violative content and protect our ecosystem, while we have workplace protections in place. As we do this, users and creators may see increased video removals." Twitter announced a similar shift the same day.

Google, Facebook, and Microsoft all posted the full joint statement, which reads as follows:

We are working closely together on COVID-19 response efforts. We’re helping millions of people stay connected while also jointly combating fraud and misinformation about the virus, elevating authoritative content on our platforms, and sharing critical updates in coordination with government healthcare agencies around the world. We invite companies to join us as we work to keep our communities healthy and safe.

The reduced workforce will also alter the appeals process for users who believe their content was removed in error. People can still report that they disagree with Facebook's decision.

According to Facebook, stopping misinformation and harmful content involves the following measures:

  • Combating COVID-19 misinformation across our apps
  • Limiting misinformation and harmful content 
  • Banning ads for medical face masks (a sketch of such ad screening follows this list)
  • Prohibiting exploitative tactics in ads
  • Removing misinformation related to COVID-19 on Instagram
  • Banning ads for hand sanitizer, disinfecting wipes and COVID-19 test kits
  • Keeping our platform safe with remote and reduced content review
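
As flagged in the list above, here is a rough sketch of how keyword-based screening for banned ad products and exploitative tactics might look. The product list, claim list, and `review_ad` function are hypothetical stand-ins, not Facebook's actual ad review system:

```python
# Illustrative sketch only: hypothetical product-keyword ad screening.
BANNED_AD_PRODUCTS = [
    "medical face mask", "face mask", "hand sanitizer",
    "disinfecting wipes", "covid-19 test kit",
]
EXPLOITATIVE_CLAIMS = ["guaranteed cure", "limited supply", "prices doubling soon"]

def review_ad(ad_text: str) -> str:
    """Reject ads for banned products or ones using exploitative urgency claims."""
    lowered = ad_text.lower()
    if any(p in lowered for p in BANNED_AD_PRODUCTS):
        return "rejected: banned product category"
    if any(c in lowered for c in EXPLOITATIVE_CLAIMS):
        return "rejected: exploitative tactics"
    return "approved"

print(review_ad("N95 face mask, limited supply!"))         # rejected: banned product category
print(review_ad("Stay home and stream our yoga classes"))  # approved
```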

For content creators and publishers, all monetized content goes through brand safety reviews, including Instant Articles and videos with in-stream ads. Since Facebook's capacity to review new content is now limited, it won't be able to approve all content for monetization.
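
To illustrate the backlog effect, the following sketch models brand safety review as a capacity-limited queue; the function name, capacity figure, and item labels are invented for this example:

```python
# Illustrative sketch only: a hypothetical capacity-limited review queue.
from collections import deque

def run_brand_safety_reviews(submissions, daily_capacity):
    """Approve up to `daily_capacity` items for monetization; the rest stay pending."""
    queue = deque(submissions)
    approved, pending = [], []
    for _ in range(min(daily_capacity, len(queue))):
        approved.append(queue.popleft())  # reviewed and cleared for ads
    pending.extend(queue)                 # unreviewed: not yet eligible for monetization
    return {"approved": approved, "pending": pending}

result = run_brand_safety_reviews(
    ["instant_article_1", "in_stream_video_2", "instant_article_3"],
    daily_capacity=2,  # reduced reviewer capacity during remote work
)
print(result)  # instant_article_3 stays pending until capacity frees up
```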
