Facebook Reveals About 1 in 1,000 Views Is Hate Speech

Hate speech is rampant across social media, no matter which platform you use. But for the first time in its history, Facebook has revealed just how widespread hate speech is on its own platform.

The company stated that this type of content accounted for 0.11% of content views from July to September. That works out to roughly 1 hate speech view in every 1,000 views (about 11 in every 10,000).

Facebook has about 2.7 billion monthly active users. At that scale, even a small percentage means an enormous amount of hate speech is being seen.

The company measured hate speech on its platform by sampling content viewed on Facebook and having reviewers label how much of that sample violates its hate speech policies.
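To make the idea concrete, here is a minimal sketch of how a view-weighted prevalence estimate like this could be computed. The function names and sample numbers are hypothetical, not Facebook's actual methodology; the key idea is that sampling views (rather than posts) weights popular content by how often it is actually seen.

```python
import random

def estimate_prevalence(views, is_violation, sample_size=10_000):
    """Estimate the share of content views that violate a policy.

    views        -- a list of viewed items, one entry per view, so a post
                    seen a million times appears a million times (this is
                    what weights the estimate by exposure)
    is_violation -- a labeling function returning True if the item breaks
                    the hate speech policy (at Facebook, human reviewers
                    play this role)
    """
    sample = random.sample(views, min(sample_size, len(views)))
    violating = sum(1 for item in sample if is_violation(item))
    return violating / len(sample)

# Illustration of the reported figure: 0.11% prevalence means about
# 1.1 hate speech views per 1,000 views sampled.
prevalence = 0.0011
print(f"Per 1,000 views: {prevalence * 1000:.1f} hate speech views")
```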

Facebook considers prevalence a critical metric because it reflects what users actually encounter on the site.

Counting views rather than posts matters because exposure is uneven: one post can go viral and be distributed widely in a short time, while other posts sit on the Internet and are barely seen by anyone.

Did the Number Increase? 

Because this is the first time the company has revealed numbers about how widespread hate speech is on its platform, there is no earlier figure to compare against, so it cannot say whether the number increased or decreased.

Facebook has taken steps to curb hate speech, such as banning QAnon content.

Unfortunately, it has struggled to do the same in non-English-speaking markets.

The US Congress grilled Mark Zuckerberg and Jack Dorsey over how they moderate content on their respective platforms.

They answered questions from Republicans about how they make decisions on violent speech and about allegations of political bias.

In an all-staff meeting, Zuckerberg said the platform had not suspended Steve Bannon's account because his violation was not severe enough to justify suspension, even though Bannon had called for the beheading of two US officials.

In recent months, Facebook has also faced scrutiny for allowing election fraud claims to spread.

Facebook's proactive detection rate, the share of rule-breaking content it finds before users report it, increased in most areas. The company credits improved AI tools and the expansion of its detection technologies to many more languages.
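As a rough sketch, a proactive detection rate like this could be computed as the share of actioned content that the platform found on its own, before any user report. The function and the counts below are illustrative assumptions, not Facebook's actual figures.

```python
def proactive_rate(found_proactively, found_via_reports):
    """Share of actioned content the platform found before any user report.

    Both arguments are counts of content that action was taken on, split
    by what triggered the review (a hypothetical breakdown).
    """
    total = found_proactively + found_via_reports
    return found_proactively / total if total else 0.0

# Illustrative numbers only: if 9,500 of 10,000 actioned items were
# caught by automated systems first, the proactive rate is 95%.
print(f"{proactive_rate(9_500, 500):.0%} found before a user report")
```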

The company also has 35,000 contractors working as content reviewers. They review and remove inappropriate content that its AI fails to pick up.

However, some of these moderators have developed PTSD and other mental health issues because of the job, and they have sued Facebook over it.

This week, a group of moderators signed a letter to Mark Zuckerberg saying the company has put their health at risk by forcing them back into the office despite the ongoing pandemic.

Facebook's full-time employees, by contrast, have the option to work from home until 2021.

The letter stressed how vital the moderators' work is: without them, Facebook could not respond quickly enough to stop content such as child abuse.

No matter how advanced Facebook's AI becomes, it is still not perfect. The platform needs human moderators to review posts and remove hate speech.

Author: Jane Danes

Jane has a lifelong passion for writing. As a blogger, she loves covering breaking technology news and top headlines about gadgets, content marketing, online entrepreneurship, and all things social media. She also has a slight addiction to pizza and coffee.
