YouTube Removes 400+ Channels Following Child Exploitation Controversy


YouTube is facing a new controversy surrounding its advertising business. The Alphabet-owned company has been talking to key ad agencies and major advertisers to address it, according to sources familiar with the matter cited by AdWeek.

Bloomberg reported that several brands, including Disney, Nestlé and Epic Games, had stopped buying ads over child safety concerns.

Blogger Matt Watson ignited the controversy after posting a 20-minute video last Sunday, detailing how pedophiles use YouTube's comment sections to connect with each other and share links to child pornography.

The story blew up. YouTube responded by calling its clients and agency partners to address their concerns, and then sent them a memo outlining ongoing and upcoming efforts to fix the issue.

Image: YouTube's memo on child safety.

The conference call and memo came after the company explained how it handles violations of its community guidelines. YouTube has also been pulling borderline content from its recommendations to stop the spread of questionable videos.

A YouTube spokesperson told AdWeek, “Any content—including comments—that endangers minors is abhorrent, and we have clear policies prohibiting this on YouTube. We took immediate action by deleting accounts and channels, reporting illegal activity to authorities, and disabling comments on tens of millions of videos that include minors. There’s more to be done, and we continue to work to improve and catch abuse more quickly.”

YouTube representatives said they have taken steps over the last 48 hours.

The company terminated more than 400 channels after scrutinizing their history of comments on the platform.

YouTube also worked with the National Center for Missing and Exploited Children to identify and report illegal activity.

The company also disabled comments on tens of millions of videos featuring minors, and removed dozens of videos that may put children at risk, even if the uploads themselves were innocent.

Precedents

YouTube has faced brand safety controversies before.

In 2017, AT&T, Verizon and Johnson & Johnson halted their YouTube ad spending after some of their ads ran alongside videos allegedly supporting ISIS.

Ad agency Havas also halted its UK clients’ YouTube and Google ad buys after similar reports appeared in The Times and The Guardian.

AT&T returned to YouTube in January this year. The telco giant cited YouTube’s ramped-up efforts to review videos through the CSAI Match program.

A few thoughts

YouTube has little choice but to take more aggressive action, even on content that seems innocuous on its own. If it wants to keep its advertising business from taking a hit, it must remove questionable material.

Will Facebook and other social platforms follow suit or reconsider their position on related content?

Social platforms have long distanced themselves from making these decisions, deferring to algorithms that show people more of what they want.

Disconcerting content like this may fuel a broader backlash against that hands-off approach.

Everyone now knows how social networks were exploited for meddling in the run-up to the 2016 US presidential election.

Although all the major platforms have stepped up their efforts on that front, they clearly have other, equally important areas to cover.

One thing is clear: social platforms can spread questionable content. They may have steered clear of policing it until now, but the pressure for more deliberate action is rising.



Author: Francis Rey

Francis is a voracious reader and prolific writer. His work appears on SocialBarrel.com and several other websites, covering social media, technology and other niches.
