If you spend countless hours on YouTube, then you know that many creators struggle to keep their content within the community guidelines and, of course, to monetize their content. And it looks like YouTube will make things a bit trickier. The video-hosting service said that it would promote fewer videos containing misinformation and conspiracy theories.
YouTube is testing the new feature only in the United States, and explained that the platform will stop recommending so-called “borderline content”: videos that come close to violating its community guidelines but stop just short.
The company released a statement that read:
“We’ll begin reducing recommendations of borderline content and content that could misinform users in harmful ways — like videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11.”
Also, on Thursday, BuzzFeed published a new investigation into YouTube’s recommendation engine. It revealed that, after clicking on a nonpartisan political video, users could be watching “borderline content within just six videos”.
According to Pew, “Compounding this issue is the high percentage of users who say they’ve accepted suggestions from the Up Next algorithm — 81%,” and “One in five YouTube users between ages 18 and 29 say they watch recommended videos regularly, which makes the platform’s demonstrated tendency to jump from reliable news to outright conspiracy all the more worrisome.”
Now, YouTube has left many of us with questions, such as which kinds of videos will be considered borderline. But, in an interview, the company suggested that users should read its publicly posted community guidelines for answers about which kinds of videos may be considered inappropriate content. The company also said that human moderators and machine learning systems will apply the new policy.
First, human moderators will label videos to train the system to recognize borderline content; then the machine learning system will review videos automatically and decide whether they are good to go.
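YouTube has not described how this system works under the hood, but in spirit it is a human-in-the-loop classification pipeline: humans label examples, a model learns from them, and the model then scores new items on its own. The sketch below is a toy illustration of that general idea in Python using scikit-learn; the titles, labels, and threshold are all invented for the example and have nothing to do with YouTube’s real signals.

```python
# Purely illustrative sketch of a human-in-the-loop moderation pipeline.
# Nothing here reflects YouTube's actual system; the labels, features,
# and threshold are invented for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: human moderators label a handful of video titles
# (1 = borderline, 0 = fine). A real system would use far richer signals.
titles = [
    "Miracle cure doctors don't want you to know about",
    "Proof the earth is flat, wake up",
    "How to bake sourdough bread at home",
    "Beginner's guide to learning the guitar",
]
labels = [1, 1, 0, 0]

# Step 2: train a simple text classifier on the moderator-labeled data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(titles, labels)

# Step 3: the trained model scores new uploads automatically; anything
# above a (hypothetical) confidence threshold gets reduced recommendation
# rather than removal, mirroring how YouTube says borderline videos stay up.
THRESHOLD = 0.6
new_titles = ["This one fruit cures every illness", "Relaxing piano music"]
scores = model.predict_proba(new_titles)[:, 1]
for title, score in zip(new_titles, scores):
    action = "limit recommendations" if score > THRESHOLD else "recommend normally"
    print(f"{score:.2f}  {action}: {title}")
```

The key design choice this mirrors is that the model’s output feeds the recommendation system, not a takedown queue, which is why borderline videos remain watchable on the site.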
YouTube believes that with this method it could actually stop users who want to promote extremist content. But this means that legitimate creators could also get caught in the same net and watch their videos get flagged. Note that videos considered borderline will not be removed from the site: if users subscribe to a channel that has borderline content, they will still be able to watch those videos. The company said in a blog post: “We think this change strikes a balance between maintaining a platform for free speech and living up to our responsibility to users.”