Google is determined to rid its search engine of what it terms “upsetting-offensive” content. On Thursday the company announced it would start flagging such content by teaching its search algorithms to spot it better, tapping humans to help the algorithms deliver results that are more factually accurate and less inflammatory, reports The Verge.
The humans in question, according to Google, are 10,000 independent contractors who work as quality raters. The raters are shown search results based on real queries and asked to score them, working from guidelines provided by Google.
On Tuesday, Google presented the raters with a new charge: to hunt for “Upsetting-Offensive” content, including hate or violence directed at a group of people, racial slurs or offensive terminology, graphic violence (including cruelty to animals or child abuse), and explicit information about harmful activities.
In all of this, Google’s aim is to steer people with queries toward websites regarded as trustworthy and away from ones notorious for peddling false information or hate speech.
Google wants a near-perfect job, so merely being upsetting won’t be enough for its raters to flag a search result. The search giant gave an example of a search for “Holocaust history.” One result, a Holocaust-denial site, deserves the flag; another, a page from The History Channel, might be upsetting because of its subject matter but is a “factually accurate source of historical information” and does not promote the hateful content described above.
To be clear, being hit with the Upsetting-Offensive flag won’t immediately demote a search result. Instead, Google says the flags will serve as data points for its employees as they continue to iterate on the search algorithms. Over time, the algorithm should learn to flag upsetting content on its own, which Google says would affect search rankings in cases where it believes users are after “general learning.”
Speaking to Search Engine Land about the change, Google search engineer Paul Haahr said:
“We will see how some of this works out. I’ll be honest. We’re learning as we go … We’ve been very pleased with what raters give us in general. We’ve only been able to improve ranking as much as we have over the years because we have this really strong rater program that gives us real feedback on what we’re doing.”
It is a good move, no doubt, though ridding the search engine of fake news outright would address the problem more directly. Welcome as this step is, the world’s number-one search engine still has a lot to do to make its services safe for everyone.