Google wants to teach its algorithms to flag "offensive" content in search results and in videos posted to YouTube. The move comes after a member of Britain's Parliament earlier this week scolded Peter Barron, Google's VP of public affairs in EMEA, for the company's failure to remove content such as a video by a former Ku Klux Klan leader titled "Jewish People Admit Organizing White Genocide," which continued to surface in search results.
About 10,000 independent contractors worldwide, whom Google calls Quality Raters, will flag upsetting or offensive content to help ensure that this type of material does not surface prominently in results. The company wants to teach its algorithms to spot offensive and factually incorrect information more effectively, in an effort to uphold an agreement it made earlier this year.
The raters evaluate pages that appear in the top results for a given search, judging the quality of the answers against a set of guidelines. The searches come from a list of real queries that Google selects. The raters' data is used to improve Google's search algorithms, and the hope is that over time it will push low-quality pages down in the results.
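As a rough sketch of what one such rater judgment might look like as data, the example below models a single record in Python. The field names, the rating scale, and the helper function are illustrative assumptions, not Google's actual rater schema.

```python
from dataclasses import dataclass

# Hypothetical record of one Quality Rater judgment.
# Field names and the 1-5 rating scale are assumptions for illustration,
# not Google's actual rater data format.
@dataclass
class RaterJudgment:
    query: str                  # the real search the rater was given
    url: str                    # the result page being evaluated
    quality_rating: int         # e.g. 1 (lowest) to 5 (highest) answer quality
    upsetting_offensive: bool   # whether the rater applied the new flag

def flagged_fraction(judgments: list[RaterJudgment]) -> float:
    """Share of rated pages that received the Upsetting-Offensive flag."""
    if not judgments:
        return 0.0
    return sum(j.upsetting_offensive for j in judgments) / len(judgments)

# Example usage with made-up data.
sample = [
    RaterJudgment("history of the holocaust", "https://example.org/a", 5, False),
    RaterJudgment("history of the holocaust", "https://example.org/b", 1, True),
]
print(flagged_fraction(sample))  # 0.5
```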
The guidelines include a new section that explains what Google calls an Upsetting-Offensive flag and how to assign it. Flagged content includes material that promotes hate or violence against a group of people based on characteristics such as race or ethnicity, religion, gender, nationality or citizenship, disability, age, sexual orientation, or veteran status.
Content that will be flagged also includes material containing racial slurs or extremely offensive terminology, as well as graphic violence, including animal cruelty or child abuse. The flag likewise applies to explicit how-to information about harmful activities such as murder or human trafficking.
Quality Raters do not have the ability to change how search results are ranked directly. Their feedback will be used by Google's engineering team and its machine learning systems to improve search results.
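To make that last point concrete, here is a minimal, hypothetical sketch of how human-applied flags could serve as labels for training a text classifier. It uses scikit-learn with made-up example pages; Google's actual ranking and machine learning systems are not described in the article and are far more sophisticated than this.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: page text paired with the rater-assigned flag.
pages = [
    "historical overview of the civil rights movement",
    "rant promoting violence against a religious group",      # flagged
    "recipe for sourdough bread with step by step photos",
    "explicit instructions for carrying out human trafficking",  # flagged
]
flags = [0, 1, 0, 1]  # 1 = rater applied the Upsetting-Offensive flag

# Bag-of-words classifier: one plausible way rater labels could become a
# training signal that scores new pages, without raters touching rankings.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(pages, flags)

new_page = "slur-filled post promoting violence against an ethnic group"
prob_flagged = model.predict_proba([new_page])[0][1]
print(f"Estimated probability the page would be flagged: {prob_flagged:.2f}")
```

The design point the sketch illustrates is the one the article makes: the raters only supply labels, and any effect on rankings comes from the models trained on that feedback.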