New machine learning tools are helping Google make substantial progress in weeding out terror content on YouTube and other online properties, the company claimed this week.
In a blog post, members of the YouTube team said Google recently began developing and implementing “cutting-edge” machine learning technology to identify and remove extremist and terrorism-related videos from YouTube in a more scalable manner. The company’s early experience shows that the tools are far faster and more effective than humans at the task, they noted.
For instance, more than 75 percent of the videos removed from YouTube last month for containing violent and extremist content were taken down before a single user had flagged them as potentially offensive. Even though the tools are still being refined, in many cases they proved more accurate than human reviewers in identifying videos for removal, according to Google.
“Over the past month, our initial use of machine learning has more than doubled both the number of videos we’ve removed for violent extremism, as well as the rate at which we’ve taken this kind of content down,” the YouTube team noted in the blog.
Google’s efforts to weed out terror content online are making progress on other fronts as well, the YouTube post claimed. Over the past month, Google has begun working with more than 15 new nongovernmental organizations to develop better guidelines for identifying content for removal. Those organizations have brought expert knowledge on issues such as hate speech and radicalization, which Google is now using to identify content employed for terror recruiting and proselytizing.
On July 20, Google began using an approach called the Redirect Method, developed by its Jigsaw tech incubator, to direct people searching for extremist videos toward content debunking that propaganda.
In addition, over the next few weeks Google will start enforcing a tougher policy on videos that may not strictly violate its policies but still contain hate speech and other extremist content. Under the new policy, such videos will not be promoted, will not be allowed to display ads, and will not allow viewers to leave comments.
Google, like several other major internet companies including Facebook and Twitter, has been under growing pressure to do more to stop its online platforms from being used by terrorists and hate groups to recruit members and to disseminate propaganda.
And that pressure isn’t coming just from lawmakers and regulators. The families of victims of the terror attacks in Paris, at the nightclub in Orlando, Fla., and in San Bernardino, Calif., have sued Google, Facebook and Twitter for allegedly providing material support to terrorists by allowing their platforms to be used for propaganda.
In response, Google has taken a series of steps over the past year or so to more effectively weed out terror and other offensive content on YouTube and elsewhere. Most recently, Google General Counsel Kent Walker in June outlined four measures to fight terrorism online. The measures include expanded use of machine learning technologies and the use of more human experts to flag and remove terror and hate videos from YouTube.
This week’s blog post is an update on the progress Google has made since announcing those measures.