In an effort to fight the spread of child sexual abuse material (CSAM) online, Google is launching an artificial intelligence toolkit that will help organizations quickly review large volumes of sexual abuse material and reduce the need for human inspection.
While countless organizations have been working tirelessly to report these horrific images, Google’s new initiative will make the job much easier.
“Today we’re introducing the next step in this fight: cutting-edge artificial intelligence (AI) that significantly advances our existing technologies to dramatically improve how service providers, NGOs, and other technology companies review this content at scale,” Google wrote in a company blog post.
Google will use deep neural networks to process images and then prioritize the most likely candidates for review.
“While historical approaches to finding this content have relied exclusively on matching against hashes of known CSAM, the classifier keeps up with offenders by also targeting content that has not been previously confirmed as CSAM,” Google wrote.
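The two-stage approach described above can be sketched in broad strokes: known material is caught by comparing cryptographic hashes against a database of previously confirmed content, while everything else is scored by a classifier so human reviewers see the most likely candidates first. The sketch below is purely illustrative; the function names, the hash choice, and the stand-in `score_image` are assumptions, not details of Google's actual Content Safety API.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hash an image's bytes for lookup against known material."""
    return hashlib.sha256(data).hexdigest()

def score_image(data: bytes) -> float:
    """Stand-in for a deep neural network classifier returning a score in [0, 1].

    A real system would run a trained image model here; this placeholder
    exists only so the sketch is runnable.
    """
    return (len(data) % 100) / 100.0

def triage(images, known_hashes):
    """Split images into confirmed hash matches and a classifier-ranked queue.

    `images` is a list of (name, bytes) pairs; `known_hashes` is a set of
    hex digests of previously confirmed content.
    """
    matches, queue = [], []
    for name, data in images:
        if sha256_hex(data) in known_hashes:
            matches.append(name)  # known content: no human review needed
        else:
            queue.append((score_image(data), name))
    # Reviewers see the highest-scoring unknown content first
    queue.sort(reverse=True)
    return matches, [name for _, name in queue]
```

The point of the ranking step is the one the blog post makes: hash matching alone only catches content that has already been confirmed, while the classifier surfaces likely new material at the top of the review queue.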
This new technology’s speed will allow children who are being sexually abused to be identified faster, and protected from further abuse. According to Google, it will help reviewers identify and act on 700 percent more content than before, while reducing the number of people who need to look at the images.
Google is offering the service for free to non-governmental organizations and industry partners through the Content Safety API.
“We, and in particular our expert analysts, are excited about the development of an artificial intelligence tool which could help our human experts review material to an even greater scale and keep up with offenders, by targeting imagery that hasn’t previously been marked as illegal material,” said Susie Hargreaves OBE, CEO of the Internet Watch Foundation. “By sharing this new technology, the identification of images could be speeded up, which in turn could make the internet a safer place for both survivors and users.”