The technology Apple is employing will scan images stored in iCloud Photos, searching for matches against previously identified “Child Sexual Abuse Material” (CSAM), the term now preferred over “child pornography.” The company claims its system is so accurate that it “ensures less than a one in one trillion chance per year of incorrectly flagging a given account.”
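In broad terms, this kind of detection compares a fingerprint of each photo against a database of fingerprints of already-identified material rather than inspecting the picture itself. The sketch below is a deliberately simplified illustration of that idea: it uses an ordinary cryptographic digest and a placeholder hash set, whereas Apple’s actual system relies on a perceptual image hash (NeuralHash) and on-device cryptographic matching, neither of which is reproduced here.

```python
import hashlib

# Minimal sketch of fingerprint matching, not Apple's implementation.
# A real system uses a perceptual hash that tolerates resizing and
# re-encoding; a plain SHA-256 digest is used here only to keep the
# example self-contained. KNOWN_FINGERPRINTS is a placeholder for the
# database of hashes of previously identified CSAM supplied by
# child-safety organizations.
KNOWN_FINGERPRINTS = {
    "placeholder-fingerprint-1",
    "placeholder-fingerprint-2",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest standing in for an image fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_material(image_bytes: bytes) -> bool:
    """True if the image's fingerprint appears in the known-material set."""
    return fingerprint(image_bytes) in KNOWN_FINGERPRINTS
```

Only images whose fingerprints match the known set would move on to the human-review step described below; everything else is never examined.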
When the system finds a match, a human reviewer will examine the image. If the reviewer confirms that the image qualifies as CSAM, the National Center for Missing and Exploited Children (NCMEC) will be notified and the user's account will be disabled immediately.
Apple said forthcoming versions of iOS and iPadOS, set for release later this year, will contain "new applications of cryptography to help limit the spread of CSAM online, while designing for user privacy." Most Apple users give little thought to cryptography, but the company already applies it, notably in Safari, which regularly checks derivations of a user’s passwords against a publicly available list of breached passwords to keep accounts secure.
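As a rough illustration of that password check, the snippet below compares a derivation of a password (a hash) against a stand-in set of breached-password derivations, so the plaintext itself is never looked up directly. This is a minimal sketch under that assumption; Apple’s actual password-monitoring protocol is more involved and is not reproduced here.

```python
import hashlib

# Minimal sketch, not Apple's protocol: only a derivation of the password
# (here, a SHA-256 digest) is ever compared against a list of credentials
# exposed in known breaches. BREACHED_DERIVATIONS is a placeholder for
# that publicly available breach dataset.
BREACHED_DERIVATIONS = {
    hashlib.sha256(b"password123").hexdigest(),
    hashlib.sha256(b"letmein").hexdigest(),
}

def derive(password: str) -> str:
    """Derive a fingerprint of the password; the plaintext is never compared directly."""
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

def appears_in_breach(password: str) -> bool:
    """True if the password's derivation matches a known-breached entry."""
    return derive(password) in BREACHED_DERIVATIONS

print(appears_in_breach("password123"))   # True: matches a breached entry
print(appears_in_breach("xk3!Vq9$Lmzr"))  # False for this placeholder set
```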
A Herculean effort and a game-changer
Apple faces a monumental task. The NCMEC reviews more than 25 million images a year, and the U.S. is one of the largest producers of this material.
In its analysis, the Canadian Centre for Child Protection reported that 67% of survivors of child sexual abuse material are affected differently by the distribution of their images than by the hands-on abuse itself.
“The reason for this is tragic; distribution goes on in perpetuity, and these images are permanent when they are constantly re-shared,” said Gina Cristiano of ADF Solutions, a mobile and digital forensics company.
"Apple's expanded protection for children is a game changer," said John Clark, the president and CEO of the National Center for Missing and Exploited Children. "With so many people using Apple products, these new safety measures have lifesaving potential for children."
“This will break the dam”
Despite Apple’s good intentions, some privacy experts are concerned that the company is crossing a line.
One of them, Matthew Green, a cryptography researcher at Johns Hopkins University, raised concerns that Apple's system could be used to frame innocent people: an attacker could send a target otherwise innocuous images crafted to trigger a match for child sexual abuse material, fooling Apple's algorithm and alerting law enforcement.
"Researchers have been able to do this pretty easily," Green said. "Regardless of what Apple's long term plans are, they've sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users' phones for prohibited content," Green said.
Green said the decision could also prompt governments to demand all sorts of information about their citizens.
"Whether they turn out to be right or wrong on that point hardly matters. This will break the dam — governments will demand it from everyone,” he said. "What happens when the Chinese government says, 'Here is a list of files that we want you to scan for?'" Green asked. "Does Apple say no? I hope they say no, but their technology won't say no."