
Apple releases new details on plan to monitor phones for child sexual content

The tech giant has faced sharp criticism over its plan to scan iCloud libraries

Apple has released new details about its plan to scan consumers’ devices for evidence of child sexual abuse material (CSAM). Following criticism of the idea, Apple now says it will only flag images that have been supplied by clearinghouses in multiple countries. 

Ten days ago, Apple first announced its plan to monitor images stored on iCloud Photos to search for matches of previously identified CSAM. Once Apple’s technology finds a match, a human will review the image. If that person confirms that the image qualifies as CSAM, the National Center for Missing and Exploited Children (NCMEC) would be notified and the user's account would be immediately disabled. 

Apple said its main goal in employing the technology is to protect children from predators. However, critics were concerned that the tech could be exploited by authoritarian governments or used by malicious parties to open a “backdoor” for wider surveillance. 

“While child exploitation is a serious problem, and while efforts to combat it are almost unquestionably well-intentioned, Apple's proposal introduces a backdoor that threatens to undermine fundamental privacy protections for all users of Apple products,” security and tech privacy advocates said in a letter pushing for Apple to rescind its plan. 

New details 

In an effort to ease privacy fears, Apple now says it will tune the system so that it only flags images supplied by clearinghouses in multiple countries, not just by the U.S.-based NCMEC, as it had originally announced.

Additionally, only accounts containing roughly 30 or more potentially illicit pictures will be flagged for human review. If reviewers confirm that the images are CSAM, authorities will be notified of their presence in the person's iCloud library. 

“We expect to choose an initial match threshold of 30 images,” Apple said in a Security Threat Model Review published Friday.

“Since this initial threshold contains a drastic safety margin reflecting a worst-case assumption about real-world performance, we may change the threshold after continued empirical evaluation of NeuralHash false positive rates – but the match threshold will never be lower than what is required to produce a one-in-one trillion false positive rate for any given account.”
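In practical terms, the threshold works as a count of matches against a list of known image hashes, with human review triggered only once the count reaches 30. The sketch below is a loose illustration of that idea, not Apple's actual system, which relies on an on-device perceptual hash (NeuralHash), private set intersection, and threshold secret sharing so the server learns nothing below the threshold; the function and variable names here are hypothetical.

```python
import hashlib

MATCH_THRESHOLD = 30  # Apple's stated initial match threshold


def image_hash(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash such as NeuralHash; an exact
    # cryptographic hash is used here only so the sketch runs.
    return hashlib.sha256(image_bytes).hexdigest()


def count_matches(library: list[bytes], known_hashes: set[str]) -> int:
    """Count how many images in a photo library match the known-hash list."""
    return sum(1 for img in library if image_hash(img) in known_hashes)


def should_flag_for_review(library: list[bytes], known_hashes: set[str]) -> bool:
    """Only accounts at or above the threshold are surfaced for human review."""
    return count_matches(library, known_hashes) >= MATCH_THRESHOLD
```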

Privacy concerns still present

Privacy advocates have argued that there’s no tweak that would render Apple’s CSAM surveillance system completely safe from exploitation or abuse. 

“Any system that allows surveillance fundamentally weakens the promises of encryption,” the Electronic Frontier Foundation’s Erica Portnoy said Friday. “No amount of third-party auditability will prevent an authoritarian government from requiring their own database to be added to the system.”

Apple has maintained that the technology will not scan users’ iCloud photo uploads for anything other than CSAM. Any government requests to “add non-CSAM images to the hash list” would be rejected, the company added. 
