Apple Clears the Air on Its Child Abuse Detection Tool
Days after Apple introduced Neuralmatch, its tool for detecting Child Sexual Abuse Material (CSAM), the company has released an FAQ document to address the confusion the announcement caused among the public at large. The tool, planned for rollout later this year, set off an internet furor, drawing both praise and criticism.
While some were happy that the company was trying to tackle the problem of child pornography, others called the tool a slippery slope that could in the future allow governments to snoop on users.
Last week, we reported that Apple's Neuralmatch would enable the company to scan pictures on users' iPhones and in iCloud, looking for "known" imagery of child pornography. If CSAM were found on a device, the company would conduct a human review of the material, disable the user's account and report it to the National Center for Missing and Exploited Children (NCMEC). Experts argued that such an invasive tool stands in sharp contrast to the company's stated commitment to user privacy and could be repurposed to look for other material at the request of governments.
Some of the confusion appears to stem from the planned release of another Apple tool, Communication Safety in Messages. This tool is aimed at pre-teen users of Apple devices and is designed to detect sexually explicit images shared over the company's messaging service, iMessage. For it to function, a device must be registered to an individual aged 12 or younger and set up as a child account in Apple's Family Sharing feature.
When these preconditions are met, iPhones will analyze images sent or received through iMessage and blur any that are found to be sexually explicit. The app will also warn the child not to open the image and offer resources to help them, Apple said in its FAQ document. As an added layer of protection, the child is told that if they go ahead and view the image anyway, a parent will be notified.
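As a rough sketch of that flow, and purely for illustration since Apple has not published the feature's internals, the checks could be ordered like this (all function and parameter names here are hypothetical):

```python
# Purely illustrative flow for Communication Safety in Messages, based on the
# behaviour described above. Names and structure are hypothetical.
def handle_incoming_image(account_age: int, is_family_sharing_child: bool,
                          flagged_as_explicit: bool) -> list[str]:
    """Return the actions the device would take for an incoming iMessage image."""
    actions: list[str] = []
    # The feature only applies to child accounts (aged 12 or under) set up in
    # Family Sharing, per the preconditions in the article.
    if account_age > 12 or not is_family_sharing_child:
        return actions
    if flagged_as_explicit:
        actions.append("blur the image")
        actions.append("warn the child and offer help resources")
        # A parent is alerted only if the child chooses to view it anyway.
        actions.append("notify a parent if the child views the image")
    return actions

# Example: a 10-year-old child account receiving a flagged image.
print(handle_incoming_image(10, True, True))
```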
Apple's CSAM tool does not examine every image stored on an iPhone, only those being uploaded to iCloud. To detect CSAM, Apple does not scan the photos themselves; it uses a hash, a kind of mathematical fingerprint of each image, and compares it against hashes of known CSAM. If a user does not use iCloud, no hashes from the photo library are compared at all. The source hashes for known CSAM come from the NCMEC. Once CSAM is detected on an iCloud account, Apple will conduct a human review before reporting the user to the NCMEC, the company said.
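At a high level, the matching step can be pictured as comparing a fingerprint of each upload-bound image against a set of known fingerprints. The sketch below is only a conceptual illustration, not Apple's actual system, which relies on perceptual hashing and on-device cryptographic protocols that are not reproduced here; every name in it is invented for the example.

```python
# Conceptual sketch only: illustrates hash-list matching in general, not
# Apple's matching algorithm or its private database handling.
import hashlib

# Hypothetical set of known-CSAM fingerprints. In the real system these come
# from NCMEC and are perceptual hashes, not plain cryptographic digests.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a fingerprint of the image. A production system would use a
    perceptual hash that survives resizing and re-encoding; SHA-256 here is
    a stand-in for illustration only."""
    return hashlib.sha256(image_bytes).hexdigest()

def count_matches(images_queued_for_icloud: list[bytes]) -> int:
    """Compare only images queued for iCloud upload against the known hashes,
    mirroring the article's point that nothing is checked when iCloud Photos
    is switched off."""
    return sum(fingerprint(img) in KNOWN_HASHES
               for img in images_queued_for_icloud)
```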
Addressing the concern that the NCMEC hash list could be replaced with, or padded by, hashes supplied by a government agency, Apple said that, as it has in the past, it will refuse such demands. And if the NCMEC database were ever misused to slip in non-CSAM hashes, Apple is confident its human review would catch it. The company also says the tool is highly accurate, putting the odds of incorrectly flagging an account at one in a trillion per year. In any case, no reporting is automated, so a user will not be flagged, nor an account suspended, without a human review.
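Put schematically, and only as an illustration of the review gate the FAQ describes rather than Apple's internal pipeline, a report is sent only after a human confirms an automated match (all names below are hypothetical):

```python
# Illustrative sketch of the human-review gate described in Apple's FAQ,
# not the company's actual pipeline.
from dataclasses import dataclass

@dataclass
class MatchEvent:
    account_id: str
    human_confirmed_csam: bool = False  # set only after manual review

def should_report_to_ncmec(event: MatchEvent) -> bool:
    """An automated hash match alone never triggers a report or a suspension;
    a human reviewer has to confirm the material first."""
    return event.human_confirmed_csam
```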
These updates will be rolled out only in the U.S. for now, and the company will review other regions on a case-by-case basis before expanding, Gizmodo reported. Not everyone is convinced, however: Apple has reportedly bowed to government pressure in the past to keep its business running in some markets.