
Apple's NeuralHash Reverse Engineered By Developer


A developer claims to have reverse-engineered Apple's NeuralHash algorithm, which underpins the company's new Child Sexual Abuse Material (CSAM) detection technology.

According to Apple, “CSAM Detection enables Apple to accurately identify and report iCloud users who store known Child Sexual Abuse Material (CSAM) in their iCloud Photos accounts. Apple servers flag accounts exceeding a threshold number of images that match a known database of CSAM image hashes so that Apple can provide relevant information to the National Center for Missing and Exploited Children (NCMEC). This process is secure, and is expressly designed to preserve user privacy.”

The developer who reverse-engineered NeuralHash into Python code, Asuhariet Ygvar, claims the algorithm "already exists" in iOS 14.3 under obfuscated class names. Ygvar published the code to GitHub, allowing anyone to test it regardless of whether they use an Apple device.
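Ygvar's repository converts the extracted model to ONNX so the hash can be reproduced off-device. The sketch below shows the general shape of that pipeline, assuming the model and seed matrix have been extracted per the repository's instructions; the file names here are placeholders, not guaranteed paths.

```python
# A minimal sketch of the published pipeline. The model and seed-matrix
# file names are assumptions, standing in for files extracted per the
# repository's instructions.
import numpy as np
import onnxruntime
from PIL import Image

def neural_hash(image_path, model_path="model.onnx",
                seed_path="neuralhash_128x96_seed1.dat"):
    """Compute a 96-bit NeuralHash-style digest for an image."""
    session = onnxruntime.InferenceSession(model_path)

    # The seed file begins with a 128-byte header, followed by a
    # 96x128 float32 projection matrix.
    seed = np.frombuffer(open(seed_path, "rb").read()[128:],
                         dtype=np.float32).reshape([96, 128])

    # Preprocess: 360x360 RGB, pixels scaled to [-1, 1], NCHW layout.
    image = Image.open(image_path).convert("RGB").resize((360, 360))
    array = np.asarray(image).astype(np.float32) / 255.0 * 2.0 - 1.0
    array = array.transpose(2, 0, 1).reshape(1, 3, 360, 360)

    # Run the network, project its 128-dim embedding through the seed
    # matrix, and binarize: the sign of each value becomes one hash bit.
    output = session.run(None, {session.get_inputs()[0].name: array})[0]
    bits = "".join("1" if v >= 0 else "0" for v in seed.dot(output.flatten()))
    return "{:024x}".format(int(bits, 2))  # 96 bits as 24 hex digits

print(neural_hash("photo.jpg"))
```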

“If NeuralHash finds 30 or more matching hashes, the images are flagged to Apple for a manual review before the account owner is reported to law enforcement,” TechCrunch reported. “Apple says the chance of a false positive is about one in one trillion accounts.”
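Apple's published design performs this matching with on-device cryptography (private set intersection with threshold secret sharing) rather than a plain lookup, but the threshold behavior TechCrunch describes reduces to something like the following illustrative sketch:

```python
# Purely illustrative: Apple's real system matches hashes via private
# set intersection and threshold secret sharing, not a plain set lookup.
# Only the 30-match threshold logic is modeled here.
THRESHOLD = 30

def count_matches(user_hashes, known_hashes):
    """Count how many of a user's image hashes appear in the known database."""
    return sum(1 for h in user_hashes if h in known_hashes)

def flag_for_review(user_hashes, known_hashes):
    """An account is surfaced for manual review only at 30 or more matches."""
    return count_matches(user_hashes, known_hashes) >= THRESHOLD
```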

However, within hours of the code being published, Cory Cornelius, a research scientist at Intel Labs, reported the first "hash collision," which for this algorithm means two completely different pictures that produce the same hash, resulting in a false positive.
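Verifying a claimed collision amounts to hashing both images and comparing the outputs. Reusing the hypothetical neural_hash() helper sketched above (the file names are placeholders, not the actual colliding pair):

```python
# Reusing the hypothetical neural_hash() helper from the earlier sketch.
# The file names are placeholders, not the actual colliding images.
h1 = neural_hash("dog.png")
h2 = neural_hash("adversarial.png")

# Two visually unrelated images producing an identical 96-bit hash is a
# collision, i.e. a false positive for the matching system.
if h1 == h2:
    print(f"Collision: two different images share the hash {h1}")
```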

Kenneth White, a security researcher and founder of the Open Crypto Audit Project, said in a tweet: “I think some people aren’t grasping that the time between the iOS NeuralHash code being found and [the] first collision was not months or days, but a *couple of hours.*”

Some security researchers warn that knowledge of the algorithm will let people engineer false positives, causing users to be flagged over innocent images, as well as false negatives, images that evade detection despite matching the CSAM database.

TechCrunch attempted to contact Apple about the situation, but reported that an Apple spokesperson "declined to comment on the record."

“But in a background call where reporters were not allowed to quote executives directly or by name, Apple downplayed the hash collision and argued that the protections it puts in place — such as a manual review of photos before they are reported to law enforcement — are designed to prevent abuses,” TechCrunch reported. “Apple also said that the version of NeuralHash that was reverse-engineered is a generic version, and not the complete version that will roll out later this year.”

