Apple and The Cryptrickery Factory #2

Does Apple’s proposed content scanning technology turn iPhones and Mac computers into “compliance assistants” that save us from data-hungry regulators? Or is it one of the most dangerous global surveillance systems the world has ever seen?
A New Kind of Encryption
The new class of encryption that would solve the privacy/compliance contradiction is called “homomorphic encryption.” The first ideas for homomorphic encryption go back to the 1970s, when cryptographer Ronald Rivest and his colleagues proposed a new type of cryptography that can process encrypted data, e.g. read it, write to it, and insert values into it, without knowing the decryption key¹⁵. This means that code can perform computations on encrypted data without knowing either the input or the output. But when the result of such an operation is decrypted with the correct key, it is identical to the result that would have been produced had the same operations been performed on the unencrypted data.
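To make this tangible, here is a minimal sketch of the Paillier cryptosystem, a classic partially homomorphic scheme that supports additions on encrypted numbers. The key sizes are deliberately tiny and the code is illustrative only, but it shows the essential property: a party holding only ciphertexts can compute an encryption of the sum without ever seeing the key or the plaintexts.

```python
# Minimal sketch of the Paillier cryptosystem, an additively homomorphic scheme.
# Toy key sizes for illustration only; real deployments use 2048-bit or larger moduli.
import math
import secrets

# Key generation with two small primes (insecure, purely for demonstration).
p, q = 1009, 1013
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # Carmichael's lambda of n
g = n + 1                      # standard simplified generator
mu = pow(lam, -1, n)           # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    """Encrypt a plaintext m < n under the public key (n, g)."""
    r = secrets.randbelow(n - 2) + 2
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 2) + 2
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Decrypt a ciphertext with the private key (lam, mu)."""
    l_value = (pow(c, lam, n2) - 1) // n
    return (l_value * mu) % n

# A third party that only ever sees c1 and c2 can produce an encryption of
# m1 + m2 by multiplying the ciphertexts, without learning any of the values.
c1, c2 = encrypt(20), encrypt(22)
c_sum = (c1 * c2) % n2
print(decrypt(c_sum))  # 42
```

Fully homomorphic schemes extend this idea from additions to arbitrary computations, which is what the applications described below rely on.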
Homomorphic encryption offers something substantially new in the information security domain: “encryption in use”, in addition to the two already existing modes, “encryption at rest” and “encryption in transit.” Encryption in use is exactly what is needed to meet regulatory requirements banning illegal material while at the same time keeping prying eyes away from confidential information that two parties want to exchange.
But until recently, homomorphic encryption was not useful in the real world. Even simple computations on encrypted data would take weeks or months to complete.
This all changed around 2010, when faster processors and better algorithms made it possible to perform homomorphic computations within reasonable timeframes. Such computations can still be a factor of a million slower than the same operations on unencrypted data. Yet, thanks to massive resource injections by companies such as Microsoft, Facebook, and Apple, impressive real-world applications have been developed in the past few years.
For instance, users can store encrypted files on a remote file server and search them: without knowing the decryption key for the files or the content of the query, the server can still return the files that match the search. Another example: it is now possible to scan large encrypted medical datasets for particular diseases without revealing the personal information in patient records.
With “encryption in use” within reach, computations can now be split up between multiple devices, e.g., mobile devices and powerful computers in a data center, without privacy being affected at any stage.
Private Set Intersection
Homomorphic encryption allows for a new type of functionality in which tasks are shared between computational agents while the data for that computation remains encrypted, and none of the agents has access to all of that data. A typical scenario looks like this: two agents, each holding a set of elements that remains secret from the other, ask an external agent to calculate the intersection of their two sets. These data agents reveal nothing to the computational agent, as the sets they share with it are encrypted. The data agents learn nothing about each other’s sets, except which items are held by both of them. For its part, the computational agent knows nothing about either the sets or their intersection; it just performs the computations on the encrypted datasets and returns an encrypted result to both parties.
This is called “private set intersection” (or PSI for short). It is now a mature, practical technology, used for things that seemed utterly impossible until quite recently, such as privacy-preserving location sharing.
For example, it makes proximity alerting possible: informing two parties that they are in each other’s vicinity¹⁶. Facebook uses PSI to measure the effectiveness of its online advertising business: it compares the list of people who have seen an ad with the list of those who have completed a transaction, without revealing anything about the individuals to advertisers¹⁷.
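The details of production PSI protocols vary, but the core trick can be illustrated with a simple Diffie-Hellman-style sketch between two parties, a simplification of the outsourced, three-party setup described above. All names and parameters below are illustrative. Each party hashes its items into a mathematical group and raises them to a secret exponent; because exponentiation commutes, doubly blinded values match exactly when the underlying items match, while singly blinded values reveal nothing.

```python
# Sketch of a Diffie-Hellman-style private set intersection (DH-PSI).
# Toy parameters: the modulus is the Mersenne prime 2^521 - 1 for convenience;
# real protocols use prime-order elliptic-curve groups, shuffle the blinded
# values, and run between separate machines rather than inside one script.
import hashlib
import secrets

P = 2**521 - 1  # a large known prime, used here only as an easy-to-write modulus

def hash_to_group(item: str) -> int:
    """Map an item to an element of the multiplicative group mod P."""
    digest = hashlib.sha256(item.encode()).digest()
    return int.from_bytes(digest, "big") % P

alice_set = {"IMG_0001", "IMG_0002", "IMG_0003"}
bob_set = {"IMG_0002", "IMG_0003", "IMG_0004"}

a = secrets.randbelow(P - 3) + 2  # Alice's secret exponent
b = secrets.randbelow(P - 3) + 2  # Bob's secret exponent

# Alice blinds her items with her exponent and sends them to Bob.
alice_blinded = {x: pow(hash_to_group(x), a, P) for x in alice_set}

# Bob blinds Alice's values a second time and blinds his own items once.
alice_double = {x: pow(v, b, P) for x, v in alice_blinded.items()}
bob_blinded = [pow(hash_to_group(y), b, P) for y in bob_set]

# Alice adds her exponent to Bob's blinded items and compares:
# H(x)^(a*b) equals H(y)^(b*a) when x equals y (and, except with
# negligible probability, only then), so only matching items surface.
bob_double = {pow(v, a, P) for v in bob_blinded}
intersection = {x for x, v in alice_double.items() if v in bob_double}
print(intersection)  # {'IMG_0002', 'IMG_0003'}
```

Real deployments additionally shuffle the blinded values so that their positions leak nothing; the sketch shows only the commutative blinding at the heart of the idea.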
Apple recently implemented PSI in the Safari web browser to alert users to leaked passwords without Apple knowing the passwords themselves. This is realized with a private set intersection calculation that compares the encrypted passwords in Safari’s keychain with an encrypted list of hundreds of millions of leaked passwords.
Avoiding Datacenter Disclosure
If regulators require content scanning, what could be more suitable for protecting personal privacy than applying private set intersection? Regulators would only learn about content that matches a list of known illegal material and would gain no insight into any other information. The computational tasks necessary for the content scanning and the private set intersection can easily be divided between local devices owned by the content owner and computational agents in data centers. Until the advent of homomorphic encryption, the only other option was to decrypt data upon arrival in the data center to allow scanning, which amounts to full disclosure of all the data: to the immediate service provider, and possibly to external service providers that perform the content scanning.
Microsoft is one of the biggest providers of on-demand CSAM assessment services. Its image-identification technology for detecting child pornography and other illegal material, called “PhotoDNA”, was developed in 2009¹⁸. Cloudflare, a US-based web security company best known for its DDoS mitigation services, has also recently started to offer CSAM scanning services¹⁹. All of these tools are proprietary. To most internet or service providers, content scanning is essentially a black box that takes an unencrypted image as input and outputs an answer to the question of whether the content contains targeted material.
When do these actors get access to user data? Usually, access is granted when the content arrives in a data center. It is comparable to how things work at airports: passengers are scanned at security lines upon entering the check-in area. For a data center, this means that data encrypted on the user’s device for transit is decrypted upon arrival. The information is then sent through the content scanning system, after which it is usually encrypted again before it is stored or forwarded.
How long do the various actors have access to the data? This varies per provider and service. Some providers may be privacy-conscious and keep to a minimum the number of actors that get access to the data, as well as when and for how long they have it. Others may be willing, or may be forced, to share data for longer, sometimes even permanently. For instance, ICT providers in China must hand over the encryption keys used for long-term data storage to the authorities. This allows full access to all stored data, not just during the content scanning phase.
Personal Compliance Assistant
With “encryption in use” now available as a new, third application domain for encryption, it is possible for the first time to solve the problem of “datacenter disclosure” while still being able to flag illegal material.
Rather than acting as a passive data source that hands over all information to remote data centers, the client device would, in this scenario, itself select the possibly illegal content that must be examined further. The idea is not to give data centers access to all data, but only to content that is highly likely to be illegal.
With the device you hold in your hands becoming your trusted personal compliance assistant, the idea is to allow it to continuously, automatically, and without any user interaction scan all images stored on the device or on any local storage medium connected to it. The local device compares the content with a list of known illegal material and silently alerts the authorities when such material is found.
How does it work? First, the device computes a digital fingerprint of each image. The error margins and weaknesses of the algorithm used need to be considered, so we will briefly touch upon these. Then, fingerprints that match, which count as evidence of illegal material, should somehow raise alerts, ideally without the owner of the iPhone being aware of them. Raising silent alerts is a whole subject in itself: much of the system’s complexity arises from managing just one simple question: who knows what, and when?
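Stripped of all cryptography, the client-side flow looks roughly like the sketch below. This is a deliberately naive illustration with made-up names, and a cryptographic hash stands in for the perceptual fingerprint only to make the sketch runnable. In the actual design the comparison is wrapped in the private set intersection machinery described above, so the device itself never learns whether an image matched.

```python
# Naive, non-private sketch of the on-device scanning flow; all names are
# illustrative. A cryptographic hash replaces the perceptual fingerprint here
# and has none of NeuralHash's tolerance for resizing or re-encoding.
import hashlib
from typing import Iterable, List, Set

def fingerprint(image_bytes: bytes) -> int:
    """Stand-in for a perceptual hash such as NeuralHash."""
    return int.from_bytes(hashlib.sha256(image_bytes).digest()[:12], "big")

def scan_device(images: Iterable[bytes], known_hashes: Set[int]) -> List[int]:
    """Flag fingerprints that appear in the database of known material."""
    return [h for h in (fingerprint(img) for img in images)
            if h in known_hashes]  # real system: a blinded PSI membership test

# Toy run: one image that is in the known-material database, one that is not.
known_db = {fingerprint(b"example-of-known-material")}
flagged = scan_device([b"holiday-photo", b"example-of-known-material"], known_db)
print(len(flagged))  # 1 -> reported silently, never shown to the device owner
```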
Creating an Image’s Digital Fingerprint
The selectivity requirements for any useful Child Sexual Abuse Material scanner are high. For example, any image scanner on a phone that I use should correctly classify my pictures of our naked daughters enjoying their bathtub parties, of us taking a skinny dip during our last vacation, or of Gustav Klimt’s “Danae” (which happens to be visible in quite a few of our snapshots, because we have a large reproduction of it hanging in our bedroom). None of this should be flagged as illegal child pornography material.
On the other hand, the content scanning algorithm should be robust enough to recognize child sexual abuse material even if the owner tries to conceal the true nature of the images, using obfuscation operations like resizing, converting from color to black and white, cropping, or compressing the image²⁰.
The algorithm that Apple engineers created for this is called “NeuralHash.” NeuralHash analyzes the content of an image and converts the result to a unique number, a kind of fingerprint called a “hash.” It does the calculation using neural networks that perform perceptual image analysis. This analysis does not classify, let alone judge, the content of the images, but it is capable of creating the same unique hash for visually similar images. Apple writes:
“Apple’s perceptual hash algorithm […] has not been trained on CSAM images, e.g., to deduce whether a given unknown image may also contain CSAM. It does not contain extracted features from CSAM images (e.g., faces appearing in such images) or any ability to find such features elsewhere. Indeed, NeuralHash knows nothing at all about CSAM images. It is an algorithm designed to answer whether one image is the same image as another, even if some image-altering transformations have been applied (like transcoding, resizing, and cropping).”
Until now, Apple has not provided many details of the NeuralHash algorithm. We do not know how selective it is. How big must the visual differences between two images be for the algorithm to recognize them as distinct? Does the selectivity depend on the visual properties of the images being compared, e.g., does it work better for images of humans than for landscapes?
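Whatever NeuralHash does internally, the general idea of a perceptual hash can be illustrated with a much simpler textbook algorithm, difference hashing (dHash): shrink the image, keep only coarse brightness gradients, and pack them into a short bit string. Small edits then change few or no bits, while genuinely different images produce very different bit strings. The sketch below is emphatically not NeuralHash, just the classic technique.

```python
# Minimal dHash sketch (not NeuralHash): a classic perceptual hash that is
# robust to small edits such as resizing or recompression.
from PIL import Image  # pip install Pillow

def dhash(path: str, hash_size: int = 8) -> int:
    """Return a 64-bit perceptual hash of the image at `path`."""
    # Shrink to (hash_size + 1) x hash_size grayscale pixels; this throws away
    # detail and keeps only the coarse structure of the image.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            # One bit per pixel pair: does the image get brighter to the right?
            bits = (bits << 1) | (1 if right > left else 0)
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits; a small distance means 'visually similar'."""
    return bin(h1 ^ h2).count("1")

# Usage (hypothetical file names): a resized or recompressed copy should land
# within a few bits of the original, while an unrelated photo should not.
# print(hamming_distance(dhash("photo.jpg"), dhash("photo_resized.jpg")))
```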
It is frustrating that Apple won’t make the details of such a crucial aspect of the new system public. As long as the algorithm is not public, it cannot be subjected to public scrutiny, deconstruction, or attack.
I wonder why this is the case. It seems to be something specific to this problem space, as the other prominent provider of technology for the automated detection of Child Sexual Abuse Material, Microsoft, with its PhotoDNA, has also never publicly released details of its algorithm.
One possibility is that the code is proprietary and its inventors are simply trying to defend their intellectual property. It is also conceivable that providing more details would create an irresponsibly large attack surface. For instance, it could lead to an unacceptably high risk of reverse hash lookups, making it possible to determine which images are part of the set of illegal material.
A third reason might be that the algorithm has some technical fragility that makes it susceptible to deception. This could happen, for instance, through “phantom images”: small modifications inserted into images that are picked up by the algorithm while remaining (almost) invisible to the human eye²¹.
Error Tolerance
Like any image classification algorithm, NeuralHash is susceptible to errors. For example, the algorithm can produce false positives (two different images creating the same hash, called “hash collisions”) and false negatives.
A developer has already reverse-engineered the NeuralHash algorithm. He found the neural network model files hidden in the operating system of current iPhones and Apple computers, imported them into a general neural network framework, and worked out how the algorithm operates. With that setup, he managed to create the first known hash-collision images.
Apple stated that such collisions were an expected outcome and that the NeuralHash model files shipped with previous versions of iOS and macOS do not reflect the current technical state²².
So, how good is it then? Apple initially stated that “the algorithm has an extremely low error rate of less than one case in one trillion images per year”²³. In later released information, the accuracy was stated as: “We empirically assessed NeuralHash performance by matching 100 million non-CSAM photographs against the perceptual hash database created from NCMEC’s (The National Center for Missing & Exploited Children) [collection], obtaining a total of 3 false positives”²⁴.
Such statements remain marketing as long as we don’t have the precise details of the empirical test. For example, we don’t know the nature of the tested images or what range of variance and distortion was tested. It is not even known how large NCMEC’s CSAM collection is. (Based on what the NCMEC website says, the organization has scanned around 300 million images²⁵. If we assume that a few percent of these were positively classified as child abuse images, the total collection would number in the millions; informal statements say it is more like seven million²⁰.)
Apple has sold 1.65 billion devices, around half of which are in use²⁶. Assuming that the average user takes a few thousand photos per year, even the error rate Apple is quoting would lead to hundreds or even thousands of users per year being incorrectly flagged as having inappropriate images on their device. Apple has created a manual review process for these cases, of which we don’t have any meaningful details.
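As a back-of-envelope illustration, here is roughly how such an estimate comes about. Every input below is an assumption rather than a fact, the calculation uses the per-image rate from Apple’s empirical test, and it ignores the account-level match threshold Apple says it applies before anything is reported, which is designed to push the effective rate much lower.

```python
# Back-of-envelope estimate; every number below is an assumption, not a fact.
active_devices = 1_650_000_000 // 2     # roughly half of the devices sold assumed in use
photos_per_user_per_year = 2_000        # "a few thousand images per year"
false_positive_rate = 3 / 100_000_000   # 3 false positives per 100 million test images

expected_false_flags = active_devices * photos_per_user_per_year * false_positive_rate
print(f"{expected_false_flags:,.0f} falsely flagged images per year")
# -> tens of thousands of flagged images per year under these assumptions,
#    i.e. at the very least the hundreds or thousands of affected users
#    mentioned above, each case ending up in Apple's manual review process.
```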
Apple has encouraged the independent security research community to validate and verify its security claims²⁷. Yet, ironically, it has been locked in a protracted legal battle with Corellium, a company that provides tools for doing exactly that. Corellium offers an iPhone emulator that allows a level of inspection of the hardware, software, and network traffic that is hard to achieve with a physical device. It has announced that it will award grants to researchers who want to inspect the new image scanning technique²⁸.
If you would like to read more, please continue to part three of this story.
References
(15) R. Rivest, L. Adleman, and M. Dertouzos. On data banks and privacy homomorphisms. In Foundations of Secure Computation, pages 169–180, 1978.
(16) Arvind Narayanan, Narendran Thiagarajan, Mugdha Lakhani, Michael Hamburg, Dan Boneh, et al. Location privacy via private proximity testing. In NDSS, volume 11, 2011; Xuan Xia et al., PPLS: A Privacy-Preserving Location-Sharing Scheme in Vehicular Social Networks, arXiv:1804.02431v1 [cs.CR], April 6, 2018.
(17) B. Pinkas, T. Schneider, G. Segev, and M. Zohner. Phasing: Private set intersection using permutation-based hashing. In USENIX Security Symposium, USENIX, 2015.
(18) https://www.microsoft.com/en-us/photodna
(19) For instance, Cloudflare’s CSAM scanning service.
(20) Paul Rosenzweig, ‘The Law and Policy of Client-Side Scanning’
(21) Video detection algorithms can be easily hacked by inserting images that would appear for a few milliseconds. See the “split-second phantom attacks,” on Advanced Driver Assistance Systems. https://www.nassiben.com/phantoms
(22) On the latest version of macOS or a jailbroken iOS (14.7+), it is possible to copy the model files from /System/Library/Frameworks/Vision.framework/Resources/ (on macOS) or /System/Library/Frameworks/Vision.framework/ (on iOS). See AppleNeuralHash2ONNX.
(23) Apple, “Expanded Protections for Children”, p. 8.
(24) Apple, “Security Threat Model Review of Apple’s Child Safety Features”, p. 10; see also “Apple says collision in child-abuse hashing system is not a concern”, The Verge, August 18, 2021.
(25) https://www.missingkids.org/theissues/csam
(26) https://www.theverge.com/2021/1/27/22253162/iphone-users-total-number-billion-apple-tim-cook-q1-2021
(28) https://www.corellium.com/blog/open-security-initiative