# Client-side scanning

Links: [[E2EE]], [[Mass Surveillance]]

### Thread by [[Matthew Green]]

https://twitter.com/matthew_d_green/status/1423091097933426692?s=20

Unroll: https://threadreaderapp.com/thread/1423091097933426692.html

See also: [['Black-Box Attacks on Perceptual Image Hashes with GANs']]

This means that, depending on how the hashes work, it might be possible for someone to craft problematic images that “match” entirely harmless ones, like political images shared by persecuted groups. Those harmless images would then be reported to the provider.

I can’t imagine anyone would do this just for the lulz (cough, 8chan), just to have fun with someone they don’t like. But there are some really bad people in the world who would do it on purpose. And the problem is that none of this technology was designed to stop this sort of malicious behavior: in the past it was always used to scan *unencrypted* content. Deployed in encrypted systems (and that is the goal), it opens up an entirely new class of attacks.

Initially Apple is not going to deploy this system on your encrypted images. They’re going to use it on your phone’s photo library, and only if you have iCloud Backup turned on. So in that sense, “phew”: it will only scan data that Apple’s servers already have. No problem, right?

But ask yourself: why would Apple spend so much time and effort building a system *specifically* designed to scan images that exist (in plaintext) only on your phone, if they didn’t eventually plan to use it for data that you don’t share in plaintext with Apple?

Regardless of what Apple’s long-term plans are, they’ve sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content. That’s the message they’re sending to governments, competing services, China, you.

Whether they turn out to be right or wrong on that point hardly matters. This will break the dam: governments will demand it from everyone. And by the time we find out it was a mistake, it will be way too late.

### Update: The scanning actually uses a [[Neural Matching Function]]

https://threadreaderapp.com/thread/1423246871888338953.html

Also, it will use a two-party process where your phone interacts with Apple’s server (which has the unencrypted database), and will only trigger an alert to Apple if multiple photos match its reporting criteria.

I don’t know anything about Apple’s neural matching system, so I’m hopeful it’s designed only to find known content and not new content! But knowing this uses a neural net raises all kinds of concerns about [[Adversarial Machine Learning|Adversarial ML]], concerns that will need to be evaluated.

Apple should commit to publishing its algorithms so that researchers can try to develop “adversarial” images that trigger the matching function, and see how resilient the tech is. (I will be pleasantly but highly surprised if Apple does this.)
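To make the collision worry at the top of this note concrete: perceptual hashes are matched by *distance threshold*, not exact equality. Here's a minimal sketch of one generic scheme (an “average hash”, using Pillow) — this is not Apple's algorithm, and the `MATCH_THRESHOLD` value is a made-up illustration. The tolerance built into the match is exactly what gives an attacker room to craft a harmless-looking image whose hash lands near a prohibited one.

```python
# Minimal perceptual-hash matching sketch (generic "average hash"/aHash),
# NOT Apple's algorithm. Requires Pillow: pip install Pillow
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size grayscale, threshold each pixel at the mean."""
    img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # a 64-bit hash when size=8

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

MATCH_THRESHOLD = 5  # hypothetical; real systems tune this tolerance

def matches(path_a: str, path_b: str) -> bool:
    # A "match" means the hashes are merely *close*, not identical --
    # which is what makes near-collision attacks plausible.
    return hamming(average_hash(path_a), average_hash(path_b)) <= MATCH_THRESHOLD
```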
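On the “only trigger an alert if multiple photos match” criterion: one standard way to enforce such a threshold cryptographically is secret sharing. The toy below *assumes* a Shamir-style scheme in which each matching photo releases one share of a per-account key, so the server can reconstruct the key (and raise an alert) only once it holds at least `T` shares. The field size, threshold, and structure here are illustrative assumptions, not Apple's actual protocol.

```python
# Toy Shamir secret-sharing sketch of threshold reporting (illustrative only).
import random

PRIME = 2**127 - 1  # a Mersenne prime; the demo field we work in
T = 3               # hypothetical reporting threshold

def make_shares(secret: int, n: int, t: int = T):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Below the threshold, interpolation yields garbage (with overwhelming
# probability), so isolated matches reveal nothing to the server:
account_key = 123456789
shares = make_shares(account_key, n=10)
assert reconstruct(shares[:T]) == account_key      # T matches -> alert possible
assert reconstruct(shares[:T - 1]) != account_key  # below threshold -> nothing
```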
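And on developing “adversarial” images that trigger the matching function: if the matcher is (or can be approximated by) a differentiable neural embedding, the textbook attack is to gradient-descend a benign image toward a target's embedding. `EmbedNet`, the perturbation budget, and the optimizer settings below are all stand-in assumptions for illustration, not Apple's network.

```python
# Hedged sketch of a gradient-based collision attack on a neural matcher.
# Requires PyTorch. EmbedNet is a toy stand-in model.
import torch
import torch.nn as nn

class EmbedNet(nn.Module):
    """Hypothetical stand-in for a neural matching/embedding function."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 128),
        )
    def forward(self, x):
        return nn.functional.normalize(self.features(x), dim=1)

def collide(model, source, target_emb, steps=500, lr=0.01, budget=8 / 255):
    """Nudge `source` (within an L-inf `budget`) so that its embedding
    approaches `target_emb` -- a second-preimage-style attack."""
    delta = torch.zeros_like(source, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (source + delta).clamp(0, 1)
        # drive cosine similarity to the target embedding toward 1
        loss = 1 - torch.cosine_similarity(model(adv), target_emb).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-budget, budget)  # keep the change visually small
    return (source + delta).detach().clamp(0, 1)

model = EmbedNet().eval()
benign = torch.rand(1, 3, 64, 64)  # stand-in "harmless" image
target = torch.rand(1, 3, 64, 64)  # stand-in "prohibited" image
adv = collide(model, benign, model(target).detach())
print(torch.cosine_similarity(model(adv), model(target)).item())
```

This is exactly the kind of experiment that publishing the algorithm would let researchers run honestly — against the real network instead of a toy.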