Child sexual abuse is abhorrent. For many smartphone owners, though, the idea that government or police could read content on a device that is almost an extension of themselves is repugnant too. The imperatives of tackling crime and protecting privacy collide in Apple's decision to scan US iPhones for child abuse imagery. Campaigners for better child protection will celebrate. But the move sets a weighty precedent.
Apple has long rejected pressure to insert a "back door" into its code that would allow law enforcement, in certain circumstances, to access its devices. It has twice resisted FBI demands to help unlock phones, after shootings in San Bernardino, California, in 2015 and in Florida in 2019, though Apple said it had provided data including iCloud backups. As encryption has become key to many products and services, Facebook and other tech groups have also opposed moves to allow "exceptional access".
Encrypted devices and messaging are a boon to organised crime, terrorists, and child abusers. But Big Tech and privacy advocates have argued, with strong justification, that creating any kind of back door opens the way for hackers, cyber criminals or unscrupulous governments to abuse it.
Apple's "neuralMatch" is not — quite — a back door, in the sense of providing direct access to content via the operating system. Apple, moreover, already decrypts photos on its iCloud servers if required by law enforcement. The precedent is that its technology will now proactively screen images on iPhones — breaking down the ringfence that had surrounded its devices — looking for matches with those on a US database of known child abuse images. Matches are flagged when photos are uploaded to iCloud, studied by human reviewers, and sent to law enforcement if verified.