
This second "secret" hash function, because it is applied to raw offensive content that Apple can't have, has to be shared at least with people maintaining the CSAM database.

You can't rely on it never leaking, and when it does, the leak will be almost undetectable and have huge consequences.
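
To make that dependency concrete, here is a minimal sketch (my own simplification, not Apple's published design): hash_1 stands for the public on-device perceptual hash, hash_2 for the undisclosed server-side one. Both digest sets have to be computed over the original images, so whoever maintains the database necessarily holds the "secret" function too.

    # Hypothetical two-stage pipeline; hash_1, hash_2 and the database
    # names are placeholders, not Apple's actual APIs.
    def build_databases(csam_images, hash_1, hash_2):
        # Run by the database maintainer, who has the raw material and
        # therefore must also be given the "secret" hash_2.
        db_1 = {hash_1(img) for img in csam_images}  # distributed to devices
        db_2 = {hash_2(img) for img in csam_images}  # kept on Apple's servers
        return db_1, db_2

    def server_side_check(visual_derivative, db_2, hash_2):
        # Independent second check, applied only after enough on-device matches.
        return hash_2(visual_derivative) in db_2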

As soon as the first on-device CSAM flag is raised, it becomes a legal and political problem. Even without a match on the second hash, it already puts Apple in an untenable position. They are already wrestling in the mud with the pigs.

They can't say: "we got 100M hits this month on our first CSAM filter but only reported 10 cases, because to avoid false positives our second filter sent everything to /dev/null, and we didn't even manually review them because your privacy matters to us." It has become a political problem where, for good measure, they will have to report cases just to make the numbers look "good".

Attackers of the system can also plant false negatives, i.e. real CSAM that has been modified just enough to still pass the first hash but fail the second. Then, in an audit, independent security researchers reviewing Apple's system will be able to say that Apple's automated pipeline sided with the bad guys by rejecting true CSAM and not reporting it.
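
A hedged illustration of that attack (assuming the attacker can evaluate both hashes, e.g. after the leak discussed above; perturb() and the hash functions are placeholders, not real tooling):

    def craft_false_negative(image, db_1, db_2, hash_1, hash_2, perturb, tries=10_000):
        # Search for a modification that still matches the on-device hash
        # but no longer matches the server-side one, so the true positive
        # is silently discarded before any human review.
        for _ in range(tries):
            candidate = perturb(image)  # small visual modification
            if hash_1(candidate) in db_1 and hash_2(candidate) not in db_2:
                return candidate
        return None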

Also remember that Apple may do something other than what they say they do for PR reasons: maybe some secret law will force them to notify the authorities as soon as the first flag is raised, and forbid them from ever telling anyone about it. And because it's in the name of fighting the "bad guys", that's something most people expect them to do.

From the user's perspective, there is nothing we can audit. It's all security by obscurity dressed up in pseudo-crypto PR: a signed blank check stamped "Trust us" that will soon be used to dragnet-surveil anyone for any content.


