
If the two images look the same, then a collision is the expected behaviour, so if collisions matter at all, it can only be for pictures that look different.


They don't matter, because if two images collide but don't look the same, human review will absolve you. This isn't some AI that sends you straight to prison lol


Imagine this scenario.

- You receive some naughty (legal!) images of a naked young adult while flirting online and save them to your camera roll.

- These images have been crafted to collide [1] with "well-known" CSAM images obtained from the dark underbelly of the internet, on the assumption that their hashes will be contained in the encrypted database.

- Apple's manual review kicks in because you have enough such images to trigger the threshold.

- The human reviewer sees a bunch of thumbnails of naked people whose age is indeterminate but looks to be on the young side.

- Your case is forwarded to the FBI, who now have cause to turn your life upside down.

This scenario seems entirely plausible to me, given the published information about the system and the demonstrated ability to generate collisions that look like an arbitrary input image, as shown in the linked thread (I've sketched roughly how such a collision can be forged after the footnote below). The fact that most of us are unlikely to be targets of this kind of attack is little comfort to those who may be.

[1]: https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX/issue...
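For anyone wondering how a collision like that gets forged: as far as I can tell, the demos in the linked thread treat the perceptual hash as a differentiable function and run gradient descent on the pixels of a benign image until its hash bits match a chosen target, keeping the perturbation small enough to be invisible. Here's a minimal sketch of that idea in PyTorch. Everything in it (TinyPerceptualHash, forge_collision, the 96-bit size, the step budget) is invented for illustration; it is not Apple's NeuralHash or code from the linked repo, just the same gradient-based second-preimage technique.

    # Minimal sketch of a gradient-based second-preimage attack on a
    # differentiable perceptual hash. TinyPerceptualHash is a toy model
    # invented for illustration -- it is NOT Apple's NeuralHash.
    import torch
    import torch.nn as nn

    class TinyPerceptualHash(nn.Module):
        """Toy stand-in: conv features -> linear projection -> 96 soft hash bits."""
        def __init__(self, bits=96):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
            self.proj = nn.Linear(32 * 4 * 4, bits)

        def forward(self, x):
            # Real systems threshold this to 0/1 bits; keeping it soft
            # lets gradients flow during the attack.
            return self.proj(self.features(x).flatten(1))

    def forge_collision(model, benign, target_bits, steps=500, lr=0.01, eps=0.05):
        """Perturb `benign` so sign(model(x)) matches `target_bits` (+1/-1),
        keeping the perturbation within an L-infinity budget `eps`."""
        delta = torch.zeros_like(benign, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            x = (benign + delta).clamp(0, 1)
            logits = model(x)
            # Hinge-style loss: push each soft bit to the target side of zero.
            loss = torch.relu(0.5 - target_bits * logits).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # stay visually close to the original
        return (benign + delta).clamp(0, 1).detach()

    if __name__ == "__main__":
        torch.manual_seed(0)
        model = TinyPerceptualHash().eval()
        benign = torch.rand(1, 3, 64, 64)     # the benign photo you were sent
        known_bad = torch.rand(1, 3, 64, 64)  # stand-in for a database image
        target_bits = torch.sign(model(known_bad)).detach()
        forged = forge_collision(model, benign, target_bits)
        match = (torch.sign(model(forged)) == target_bits).float().mean()
        print(f"hash bits matching target: {match.item():.0%}")

Note that nothing here requires access to Apple's encrypted database; the attacker only needs hashes of images they believe are in it.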


"- The human reviewer sees a bunch of thumbnails of naked people whose age is indeterminate but looks to be on the young side."

Given that most companies rely on barely paid, clueless, PTSD-suffering human reviewers, a litany of mistakes is to be expected.

We should expect all the coherence of Twitter's nipple policy, except now it can put you in jail, or at least ruin your life with legal fees.


The problem is that Apple cannot look at the image at its original resolution because of the supposed liability of harboring CSAM, yet being able to retrieve the original image would also mean being able to read the complete contents of the rest of the user's data. To me, it sounds like Apple is trying to strike a compromise between knowing as little as possible about the data on its servers and remaining in compliance with the law, but that compromise is impractical to execute.

The law says that if you find an image you believe to be CSAM, you must report it to the authorities. If Apple's model flags an image on the device, sending the full-resolution image to a moderation system to weed out false positives risks breaking that law: the flagged images will usually be known CSAM, since that's exactly what the database is designed to detect, so Apple could be accused of knowingly storing it. Perhaps that's why the low-resolution thumbnail system is needed.
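On the thumbnail/threshold point: Apple's published description pairs the on-device match with threshold secret sharing, so the server holds an undecryptable "safety voucher" per photo and only obtains the key to the low-resolution visual derivatives once enough matches accumulate. Here's a toy Shamir-style sketch of that mechanism in plain Python (3.8+); the function names, the field size, and the threshold of 30 are illustrative stand-ins, not Apple's actual scheme.

    # Toy Shamir secret sharing over a prime field, to illustrate the
    # threshold idea: the server gets one share per matched photo and can
    # only reconstruct the thumbnail decryption key once it holds at least
    # `threshold` shares. Not Apple's actual implementation.
    import secrets

    PRIME = 2**127 - 1  # a Mersenne prime, comfortably larger than a 16-byte key

    def split(secret, threshold, num_shares):
        """Split `secret` (an int < PRIME) into `num_shares` shares; any
        `threshold` of them reconstruct it, fewer reveal nothing."""
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, num_shares + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 recovers the secret."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    if __name__ == "__main__":
        key = secrets.randbelow(PRIME)  # stands in for the thumbnail decryption key
        shares = split(key, threshold=30, num_shares=100)
        assert reconstruct(shares[:30]) == key   # 30 matches: key recovered
        assert reconstruct(shares[:29]) != key   # 29 matches: key stays hidden
        print("threshold behaviour verified")

The upshot is that, by design, the human reviewer only ever gets those low-resolution derivatives, which is exactly why the compromise is so awkward.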

So why wouldn't Apple store the files unencrypted and scan them server-side when they arrive? That way Apple would shield itself from liability by having no knowledge of which images are CSAM until the scan runs, yet could still hand over the original copy of the image, with a far lower chance of false positives, when something is found. Knowledge, or the lack of it, about the nature of an image is the crucial factor: once Apple believes an image is CSAM, it cannot ignore it or stop believing so later.

That question may hold the answer to why Apple attempted to innovate in how it scans for child abuse material, perhaps to a fault.


Your scenario makes no sense. You could skip all of the steps and jump straight to "the FBI, who now have cause to turn your life upside down." If the evidence doesn't matter, they could have just reported you to the FBI for having CP, regardless of whether you actually have it, and your point remains the same.

Not to mention your scenario requires someone you trust trying to "get you." If that's true, then none of the other steps are necessary since you're already compromised.


If your iCloud Photo library contains enough photos to trigger a manual review + FBI report, how does the scenario make no sense?

And as far as your point about "someone you trust trying to 'get you'"... have you ever dated? Ever exchanged naughty photos with somebody? (I expect this is even more popular these days among 20-somethings, since covid prevented a lot of in-person hookups.) This doesn't seem crazy as a variant of catfishing. I could easily see 4chan posters doing it for fun.


My point is: if you hold that view, then collisions shouldn't matter at all, since if the images look the same, the correct outcome is for the person to be thrown in jail anyway.



