Reading through Apple’s Platform Security docs for iCloud*, it seems like server-side CSAM scanning of iCloud Photos would be cumbersome and expensive for Apple, so I’m betting client-side scanning is an optimization.— Dino A. Dai Zovi (@dinodaizovi) August 7, 2021
* https://t.co/rxs56uGMAs https://t.co/KKSVqv9olD
We are now at over 3,000 screeching voices of the minority! We tried to vet every signature.— 🇫🇷Nadim Kobeissi🇫🇷 (@kaepora) August 7, 2021
Our open letter now illustrates a very strong opposition to Apple's new content screening measures.
Thanks @Snowden for keeping me and @georgionic up all night! https://t.co/niSvN1jLXH
Apple has insisted on defining privacy as “we can’t see your data” then criticized Google & Facebook on privacy.— Dare Obasanjo (@Carnage4Life) August 7, 2021
However there’s also “we protect your data from governments” where their track record is worse than others which is the elephant in the room for on-device scanning.
There's already been a lot of smart commentary on why this is a terrible idea. But I also wanted to add another piece of context:— Wagatwe Wanjuki 🇰🇪 🇧🇸 (@wagatwe) August 6, 2021
Violence against women and children is often used as an excuse to expand state surveillance without actually helping. https://t.co/ZHWOK2YChV
If you’re still confused by Apple’s CSAM initiative, try looking at it through this lens:— Rene Ritchie (@reneritchie) August 7, 2021
Apple can’t abide known CSAM images on, or trafficked through, their servers, but they don’t want to scan actual private user libraries to catch them, and this system solves for that
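Ritchie's description boils down to on-device matching against a database of known-image hashes, with nothing surfaced until a match threshold is crossed. Here is a minimal sketch of that idea in Python, assuming a plain SHA-256 digest as a stand-in for Apple's perceptual NeuralHash and an illustrative threshold; the real system wraps this in a private-set-intersection protocol that this sketch does not model, and all names here are assumptions:

```python
# Simplified sketch of threshold-gated hash matching. NOT Apple's actual
# NeuralHash / private-set-intersection design: SHA-256 only matches
# exact bytes, and the threshold value is illustrative.
import hashlib

MATCH_THRESHOLD = 30  # assumed value; Apple described a threshold before any review

def image_hash(data: bytes) -> str:
    # Stand-in: a real system uses a perceptual hash that survives
    # resizing and recompression; a cryptographic digest does not.
    return hashlib.sha256(data).hexdigest()

def count_matches(images, known_hashes):
    # Count uploads whose hash appears in the known-image hash set.
    return sum(1 for img in images if image_hash(img) in known_hashes)

def should_flag(images, known_hashes) -> bool:
    # Nothing is surfaced until the match count crosses the threshold.
    return count_matches(images, known_hashes) >= MATCH_THRESHOLD
```

The point of the threshold is that a single accidental match reveals nothing; only an account crossing the bar becomes visible at all.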
I doubt 2 would have much effect? Why do you think it would?— Charles Arthur (@charlesarthur) August 7, 2021
Seems like 1 should have a dramatic effect of identifying offenders, if the proportion holding is anything like the FB/WApp numbers would imply.
My guess is that 1 is mainly Apple no longer being willing/able to tolerate CSAM on their servers and this is their engineering solution to that— Rene Ritchie (@reneritchie) August 7, 2021
Unlike most of the alarmist pieces published this week, John @Gruber writes a comprehensive analysis of how Apple's new child safety initiatives actually work. Bottom line: while not perfect, it protects personal privacy a lot more than you think. https://t.co/bT1GVVRFfk— Carl Howe (@cdhowe) August 7, 2021
I also don't understand why Apple is pushing the CSAM scanning for iCloud into the device, unless it is in preparation for real encryption of iCloud backups. A reasonable target should be scanning shared iCloud albums, which could be implemented server-side.— Alex Stamos (@alexstamos) August 7, 2021
1/ RE: Apple’s plan to scan every photo in iMessage with machine learning and alert parents to nudity.— Brianna Wu (@BriannaWu) August 7, 2021
I think a lot of well-meaning Silicon Valley people don’t understand what life is like in the Religious Right South.
Let me share so you can imagine how it will be misused.
This is also a good reminder for privacy and security folks to pay more attention to the global regulatory environment. While the PRC has been invoked a lot, I expect that the UK Online Safety Bill and EU Digital Services Act were much more important to Apple's considerations.— Alex Stamos (@alexstamos) August 7, 2021
Strongly agree, I can live with the design of iCloud as it exists - though what could happen next concerns me. It’s the iMessage component that gives me far more pause.— Brianna Wu (@BriannaWu) August 7, 2021
That said, I *think* it’s very difficult to change the age of a child’s iCloud account. I forget why I tried that for my son a few years ago, but I gave up. (I wanted to make him “older” for something, but don’t recall the details.)— John Gruber (@gruber) August 7, 2021
Here's a level-headed and detailed explanation about all the new Apple bad-image-detection features, including when one should actually start to worry, a startling revelation about existing FBI back-doors, and a good ol' take down of a sloppy journalist https://t.co/kDj8ECshrf— Andrew Burke (@ajlburke) August 7, 2021
So I *think* this is why the Messages feature gives so much control to the kids, even those 12 and under. Control meaning that they always get a clear warning upon receiving or attempting to send a flagged image that proceeding will result in a notification to their parents.— John Gruber (@gruber) August 7, 2021
What’s the net effect on CSAM on iOS of both:— Benedict Evans (@benedictevans) August 7, 2021
1: on-device scanning of images uploaded to iCloud for matches to known CSAM and
2: iOS internet traffic going through a double-blind encrypted relay, so IP addresses can’t ID users
(And could Apple, politically, do 2 w/out 1?)
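Evans's point 2 refers to a two-hop relay design (what shipped as iCloud Private Relay) in which no single hop sees both who you are and where you are going. A toy sketch of that separation, with all names and the sealing scheme illustrative assumptions rather than Apple's implementation:

```python
# Toy model of a two-hop "double-blind" relay: the ingress hop sees the
# client's IP but not the destination; the egress hop sees the
# destination but never the client's IP. Purely illustrative.

def ingress_view(client_ip, sealed_dest):
    # First hop: authenticates the client and sees its IP, but the
    # destination stays sealed (encrypted to the second hop).
    return {"client_ip": client_ip, "destination": "<sealed>"}

def egress_view(sealed_dest, unseal):
    # Second hop: can unseal the destination, but only ever sees the
    # ingress relay's address in place of the client's IP.
    return {"client_ip": "<ingress relay>", "destination": unseal(sealed_dest)}
```

Neither view contains both the client IP and the plaintext destination, which is what makes the relay "double-blind."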
Apple's filtering of iMessage and iCloud is not a slippery slope to backdoors that suppress speech and make our communications less secure. We’re already there: this is a fully-built system just waiting for external pressure to make the slightest change. https://t.co/f2nv062t2n— EFF (@EFF) August 5, 2021
Good thread. It has occurred to me that while the CSAM iCloud photo detection is the feature garnering the most attention, the Messages image detection for kids has more potential to be problematic *as it exists*.— John Gruber (@gruber) August 7, 2021
The real number is 236 in 1yr for iPhone. Which is a bit lower than 17m a month.— thaddeus e. grugq (@thegrugq) August 7, 2021
This was a good conversation on Apple's new child safety announcement. Very smart folks with a variety of expertise and POVs including @alexstamos who has worked in the trenches of trust and safety and talks about the horrifying reality of the harm https://t.co/wk9YhcVhkb— Patrick Howell O'Neill (@HowellONeill) August 6, 2021
A minority of reports, but the worst, are those that involve the live abuse of a child. Those are often reported to NCMEC and also referred live to the relevant authorities by a combination of the child safety investigator (was part of my team) and law enforcement response (in legal).— Alex Stamos (@alexstamos) August 7, 2021
3/ Any discussion of sex, certainly any discussion of homosexuality, anything promoting a secular worldview would be censored.— Brianna Wu (@BriannaWu) August 7, 2021
If you don’t think parents like that are going to change their child’s age in iCloud to create a permanent surveillance state, you’re just wrong.
And yet there’s only 250 cases from iPhone. Which suggests that something isn’t adding up with that narrative. The CSAM database can only find known CSAM photos that have been added. So new content won’t get flagged, only existing content that has to come from somewhere— thaddeus e. grugq (@thegrugq) August 7, 2021
If you’re like me you might have had a hard time cutting through the noise to understand exactly what is going on.— Curtis Herbert (@parrots) August 7, 2021
This is a very good piece breaking down exactly what Apple is doing, and what they can and cannot see. A good 101 before convos about pros and cons and slopes. https://t.co/9PwI9RNvhQ
This is my point. You can’t think about these systems *AT THEIR BEST.* You have to ask hard questions about how they could be misused.— Brianna Wu (@BriannaWu) August 7, 2021
As an industry, we massively suck at considering the downside. In this case, the downside is outed, potentially dead children. https://t.co/K3A6kwNjaT
Is CSAM on iPhones a big problem? How do they get it there from the dark web? Don’t pedos believe this capability already exists and avoid iPhones to begin with?— thaddeus e. grugq (@thegrugq) August 7, 2021
The vast majority of Facebook NCMEC reports are hits for known CSAM using a couple of different perceptual fingerprints using both NCMEC's and FB's own hash banks. What constitutes a "report", which can have multiple images and IPs, is one of the data challenges.— Alex Stamos (@alexstamos) August 7, 2021
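Stamos mentions "perceptual fingerprints," which unlike cryptographic hashes are designed so that near-duplicate images (resized, recompressed, lightly edited) still match. A toy average-hash over a flattened 8x8 grayscale thumbnail, compared by Hamming distance, illustrates the idea; PhotoDNA and NeuralHash are far more sophisticated, and everything here is an assumed simplification:

```python
# Toy "perceptual fingerprint": threshold each pixel of an 8x8 grayscale
# thumbnail against the mean brightness, then compare fingerprints by
# Hamming distance. Illustrative only; not PhotoDNA or NeuralHash.

def average_hash(pixels):
    # pixels: 64 grayscale values (a flattened 8x8 thumbnail).
    avg = sum(pixels) / len(pixels)
    return tuple(1 if p > avg else 0 for p in pixels)

def hamming(a, b):
    # Number of bit positions where the two fingerprints differ.
    return sum(x != y for x, y in zip(a, b))

def is_match(h1, h2, max_distance=5):
    # A small distance tolerates recompression or minor edits,
    # which an exact cryptographic hash cannot.
    return hamming(h1, h2) <= max_distance
```

This fuzziness is also why false positives are possible at all, and why the threshold and human review in these systems matter.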
I’m a big fan of @apple and I use a ton of their products and services— Olivier Simard-Casanova 🏳️🌈🇫🇷 (@simardcasanova) August 7, 2021
But on top of being creepy, we currently have zero *credible* reassurance that this system won’t be weaponized by authoritarian regimes or used for unrelated purposes
Maybe bc such a reassurance cannot exist https://t.co/8JJ07wHlCI
The irony of Apple’s theory that any analysis done on the device is automatically private and any analysis in the cloud violates your privacy - they built a CSAM detection system that is *potentially* much worse for your privacy *because* it happens on your device— Benedict Evans (@benedictevans) August 7, 2021
I agree with Gruber’s take that the 2018 delay on E2E iCloud backup encryption was done in order to incorporate this. The timings all feel right. I think there’s probably been CSAM but it’s been hard to action (can’t be seen to root through iCloud photos). Now it can.— Charles Arthur (@charlesarthur) August 7, 2021
Apple saying they will “reject any such demands from a government” is laughable at best and a flat out lie at worst.— Fernando (@fromjrtosr) August 7, 2021
Once any government makes it a law for Apple to use a different set of hashes, what then? Are they stopping business in China? In the US? In the EU?
One of the basic problems with Apple's approach is that they seem desperate to avoid building a real trust and safety function for their communications products. There is no mechanism to report spam, death threats, hate speech, NCII, or any other kinds of abuse on iMessage.— Alex Stamos (@alexstamos) August 7, 2021
2/ I have a family member who enacted total lockdown on *everything* her child read.— Brianna Wu (@BriannaWu) August 7, 2021
So when her child would go to the library to check out books, upon returning home she would pass them to my family member.
She would then research *every single* book online and clear it.
I’d like to know what those Facebook numbers represent. New CSAM? Or people uploading CSAM that gets flagged by the db? And how many users are doing this? I saw one number was 20m, but that can’t be right. That’s like the entire adult male population of Great Britain.— thaddeus e. grugq (@thegrugq) August 7, 2021
I meant that if Facebook detects that much, that's an indication of how prevalent it is on regular sites, no "dark web" required.— Pär Björklund (@Paxxi) August 7, 2021
… I pay attention to these. But nothing could prepare anybody for a pretty much out-of-the-blue announcement of a system shipping in a few weeks, with powerful capabilities but a rather unclear governance/oversight structure. I’m pretty surprised in light of previous messages.— Lukasz Olejnik (@lukOlejnik) August 7, 2021
What about the on-device nature of it makes it worse? Assuming the cloud implementation scanned the same set of photos…— Antonio García Martínez (@antoniogm) August 7, 2021
👏👏👏 #Apple’s #neuralMatch tool will scan #iPhones for child abuse content #Ransomware #DataBreaches #DarkWeb #CyberSec #infosec #Security #cybercrime #ThreatIntel #hackers #dataprotection #cyberthreats #cybersecurity #cyberattacks #cyberintelligence https://t.co/Wsz2gyjTno— Jiniba (@JinibaBD) August 7, 2021
Governments can ask them to scan for XYZ you’ve uploaded to iCloud today. Apple has opened the door to scanning content on your device.— Dare Obasanjo (@Carnage4Life) August 7, 2021
Even Gruber concedes this is new and now depends on how much you trust Apple to say “no” to various governments.
Apple wants to check your phone for child abuse images – what could possibly go wrong? On the surface Apple’s new features sound both sensible and commendable – but they also open a Pandora’s box of privacy and surveillance issues— Alfons López Tena (@alfonslopeztena) August 7, 2021
By @ArwaM https://t.co/HLPNvB1QEi