The chatter around Apple's recent announcement of new child safety protections is rather frustrating, as the amplification/knowledge ratio seems rather high on this one.
— Alex Stamos (@alexstamos) August 7, 2021
Apple's docs: https://t.co/TIcVQ6Zb1J
A discussion with friends: https://t.co/c4IYPVMHUA
But the problem is there's no way to close the loop right now, to make it so that if Apple or Facebook or Google inflict huge social harm, their bottom line suffers, or their execs go to jail, or they lose all their customers. Profits accrue while social impacts are externalized
— Pinboard (@Pinboard) August 7, 2021
I have an entire lecture on live abuse, and I can't replicate the whole thing here, but here is a very stark description of what that can look like.
— Alex Stamos (@alexstamos) August 7, 2021
Content Warning: Really horrible child abuse. https://t.co/rHbitWSKU0
I have friends at both the EFF and NCMEC, and I am disappointed with both NGOs at the moment. Their public/leaked statements leave very little room for conversation, and Apple's public move has pushed them to advocate for their equities to the extreme.
— Alex Stamos (@alexstamos) August 7, 2021
This was a good conversation on Apple's new child safety announcement. Very smart folks with a variety of expertise and POVs including @alexstamos who has worked in the trenches of trust and safety and talks about the horrifying reality of the harm https://t.co/wk9YhcVhkb
— Patrick Howell O'Neill (@HowellONeill) August 6, 2021
Instead, we get an ML system that is only targeted at <13 year-olds (not the largest group of sextortion/grooming targets in my experience), that gives kids a choice they aren't equipped to make, and notifies parents instead of Apple T&S.
— Alex Stamos (@alexstamos) August 7, 2021
I think this is a real shame. I hope that @wcathcart doesn't just use this misstep as a revenge for Apple's preachy privacy marketing, and instead commits WhatsApp to exploring ML mechanisms that work on behalf of the user with their consent. https://t.co/EU9trmlQds
— Alex Stamos (@alexstamos) August 7, 2021
Anyway, thanks to @Riana_Crypto @matthew_d_green and @alexstamos for the thoughtful discussion. Good points all around, even if we're on slightly different pages. 15/15 https://t.co/GVyrEMN2dG
— David Thiel (@elegant_wallaby) August 6, 2021
There's already been a lot of smart commentary on why this is a terrible idea. But I also wanted to add another piece of context:
— Wagatwe Wanjuki (@wagatwe) August 6, 2021
Violence against women and children is often used as an excuse to expand state surveillance without actually helping. https://t.co/ZHWOK2YChV
100% agree with @matthew_d_green on this https://t.co/cSpcFSnScP
— Vanessa Teague (@VTeagueAus) August 6, 2021
Apple isn't doing "child-safety" - that's for police, social workers, etc. Apple is doing non-consensual scanning of images. Maybe they'll scan for child sex abuse images, maybe for images of pro-democracy protests. https://t.co/6FJeVIrfyr
I will absolutely do it.
— Chris Vickery (@VickerySec) August 8, 2021
I will roll my eyes and let out a heavy *sigh* every single time a tech company, politician, or law enforcement agency leverages "for the children" in order to place yet another boot on the throat of privacy. https://t.co/QsBFH4U5sR
One of the basic problems with Apple's approach is that they seem desperate to avoid building a real trust and safety function for their communications products. There is no mechanism to report spam, death threats, hate speech, NCII, or any other kinds of abuse on iMessage.
— Alex Stamos (@alexstamos) August 7, 2021
My colleague @alexstamos unpacks the wealth of nuance being lost in discussions of Apple’s sudden, weirdly designed CSAM scanning change. https://t.co/Y1IBWKXVMa
— Daphne Keller (@daphnehk) August 7, 2021
Thoughtful thread by @alexstamos on this. He has been working on these issues for years, trying to protect kids from horrible abuse without serving the cause of mass surveillance. Apple not participating and then coming out of nowhere with its own elegant/odd thing is so…Apple. https://t.co/XhctUHX665
— Joseph Menn (@josephmenn) August 7, 2021
The way we find out about these technology impacts is by rolling them out worldwide and then seeing what social and political changes result. It's certainly a bracing way to run experiments, with no Institutional Review Board to bog everything down with pessimism and bureaucracy
— Pinboard (@Pinboard) August 7, 2021
I also don't understand why Apple is pushing the CSAM scanning for iCloud into the device, unless it is in preparation for real encryption of iCloud backups. A reasonable target should be scanning shared iCloud albums, which could be implemented server-side.
— Alex Stamos (@alexstamos) August 7, 2021
This is also a good reminder for privacy and security folks to pay more attention to the global regulatory environment. While the PRC has been invoked a lot, I expect that the UK Online Safety Bill and EU Digital Services Act were much more important to Apple's considerations.
— Alex Stamos (@alexstamos) August 7, 2021
I attended one of these meetings (working on an article that later got spiked), listened to FBI, big tech, and privacy advocates all speak up, and was very impressed with how it was conducted. The issue is genuinely difficult. Alex Stamos's thread here is very much worth reading. https://t.co/XWUPH7Bg3P
— Pinboard (@Pinboard) August 7, 2021
This is an important and nuanced discussion. I began my career investigating Digital Crimes Against Children at the FBI. It was a life altering experience. Not choosing sides but well worth critical analysis considering victims, user privacy, and E2EE platform responsibility. https://t.co/iK5ICqdSGH
— Rob Duhart Jr. (@RobDuhart) August 8, 2021
As a parent, I am so happy to see companies begin to take child safety more seriously.
— Aphrodite (@Mom_Mykayla) August 6, 2021
Child Safety - Apple https://t.co/XYlTNAQrFz
In any case, coming out of the gate with non-consensual scanning of local photos, and creating client-side ML that won't provide a lot of real harm prevention, means that Apple might have just poisoned the well against any use of client-side classifiers to protect users.
— Alex Stamos (@alexstamos) August 7, 2021
Since these platforms and devices are global, sometimes that impact takes place in contexts that none of the people working on the technology even know exist. If Facebook moves groups to full E2E, or Apple shifts content monitoring to devices, the decision is unilateral
— Pinboard (@Pinboard) August 7, 2021
Reading through Apple’s Platform Security docs for iCloud*, it seems like server-side CSAM scanning of iCloud Photos would be cumbersome and expensive for Apple, so I’m betting client-side scanning is an optimization.
— Dino A. Dai Zovi (@dinodaizovi) August 7, 2021
_________________________
* https://t.co/rxs56uGMAs https://t.co/KKSVqv9olD
Yup. EU, India, and Brazil are probably the main motivators. AUS/UK/US are noisy but with little bite. China is regularly invoked, but tech's relationship with PRC is v bespoke and the options investigated and deployed there are mostly orthogonal to the rest of the discussion.
— Pwn All The Things (@pwnallthethings) August 7, 2021
… I pay attention to these. But nothing could prepare anybody for a pretty much all-of-a-sudden announcement of a system shipping in a few weeks, with powerful capabilities but a rather unclear governance/oversight structure. I'm pretty surprised in light of previous messages.
— Lukasz Olejnik (@lukOlejnik) August 7, 2021
Likewise, the leaked message from NCMEC to Apple's employees calling legitimate questions about the privacy impacts of this move "the screeching voices of the minority" was both harmful and unfair. https://t.co/2BgbcrZckV
— Alex Stamos (@alexstamos) August 7, 2021
Epic Games CEO criticizes Apple's iCloud, claims Child Safety scans emails https://t.co/jgB6La7yQc
— iMore (@iMore) August 7, 2021
Is Apple's NeuralMatch system going to be searching for acts of abuse, or for people? It looks rather like a face recognition network... https://t.co/YaYNx7olP7
— Ross Anderson (@rossjanderson) August 8, 2021
This is an excellent thread, and captures my own thoughts on this quite well.
— Sean Zadig (@seanzadig) August 8, 2021
This is a very hard problem to solve, and those who dismiss the harm done to children as "kiddy porn" are really minimizing a truly horrific crime. https://t.co/cdEtLMsIp9
"Depending on where you are, you might find your photos scanned for dissidents, religious leaders or the FBI’s most wanted". "Expect to see pictures of cats that get flagged as abuse". https://t.co/A0luLSa29p https://t.co/MSd2GAA0oh
— Lukasz Olejnik (@lukOlejnik) August 8, 2021
Valuable thread from @alexstamos
— Bart Preneel (@bpreneel1) August 8, 2021
that points out that 1) there is a real problem here that needs to be addressed 2) there are better solutions available with smaller privacy/abuse risks. Specific solutions should be developed after an open public debate and in full transparency. https://t.co/kL5BhxdI6T
Must-read thread. https://t.co/0eFgbFrmgV
— Ed Bott (@edbott) August 9, 2021
Finally something that isn't a knee jerk reaction to the new Apple child safety features. There are complex tradeoffs and this thread captures many of them. https://t.co/TNxMt7uyuf
— Chris Eng (@chriseng) August 8, 2021