One step is that it says the system will be auditable by third parties. Also, it will only flag images that have been identified by agencies in multiple countries as CSAM, to prevent one government from trying to use the system to scan for non-CSAM material.
— Ina Fried (@inafried) August 13, 2021
Apple already made such a deal in China, selling out the privacy of iCloud users there by putting iCloud servers in a data center operated by a government owned enterprise. Now Americans are told “trust us” because, though we just sold you out now, we won’t do it again.
— Tim Sweeney (@TimSweeneyEpic) August 13, 2021
I sat down with Apple’s Craig Federighi to talk about the company’s child protection features.
— Joanna Stern (@JoannaStern) August 13, 2021
He spoke about the stumbles the company made in the announcement. I pushed him to explain everything in plain English.
Here’s the exclusive video interview:https://t.co/ZpcXfns05j
Apple's left hand doesn't know what its right hand is doing. First it sued @CorelliumHQ for enabling people to inspect iOS devices. Now it's saying those same people will be able to spot if the photo matching process is misused in some way. pic.twitter.com/5oEjMZoqSO
— Runa Sandvik (@runasand) August 13, 2021
Apple says it will scan your iPhones only for evidence of child sexual abuse, but it is dangerously opening a new path for surveillance that could be used for anything. For example, @Apple has a long history of bowing to China when deemed necessary. https://t.co/hB6zpY6TVc pic.twitter.com/xkoFd1Qzce
— Kenneth Roth (@KenRoth) August 12, 2021
Apple intends to prevent misuse of the child safety feature — that is only available in the US — by working with organizations operating in different sovereign jurisdictions. https://t.co/FKZAcfueK9 pic.twitter.com/vZEDQvoW3A
— Runa Sandvik (@runasand) August 13, 2021
Apple has a new threat modeling white paper out regarding the controversial “child safety features” they recently announced. https://t.co/7A1SfZ0KMT
— Julian Sanchez (@normative) August 13, 2021
Apple is not doing this as part of a crusade to protect children, or they would not allow the scans to be bypassed by disabling iCloud Photos. They are doing it to be seen to be doing something while minimizing their involvement.
— Edward Snowden (@Snowden) August 12, 2021
This hurts users and doesn't help children.
Wouldn't Apple sue you if you did that? pic.twitter.com/CNdQ8pXFCt
— Benedict Evans (@benedictevans) August 13, 2021
Here is Apple's newly released doc on how they plan to protect their new CSAM tech against abuse: https://t.co/0pUL9JnUMZ
— Patrick Howell O'Neill (@HowellONeill) August 13, 2021
Ah, so “about 30” is the magic number for tripping Apple’s CSAM system. Wonder how they arrived at that number. https://t.co/BVJ6uj9Qnw pic.twitter.com/IhpY3KDzGH
— Charles Arthur (@charlesarthur) August 13, 2021
Craig Federighi today to the @WSJ: "We're not looking for CSAM on iPhones. [...] The sound-bite that got out early was "Apple is scanning my phone for images." This is not what's happening"
— Nadim Kobeissi (@kaepora) August 13, 2021
Dude, this is just blatant lying. This is directly contradicted by Apple's own statement. pic.twitter.com/gYz8kdqqYg
The US constitution protects against arbitrary government search of one’s home and personal effects. Data one stores privately and doesn’t share is a personal effect. Apple backdooring iOS to examine personal iCloud data is a suspicionless search of personal effects.
— Tim Sweeney (@TimSweeneyEpic) August 13, 2021
More clarity: Apple’s child abuse image scanning system will only trigger an alert when “around 30” images are detected.
— Adrian Weckler (@adrianweckler) August 13, 2021
Great explainer (and interview) here. https://t.co/dKklRaAhqd
I’d like to take this moment to make it clear to poor Craig that no, I don’t misunderstand Apple’s plans to check photos in iCloud against NCMEC’s database of CSAM. It’s well-meaning but it’s also creating a mechanism that Apple will be forced to use for other things. https://t.co/5G4MviUh4d
— Eva (@evacide) August 13, 2021
Apple is announcing a few fresh details on its child sexual abuse imagery detection program that it says are designed to avoid the system being manipulated
— Ina Fried (@inafried) August 13, 2021
He went into a bit more on it and we detail that in the news story here and in the column I have coming soon https://t.co/CHgq4c8RXO pic.twitter.com/Fa7r9A6Pvf
— Joanna Stern (@JoannaStern) August 13, 2021
Apple is flagging iCloud photos that match known pedo databases. It’s not one photo, but a threshold of multiple photos uploaded that triggers an alert. Pretty sure Apple has no interest in aggravating a huge customer base through algorithmic mistakes. https://t.co/Nx1a4rw3hD
— Artemis (@HuntressofCrete) August 13, 2021
Apple’s Craig Federighi discusses how new tool to combat child porn is designed with safeguards, such as independent audits, to protect user privacy, w/ @JoannaStern https://t.co/HUPHyQCRLR
— Tim Higgins (@timkhiggins) August 13, 2021
Latest news on Apple child abuse images system: the company warns staff to be prepared for questions from customers, the threshold of illicit images in your library for Apple to be alerted is ~30, and Apple will have an independent auditor for its database https://t.co/j9csYsSdlD
— Mark Gurman (@markgurman) August 13, 2021
Few days later, it’s even more clear that first and foremost this was the mother of all communications fuck-ups from Apple. And they have strong competition from players like the CDC and, as always, Facebook. What on Earth was Apple thinking rolling this out with this strategy?! https://t.co/0fG7qWrCzs
— M.G. Siegler (@mgsiegler) August 13, 2021
Federighi is right about the confusion caused by bundling the iMessage and iPhoto protections into one announcement, but the bigger PR misstep was:
— Alex Stamos (@alexstamos) August 13, 2021
1) Not being clear about the goals
2) Not announcing the parallel roadmap for iCloud encryption
Here’s @JoannaStern with Apple’s Craig Federighi on the child protection system and how it works https://t.co/LDpQLWMCJ5
— nilay patel (@reckless) August 13, 2021
Governments want this search capability but many, including America, are constitutionally prohibited from it. Perhaps Apple thinks that if they give governments this massive surveillance gift at this critical time, regulators will look the other way on their antitrust abuses.
— Tim Sweeney (@TimSweeneyEpic) August 13, 2021
"Apple employees have flooded an Apple internal Slack channel with more than 800 messages on the plan ... Many expressed worries that the feature could be exploited by repressive governments looking to find other material for censorship or arrests…"https://t.co/yQVN3WZJBj
— Kurt Opsahl (@kurtopsahl) August 12, 2021
One thing I’d note here is that Federighi says there are multiple points of auditability in the system and… it would be good if those were clearly spelled out and people were able to audit them
— nilay patel (@reckless) August 13, 2021
Exclusive: Some Apple employees are speaking out internally about the company’s plan to scan iPhones and computers for child sex abuse images. Apple’s move has also provoked protests from tech policy groups https://t.co/h4fxqpAf9k pic.twitter.com/D4sHGIlrdv
— Reuters (@Reuters) August 12, 2021
My fear is that what Apple is ultimately trying to backdoor here is not our iPhones but democracy and rule of law itself.
— Tim Sweeney (@TimSweeneyEpic) August 13, 2021
It’s strange that 30 or so is the magic number of unacceptable “known bad signature” child porn images.
— Mad Bitcoins (@MadBitcoins) August 13, 2021
I suppose there must be a threshold somewhere.
Again the key is it’s matching a known database of bad sigs, not scanning images for new ones. https://t.co/rcyuCpr8rp pic.twitter.com/pCrZhf1P14
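To make the distinction in the tweets above concrete, here is a minimal sketch, not Apple's implementation, of matching against a database of known signatures with a reporting threshold: nothing is classified as new CSAM, a photo either matches a known entry or it does not, and an account is surfaced only once the match count crosses the threshold. The hash values, function names, and plain set lookup below are illustrative assumptions; the real system uses Apple's NeuralHash perceptual hash and encrypted safety vouchers rather than cleartext comparisons.

```python
# Illustrative sketch only (not Apple's implementation): flag an account only
# when its photos match a fixed database of known signatures at least
# MATCH_THRESHOLD times. Hash strings here are hypothetical placeholders.

KNOWN_SIGNATURES = {"sig_001", "sig_002", "sig_003"}  # hypothetical known-image database
MATCH_THRESHOLD = 30  # the "around 30" figure Federighi cited

def count_matches(photo_hashes):
    """Count how many of a library's hashes appear in the known-signature set."""
    return sum(1 for h in photo_hashes if h in KNOWN_SIGNATURES)

def should_surface_for_review(photo_hashes):
    """True only once the number of known-database matches crosses the threshold."""
    return count_matches(photo_hashes) >= MATCH_THRESHOLD

# Example: a library with a single match stays far below the threshold.
print(should_surface_for_review(["sig_001", "vacation_photo_hash"]))  # False
```

The point the threshold is meant to illustrate: a single stray match does nothing, and no image is ever evaluated for new, previously unknown content.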
Re Apple’s CSAM scanning, @kesenwang points out that Apple updated its privacy policy in late May 2019 to allow scanning for CSAM.
— Charles Arthur (@charlesarthur) August 13, 2021
Apple’s been working on this for years. @josephmenn @benthompson @gruber https://t.co/DvY1cKRAyd v https://t.co/Rz2a5S1pCJ (tip @techmeme) pic.twitter.com/YqCjI1gFEw
Apple’s Craig Federighi says the iPhone’s CSAM threshold will be about 30 images and that the on-phone database will ship internationally, but engages in magical thinking about whether Apple is scanning images on your phone. https://t.co/fnOvkHc2w5
— Robert McMillan (@bobmcmillan) August 13, 2021
Can you imagine being an Apple Genius having to explain how CSAM detection works OVER AND OVER AND OVER again?https://t.co/mEOzn6geH3 https://t.co/9DHgNFshdC
— Chris Messina (@chrismessina) August 13, 2021
Apple even makes a reference to security researchers in its latest document about the child safety features. As if these researchers only exist to make sure Apple does not mess up. https://t.co/FKZAcfueK9 pic.twitter.com/Gfp0IBX2QO
— Runa Sandvik (@runasand) August 13, 2021
Here’s an op-ed @alexstamos and I co-authored about the risks of Apple’s content scanning plan. It’s short and easy to read, and I’m hoping it makes the issues digestible to non-technical people. https://t.co/PbXYPNgiyO
— Matthew Green (@matthew_d_green) August 12, 2021
Apple just released a Threat Model Review of its new scanning program, focusing on the threat of a secret attempt to add new hashes to the DB. The system is designed to be tamper-resistant and tamper-evident. But what about overt attempts? https://t.co/xbzSSbbtr5
— Kurt Opsahl (@kurtopsahl) August 13, 2021
Apple somewhat naively assumed its privacy reputation would shield it against criticism of CSAM on-device scanning. https://t.co/bJuhueaIvl
— Greg Sterling (@gsterling) August 13, 2021
Exactly: "In hindsight, introducing these two features at the same time was a recipe for this kind of confusion.” https://t.co/kWkHgNTMTS
— Lance Ulanoff (@LanceUlanoff) August 13, 2021
The sheer volume of the doublespeak Apple has done on this is nuts. Over and over again this past week, Apple issues "clarifications" where they deny something they've said and then *immediately rephrase the same claim in a new way.*
— Nadim Kobeissi (@kaepora) August 13, 2021
Interview is here: https://t.co/I6LnoniVmH
“It’s really clear a lot of messages got jumbled pretty badly in terms of how things were understood,” Mr. Federighi said. “We wish that this would’ve come out a little more clearly for everyone ..”
— Carl Quintanilla (@carlquintanilla) August 13, 2021
@WSJ $AAPL https://t.co/JOGuzQeWvf
Apple says it's possible for security researchers to verify how its CSAM system works because it's being done on the device.
— Joseph Cox (@josephfcox) August 13, 2021
Apple does not have the best reputation for making research easy, at all. If anything, it tries to block it. https://t.co/apLCWWxoqB pic.twitter.com/wNYkt0Rfsh
No surprise. Apple has long made personal privacy part of its very DNA. Engineers chose to join Apple for less pay and a tougher work environment because they believe in product excellence and chose to serve on the front lines of privacy as a human right. https://t.co/fuoc4J9tkC
— Tim Sweeney (@TimSweeneyEpic) August 13, 2021
"Apple software head [#CraigFederighi] says plan to scan iPhones for child abuse images is 'misunderstood'"
— Alec Muffett (@AlecMuffett) August 13, 2021
Craig, I really don't think that *that* is the problem here.
The problem is that Apple unwisely & illiberally over-reached into people's privacy.https://t.co/0S83pf4mbr
Where @JoannaStern does one of her always phenomenal interviews/videos — this time digging into Apple’s new Child Safety features with Apple’s head of software engineering https://t.co/31MJlRBz4a
— Rene Ritchie (@reneritchie) August 13, 2021
*APPLE WARNS STAFF TO BE READY FOR QUESTIONS ON SAFARI TAB DESIGN https://t.co/ItSUR48YkU
— Benjamin Mayo (@bzamayo) August 13, 2021
30 images is a higher bar than I expected, and indicates that Apple's goal is more to prevent mass sharing of known CSAM instead of tracking down the original creators of CSAM found elsewhere.
— Alex Stamos (@alexstamos) August 13, 2021
Then why apply to all of the photo roll and not just shared albums? https://t.co/IRsRqFw8tn
“Because it’s on the [phone], security researchers are constantly able to introspect what’s happening in Apple’s [phone] software,”
— hakan (@hatr) August 13, 2021
(11-min interview w/ Apple's software chief Craig Federighi)https://t.co/47DBkiaLQq
Reuters: "Apple's child protection features spark concern within its own ranks -sources"
— Jeff (@Darchmare) August 12, 2021
Looks like the "screeching voices of the minority" include those within Apple itself.
Good.https://t.co/3KqqPX9KHq
Facing criticism about privacy on the iPhone, Apple’s Craig Federighi says new tools aimed at best ensuring privacy while fighting illegal images https://t.co/IPhTXeN4eL via @WSJ
— Dwight Silverman (@dsilverman) August 13, 2021
I appreciate that there's a text version of this story as well as video.
"Apple software head [#CraigFederighi] says plan to scan iPhones for child abuse images is 'misunderstood'"
— Alec Muffett (@AlecMuffett) August 13, 2021
Craig, I really don't think that *that* is the problem here.
The problem is that Apple unwisely & illiberally over-reached into people's privacy.https://t.co/0S83pf4mbr
Apple software head says plan to scan iPhones for child abuse images is 'misunderstood' https://t.co/KV6EGfFOQ6
— Ian Sherr (@iansherr) August 13, 2021
"While Apple is introducing the child sexual abuse detection feature only in the U.S. for now, it is not hard to imagine that foreign governments will be eager to use this sort of tool to monitor other aspects of their citizens' lives." https://t.co/5oq9hxNjuA
— The Tor Project (@torproject) August 12, 2021
This is a must-read article by @matthew_d_green and @alexstamos about the safety risks of Apple's content scanning plan. It's short and easy to read. Makes you wonder why Apple refuses to work with outside experts on such an important project. https://t.co/JrUqB1jakS
— Runa Sandvik (@runasand) August 12, 2021
An op-ed from @alexstamos and @matthew_d_green Apple Wants to Protect Children. But It’s Creating Serious Privacy Risks. https://t.co/tFWkISBjGp
— Stanford Internet Observatory (@stanfordio) August 12, 2021
An important letter from @alexstamos and @matthew_d_green who, even though I disagree with their concern here, are voices you must listen to, given their unparalleled experience and expertise.https://t.co/AnIwzL2juN
— Ben Adida (@benadida) August 13, 2021
Apple shares a security threat review for its new CSAM detection feature https://t.co/y8N0EkjNu9
— iMore (@iMore) August 13, 2021
Apple just issued *another* explanatory document on child safety features… and this time it’s very helpful! Some key topics that Apple addresses:
— Jonathan Mayer (@jonathanmayer) August 13, 2021
* NeuralMatch evaluation
* NeuralMatch “visual derivatives”
* Hash set construction
* Hash set validationhttps://t.co/oAqgRvJxtm
Here’s the calculation that led to this number https://t.co/UI0Rlquxdu pic.twitter.com/07aSWN0b9L
— Félix (@fayfiftynine) August 13, 2021
Apple’s new threat model document contains some actual justification for the numbers! (https://t.co/F7uGx9VyIn)
— Sarah Jamie Lewis (@SarahJamieLewis) August 13, 2021
They are assuming a 1/100000 false acceptance rate for NeuralHash, which seems incredibly low. And assuming that every photo library is larger than the actual largest one. pic.twitter.com/wls7fQDgvJ
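To get a rough sense of the kind of calculation Apple's threat model document describes, the sketch below estimates how likely an account with no genuinely matching images is to cross a ~30-match threshold through false positives alone. It models matches as independent Bernoulli trials with the 1/100,000 per-image false acceptance rate quoted in the tweet; the library size is an arbitrary, deliberately large illustrative value, and none of this reproduces Apple's published figures or safety margins.

```python
# Back-of-the-envelope sketch of the false-positive calculation discussed above.
# Assumptions (not Apple's published figures): matches are independent, the
# per-image false acceptance rate is 1e-5 as quoted in the tweet, and the
# library size is an arbitrary illustrative value.

from math import comb

def prob_false_flag(library_size: int, far: float, threshold: int) -> float:
    """P(at least `threshold` false matches) under a Binomial(library_size, far) model."""
    p_below = sum(
        comb(library_size, k) * far**k * (1 - far) ** (library_size - k)
        for k in range(threshold)
    )
    return 1.0 - p_below

# Example: a hypothetical one-million-photo library, 1/100,000 false acceptance
# rate, and the ~30-image threshold reported by Federighi.
print(prob_false_flag(library_size=1_000_000, far=1e-5, threshold=30))
```

Under these toy assumptions the expected number of false matches is ten per million photos, and the chance of thirty or more is on the order of one in a few million; how far that generalizes depends entirely on the false acceptance rate, which is exactly the assumption the tweet questions.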
“Security Threat Model Review of Apple’s Child Safety Features”https://t.co/HeW6QQFBry
— Rene Ritchie (@reneritchie) August 13, 2021
Apple, in full crisis mode, has published yet another (more detailed) explanation of how their child safety features will operate. (See: https://t.co/UgMVSbkM7v)
— Christopher Parsons (@caparsons) August 13, 2021
Their explanations do nothing to substantively assuage concerns that have been raised about the features to my eye.
Apple's new document brings up many good questions to unpack and explore. For example, how good is the cross-border collaboration between NCMEC and other child safety organizations? Is this a new version of Apple deciding what is "bad" enough? https://t.co/FKZAcfueK9
— Runa Sandvik (@runasand) August 13, 2021
Some reflections on Apple's security model here:https://t.co/VrqZcPZOFw
— Nicholas Weaver (@ncweaver) August 13, 2021
New Apple explainer just dropped: https://t.co/Op4zwqhGZu
— Joseph Menn (@josephmenn) August 13, 2021
Apple has published more info on how they're preventing their child protection system from being used for non-CSAM:https://t.co/pifB3wXW0t
— Ben Adida (@benadida) August 13, 2021
On first read, looks thorough.
I doubt it will convince my friends opposed to this, as they see anything on-device as the slippery slope.