“Clearview AI is not welcome in Canada and the company that developed it should delete Canadians’ faces from its database, the country’s privacy commissioner said.”
— Veena Dubal (@veenadubal) February 4, 2021
The company’s retort is that this will give Google a monopoly in face surveillance. https://t.co/v5udI9t5dN
Clearview AI’s Facial-Recognition App Ruled Illegal in Canada. Canadian authorities declared that the company needed citizens’ consent to use their biometric information, and ordered the firm to delete facial images from its database. https://t.co/JChntPOm7g
— Jesse Damiani (@JesseDamiani) February 4, 2021
NEW: Canada's federal privacy watchdog says "Clearview AI’s unlawful practices represented mass surveillance of Canadians": https://t.co/jn5RibFHcA #cdnpoli
— Alex Boutilier (@alexboutilier) February 3, 2021
BREAKING NEWS - @PrivacyPrivee and provincial Privacy commissioners rule Clearview AI's scraping of the faces of millions of Canadians from social media for police search databases was in violation of Canadian privacy law and must be permanently dropped: https://t.co/2MWurhGjvc
— OpenMedia (@OpenMediaOrg) February 3, 2021
For those following the @PrivacyPrivee investigation of ClearviewAI https://t.co/TK77o9kHRe these two briefings on how a Federal moratorium on Facial Recognition Technologies could work from our @tip_mcgill project may be helpful: https://t.co/GFFYSDTXp4 & https://t.co/c75reXg4K1
— Taylor Owen (@taylor_owen) February 3, 2021
I can’t stress how important this victory is. Clearview scrapes billions of photos from social media and public networks. Their mass surveillance product is then sold to law enforcement and others without our consent. https://t.co/vUr175MCLC
— Amanda Bartley (@bartleyamandaj) February 4, 2021
Even mild-mannered Canada says "fuck Clearview" https://t.co/31EWM3SoW3
— Evan Greer (@evan_greer) February 4, 2021
Clearview AI was working with the RCMP to collect billions of images of Canadians from social media websites without their consent. Now that's been deemed to have violated the privacy rights of Canadians. https://t.co/XqcXNPoy2D
— Kate Schneider (@kteschneider) February 3, 2021
Clearview AI is illegal in Canada, say privacy commissioners there, who told Clearview to delete Canadian faces from its database... but that's easier said than done. https://t.co/yH9KDP0mjI
— Kashmir Hill (@kashhill) February 4, 2021
Here is @kashhill in the New York Times on it. In order to opt out, a person has to send their information to Clearview. “You realize the irony of the remedy, requiring individuals to provide further personal information about themselves,” https://t.co/J1uuXJlStO
— waiting on my 2K (@twocatsand_docs) February 4, 2021
To think, we could've outlawed Facebook too. https://t.co/2x221KNkow
— Elizabeth M. (@hackylawyER) February 4, 2021
The federal and three provincial privacy commissioners have found Clearview AI's facial recognition tech to constitute mass surveillance, and say "commercial organizations" used it, but don't clearly name which: https://t.co/Z7iCnxOPLd
— josh o'kane (@joshokane) February 3, 2021
Clearview AI makes an app that is putting all of society “continually in a police lineup,” Canada's privacy czar said. https://t.co/inAt3lUFOq
— NYT Business (@nytimesbusiness) February 4, 2021
New: Canadian privacy authorities say Clearview AI is "illegal," and threaten further action if the surveillance startup does not stop collecting facial recognition data on Canadians. https://t.co/Enc9inRpUV
— Zack Whittaker (@zackwhittaker) February 3, 2021
Update. Clearview AI ruled 'illegal' by Canadian privacy authorities https://t.co/vObe1ZO8Xq via @techcrunch #tech #digital #data #privacy pic.twitter.com/M3ULmX8XiB
— Kohei Kurihara -DataPrivacy for Fighting Covid-19- (@kuriharan) February 4, 2021
Research shows facial recognition technology has fallen short in correctly identifying people of color.
— Los Angeles Times (@latimes) February 4, 2021
A federal study reported that Black and Asian people were about 100 times more likely to be misidentified by facial recognition than white people. https://t.co/AX72kHwGZL
“To be Black, to be Muslim, to be a woman, to be an immigrant in the United States is to be surveilled,” @stevenrenderos said. “How much more surveillance will it take to make us safe? The short answer is, it won’t.” https://t.co/TJ9npTcwFb
— Johana Bhuiyan (@JMBooyah) February 4, 2021
In the days following the Capitol riot, facial recognition software was used to identify those who had stormed the building, researchers say.
— Los Angeles Times (@latimes) February 4, 2021
Clearview AI, a leading facial recognition firm, said it saw a 26% jump in usage from law enforcement on Jan. 7. https://t.co/AX72kHwGZL
Facial recognition’s promise that it will help solve more cases has led to its growing use.
— Los Angeles Times (@latimes) February 4, 2021
Privacy concerns have not stopped its spread. Nor has evidence showing that the use of facial recognition disproportionately harms communities of color. https://t.co/AX72kHwGZL
Civil rights groups also say calls for more surveillance are unfounded in reality.
— Los Angeles Times (@latimes) February 4, 2021
The Capitol riots were planned in the open, in public forums across the internet and the Capitol police were warned ahead of time, they argue. @jmbooyah reports: https://t.co/AX72kHwGZL
NEW: The internet’s scramble to identify Capitol rioters sparked an unprecedented use of tools like facial recognition. But Black & brown communities call the tech “uniquely dangerous” & say it shouldn’t be used, even if it’s on right wing extremists https://t.co/TJ9npTcwFb
— Johana Bhuiyan (@JMBooyah) February 4, 2021
Ethics of surveillance tech — which studies show can be biased against POC — doesn’t shift based on who the “bad guys” are today, @hypervisible said. “Normalizing what is a pretty uniquely dangerous tech causes a lot more problems.” https://t.co/TJ9npTcwFb
— Johana Bhuiyan (@JMBooyah) February 4, 2021
It’s not just facial recognition, experts caution those creating IG & Twitter accounts to scour social media to identify rioters to tread carefully. “Untrained individuals...sleuthing around in the internet can end up doing more harm than good... https://t.co/TJ9npTcwFb
— Johana Bhuiyan (@JMBooyah) February 4, 2021
The problem may be in how the software is trained and who trains it.
— Los Angeles Times (@latimes) February 4, 2021
Such systems are being developed almost exclusively in spaces that “tend to be extremely white, affluent, technically oriented, and male,” a study by the AI Now Institute of NYU reads. https://t.co/AX72kHwGZL
Research shows the technology has fallen short in correctly identifying people of color.
— Los Angeles Times (@latimes) February 4, 2021
A federal study released in 2019 reported that Black and Asian people were about 100 times more likely to be misidentified by facial recognition than white people. https://t.co/AX72kHwGZL
Black, brown, poor, trans and immigrant communities are “routinely over-policed,” Steve Renderos, the executive director of Media Justice, said, and that’s no different when it comes to surveillance. https://t.co/AX72kHwGZL pic.twitter.com/Kp94tVKgor
— Los Angeles Times (@latimes) February 4, 2021
In the aftermath of a riot that included white supremacist factions attempting to overthrow the results of the presidential election, it’s communities of color that are warning about the potential danger of this software. https://t.co/AX72kHwGZL pic.twitter.com/pDkKhXN2OK
— Los Angeles Times (@latimes) February 4, 2021
“This is always the response to moments of crises: Let’s expand our policing, let’s expand the reach of surveillance,” said @stevenrenderos from @mediajustice. “But it hasn’t done much in the way of keeping our communities actually safe from violence.” https://t.co/PlCTisMfVT
— ✨Myaisha Hayes ✨ (@MyaishaAyanna) February 4, 2021
“This is always the response to moments of crises: Let’s expand our policing, let’s expand the reach of surveillance...But it hasn’t done much in the way of keeping our communities actually safe from violence.” https://t.co/zJEX8VCj0d
— Don't proctor me, bro (@hypervisible) February 4, 2021
Canada (politely) asks Clearview AI to stop scraping citizens’ photos (story by @thomas_macaulay) https://t.co/gBnFICMc6N
— TNW (@thenextweb) February 4, 2021
Big news! Four #privacy commissioners have found that Clearview AI violated Cdn privacy laws: “What Clearview does is mass surveillance and it is illegal."
— BC Civil Liberties Association (@bccla) February 3, 2021
We call for a federal BAN on all facial recognition surveillance by law enforcement agencies. https://t.co/60pdeiZCar
Canada has declared Clearview AI's collection of Canadian citizens' photos w/o consent illegal.
— Jack Poulson (@_jack_poulson) February 4, 2021
When I asked NSCAI last week if DoD's use of Clearview AI et al. would be deemed a violation of DoD AI Principles, the Microsoft commissioner (Horvitz) punted https://t.co/Aswk1wlHfp pic.twitter.com/IhVN4xHmJQ
“What Clearview does is mass surveillance and it is illegal,” Canada’s federal privacy commissioner said Wednesday.
— Tech Won't Save Us podcast (@techwontsaveus) February 4, 2021
He wants Clearview AI to stop offering its tech in Canada, stop collecting images of Canadians & delete photos of Canadians in its database. https://t.co/PscgSytI9e
Being #Canadian sounds nice... https://t.co/uwPfEu5l8w pic.twitter.com/Ab4f0vsQVe
— Clare Garvie (@ClareAngelyn) February 3, 2021
Story from @katecallen, @alexboutilier and me. Report here: https://t.co/ZuPemDImce
— Wendy Gillis (@wendygillis) February 3, 2021
An investigation by the federal and three provincial privacy commissioners has found Clearview AI's facial recognition technology was mass surveillance. https://t.co/pvX6CyVebX
— Alexander Quon (@AlexanderQuon) February 3, 2021
Clearview has violated privacy laws in this country, too. https://t.co/Dsl8vjrBSE
— ACLU (@ACLU) February 4, 2021
Normalizing surveillance tactics that have been used disproportionately on Black and brown communities may have big consequences, activists and academics warn. @JMBooyah reports: https://t.co/AX72kHwGZL
— Los Angeles Times (@latimes) February 4, 2021
“Whenever they’ve enacted laws that address white violence, the blowback on Black people is far greater,” said @Margari_Aziza.
— Johana Bhuiyan (@JMBooyah) February 5, 2021
Some worry the scramble to react to the Capitol riots will lead to rushed policies & increased use of surveillance tools. https://t.co/TJ9npTu83L
“Untrained individuals sort of sleuthing around in the internet can end up doing more harm than good even with the best of intentions... You always have to ask yourself, how could this end up being used on you and your community." — ❤️Fight's Evan Greer https://t.co/oLQHN4Tczq
— Fight for the Future (@fightfortheftr) February 5, 2021
[Translated from Japanese] The Canadian government has (politely) demanded that Clearview AI stop collecting citizens' faces, saying the practice violates Canada's personal information protection law. AI-based facial recognition is really facing strong headwinds these days. https://t.co/8fuELw17kH
— (@catapultsuplex) February 4, 2021
I thank @PrivacyPrivee for responding to my call for an investigation into whether Clearview AI broke Canadian law.
— Charlie Angus NDP (@CharlieAngusNDP) February 3, 2021
His report is excellent.
The question is why did @rcmpgrcpolice and other police exploit a technology that breached Canadian law? https://t.co/SIyS8Hw3li