Think I'll give Zuckerberg's Metaverse a swerve https://t.co/SKMHouHp2t
— Chris Keall (@ChrisKeall) September 4, 2021
the fact that this has happened at multiple major tech companies really says something about the way AI is being built https://t.co/f2R1uFn9PJ
— o...k (@kateconger) September 4, 2021
And this is why I got stay reminding people that tech is racist by virtue of the conscious OR unconscious bias of the not diverse teams that create it. But they’re sorry, they’ll do better. https://t.co/9vJIH5yEBK
— Tatenda Musapatike (@TatendaCheryl) September 3, 2021
The biggest tech co's have struggled with biases in AI. In 2015, Google Photos categorized Black people as "gorillas." Six years later, Facebook, which has one of the largest repositories of photos on which to train its AI, is dealing w/ the same issue. https://t.co/p29Bayd5MX
— Ryan Mac (@RMac18) September 3, 2021
If they were an ethical company, they would be proactively trying to break AI like this to make certain the bias isn’t there and this doesn’t happen. Instead once again, “Facebook shut off the feature and is investigating. It also apologized after we reached out.” - @RMac18 https://t.co/JGo7X6vabS
— Jason Kint (@jason_kint) September 4, 2021
Welp. This is not how I imagined I’d make the NYT, but at least it was for something important. https://t.co/SQYDPuGjgK
— Darci Groves (@tweetsbydarci) September 4, 2021
In 2016 Facebook found most of its mis and disinfo was racially based. They also concertedly made an effort to separate high profile Black orgs from Black technologists who critiqued it using “diversity” incubators
— Sydette Cosmic Dreaded Gorgon (@Blackamazon) September 4, 2021
We actually have to have that reconciliation https://t.co/tU56bfmBzq
Don't turn on your "artificial intelligence-powered feature" if it can't pass this one fucking test. How are AI features still getting shipped if they can't pass the literal least they could do. https://t.co/UdLWEJ6XAQ
— keith kurson (@keithkurson) September 4, 2021
Oh my god. https://t.co/2MZHip78y6
— Elamin Abdelmahmoud (@elamin88) September 3, 2021
14% of Americans are Black.
Only 4.4% of Facebook's US employees are Black. https://t.co/IjUO95gO3k
— Simran Jeet Singh (@simran) September 4, 2021
This is what the AI-powered prompt looked like. The 3:30 minute video from the Daily Mail had nothing to do with "primates." It was a set of two clips featuring Black men getting in altercations with a white male citizen and white police officers. https://t.co/p29Bayd5MX pic.twitter.com/XgopgpfFfc
— Ryan Mac (@RMac18) September 3, 2021
I couldn't have reported this story if Darci, a former Facebook content design manager of 4 years, had not flagged it to a group of former FB employees and then talked to me. This stuff is so important, but can only come out if brave folks speak up. https://t.co/gx2J4IfM33
— Ryan Mac (@RMac18) September 4, 2021
I'm so thoroughly exhausted. Facebook Apologizes After A.I. Puts ‘Primates’ Label on Video of Black Men https://t.co/QgJp12KNDf
— Rebecca Carroll (@rebel19) September 4, 2021
No one should take a technology that labels black people as “primates” seriously ever again
— Joe Bernstein (@Bernstein) September 4, 2021
An “unacceptable error.” Remember when Zuckerberg explained that the failure to take down the Kenosha “call to arms” post that Kyle Rittenhouse responded to was “an operational mistake”? https://t.co/3P8ZbJKGbh
— Sherrilyn Ifill (@Sifill_LDF) September 4, 2021
Everyday we see the limits, biases and pure idiocy of relying solely on AI. And yet we continue to allow ourselves to be guinea pigs. @RMac18 with the latest. https://t.co/X9FKgEy1Kj
— Nicole Perlroth (@nicoleperlroth) September 4, 2021
As with ads/disinfo, I’d like to see the takeaway from stories like this, in addition to rightful anger, be that Facebook is a shitty technological product that relies on a kind of collective social delusion to take seriously https://t.co/dulmmMxu1n
— Joe Bernstein (@Bernstein) September 4, 2021
Data Science needs the testing discipline.
— ᴇʀɪᴄ ᴩʀᴏᴇɢʟᴇʀ (@ericproegler) September 4, 2021
This report is a critical ethics problem. This is (or should be!) a crucial risk to brand and image. And…
The number of unseen/unobserved AI errors is some gigantic multiple of the known issues, risking desired outcomes too. https://t.co/wAZ5aUBLXN
At this point I almost expect Facebook to fuck up in the most mind-blowingly offensive ways with no consequences. #toobigtofail https://t.co/jEzB5FBDlb
— Partisan 161 (@161partisan) September 4, 2021