This is a very good article, on very scary stuff - it also details a company called HUGGINGFACE. And Huggingface is sort of the hero of the piece. https://t.co/msAtIjVKGN
— Jon Severs (@jon_severs) May 23, 2021
Google fired its ethical AI co-leads after they raised concerns about the racist, sexist and abusive ideas embedded in one of its most prized AI technologies. The company recently unveiled ambitious new plans to deploy this technology across its products. https://t.co/qBNXSdpwb3
— MIT Technology Review (@techreview) May 20, 2021
Tech companies use programs that read and write without understanding. But researchers are studying the disturbing limits of these glorified autocomplete functions, @_KarenHao writes for @techreview. https://t.co/OQTRnDyfB4
— Quanta Magazine (@QuantaMagazine) May 21, 2021
"Soon enough, all of our digital interactions—when we email, search, or post on social media—will be filtered through large language models." https://t.co/DHygZg4IZ7
— Luciana Benotti (@LucianaBenotti) May 23, 2021
Ever since Google fired @timnitGebru & @mmitchell_ai, it's continued to deploy the very technology it punished them for scrutinizing. Now hundreds of scientists are racing to investigate the technology's risks before it's too late to avoid its harms. https://t.co/N7ezDhqpbL
— Karen Hao (@_KarenHao) May 20, 2021
“Soon enough, all of our digital interactions—when we email, search, or post on social media—will be filtered through LLMs.” An important piece from @_KarenHao about an increasingly core piece of tech infrastructure https://t.co/SwlYxdggX3
— Gideon Lichfield (@glichfield) May 21, 2021
Hundreds of scientists around the world are working together to understand one of the most powerful emerging technologies before it’s too late. https://t.co/fG5jZYo22I
— MIT Technology Review (@techreview) May 21, 2021
Google recently announced an AI system that can chat to users about any subject, but didn't discuss the ethical debate surrounding such cutting-edge systems. Studies have already shown how racist, sexist, and abusive ideas are embedded in these models. https://t.co/7huz6AmGrS
— MIT Technology Review (@techreview) May 23, 2021
Very concerning use of large language models (LLM). "Unfortunately, very little research is being done to understand how the flaws of this technology could affect people in real-world applications, or to figure out how to design better LLMs that mitigate these challenges." https://t.co/Iw5FCzKHJT
— ICT4Peace Foundation (@ict4peace) May 22, 2021
"LLMs (Large Language Models) are increasingly being integrated into the linguistic infrastructure of the internet atop shaky scientific foundations."
— Morten Rand-Hendriksen (@mor10) May 21, 2021
"The race to understand the thrilling, dangerous world of language AI" https://t.co/8hlPxeW2gf
This. The danger of a single scientific story....
— Phyllis D.K. Hildreth (@phalcon7) May 22, 2021
Large language models (LLMs) https://t.co/7MWCNilLur pic.twitter.com/4NMf3mwNlH
AMAZING initiative from Lux family co @huggingface
— Josh Wolfe (@wolfejosh) May 21, 2021
Sign up here https://t.co/mvrDRaG5v0
Learn more here https://t.co/a2rAh2yEsr pic.twitter.com/1xnLMTVROk
On Tuesday, May 18, 9am EDT we will have the first meeting of the Interpretability and #Visualization group for the Summer of Language Model workshop. It's still a good time to join. #NLProc @ieeevis #Vis
— Hendrik Strobelt (@hen_str) May 17, 2021
Workshop: https://t.co/dWR8yvYnkb
Groups:https://t.co/lkw9pdQggj pic.twitter.com/6iRrkVvpBW
The race to understand the thrilling, dangerous world of language Artificial Intelligence https://t.co/2d7A3Uo4Pl #ai #artificialintelligence #machinelearning #deeplearning #google pic.twitter.com/Ytzom42ep0
— Nige Willson (@nigewillson) May 23, 2021
The race to understand the exhilarating, dangerous world of language AI https://t.co/ci5T3Vo1kf #AI #digitalhealth
— John Nosta (@JohnNosta) May 21, 2021
The race to understand the thrilling, dangerous world of #language #AI https://t.co/LawdTKhX0a #fintech #ArtificialIntelligence #MachineLearning #DeepLearning @techreview @_KarenHao @psb_dc @andi_staub @YuHelenYu @Ronald_vanLoon @DioFavatas @Paula_Piccard @Fisher85M pic.twitter.com/su90heni0Z
— Spiros Margaris (@SpirosMargaris) May 22, 2021