New AI fake text generator may be too dangerous to release, say creators [www.theguardian.com]
Fake news: OpenAI's 'deepfakes for text', GPT2, may be too dangerous to be released [betanews.com]
OpenAI has a fake news bomb made of AI and no clue what to do with it [www.theinquirer.net]
OpenAI refuses to release software because it's too dangerous [www.fastcompany.com]
OpenAI text-generating tool GPT2 won't be released for fear of misuse [www.businessinsider.com]
This AI is one of the few genuinely breathtaking advances in technology I've seen in a long time: https://t.co/LmKLaJc7Or
— alex hern (@alexhern) February 14, 2019
“We need to perform experimentation to find out what they can [do]... If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously”https://t.co/6EDT1bPaZm
— Rasmus Kleis Nielsen (@rasmus_kleis) February 15, 2019
In 1979, in my 7th grade classroom at Pembroke Meadows Elementary School in Virginia Beach, I asked my teacher, Mr. Petran (great teacher) if he ever thought that computers would write books in the future?https://t.co/fnupyXW3cY
— Todd Schnitt (@toddschnitt) February 15, 2019
New AI fake text generator may be too dangerous to release, say creators https://t.co/Ga94mFL6r6
— emanuele menietti (@emenietti) February 15, 2019
'Feed it the opening line of George Orwell’s Nineteen Eighty-Four ... and the system recognises the vaguely futuristic tone and the novelistic style'. Of course AI doesn't get emotion, but it's getting better at simulation. We're still Baudrillardian. https://t.co/V59O1vyPMw
— Andrew McStay (@digi_ad) February 15, 2019
We decided to put this text online, too: it's embedded halfway down the news story https://t.co/LmKLaJc7Or
— alex hern (@alexhern) February 15, 2019
“AI is scary” is a narrative that reporters cannot pass up. Spoon fed to reporters this time. Self-proclaimed “deep fakes of text” indeed. gah. https://t.co/b6VUrY0yFM
— Mark O. Riedl (@mark_riedl) February 14, 2019
It looks like OpenAI has announced their marginal progress on the coherence problem in narrative prose generation in the most clickbaity possible way again: https://t.co/x5tH3Jes64 https://t.co/n2hXclFGiU
— Midcentury Minitel Manticore (@enkiv2) February 15, 2019
Non academic recap: New AI fake text generator may be too dangerous to release, say creators - The Guardian https://t.co/DKQggD8TCU
— Lynn Cherny (@arnicas) February 15, 2019
@J_amesp here we go...https://t.co/hewtAhk08S
— Tramspotting (@Incrementalists) February 15, 2019
It's difficult to tell from a newspaper report, but as someone who basically accepts the Turing Test as valid this is kind of a terrifying yet delightful rubicon to see crossed. https://t.co/abFQqvE5Te
— Elizabeth Sandifer (@ElSandifer) February 15, 2019
"'I have a term for this. The escalator from hell,' Clark said. ..." -- Accelerationist brand innovation. https://t.co/VYvIQxH77D
— Outsideness (@Outsideness) February 15, 2019
The creators of an AI system that can write news stories and works of fiction – dubbed “deepfakes for text” – won't release their research publicly, for fear of potential misuse. https://t.co/i4fKZUPCCk
— Pernille Tranberg (@PernilleT) February 15, 2019
OpenAI's new program rewrote the opening to 1984 and honestly I would read this novel https://t.co/uqoJf2yotK pic.twitter.com/gFptPlfdfC
— Dylan Matthews (@dylanmatt) February 14, 2019
I'm skeptical about the headline-grabbing "too dangerous to release" stance. I see no natural moats for this tech, and if @OpenAI can do it, so can others around the world.
— Subbarao Kambhampati (@rao2z) February 14, 2019
The Pandora's Box is open, and we have to learn to live with #AI fake reality..https://t.co/iIXJdtW8fD
things not to worry about:
1. AIs writing actual literature.
2. AIs discovering their own desire and deliberately taking over. https://t.co/ZIhMYtuFDO
Things to worry about:
1. People getting so used to reading AI-generated content that literature becomes meaningless to them.
— Mark Andrejevic (@MarkAndrejevic) February 15, 2019
An Elon Musk-backed AI firm is keeping a text generating tool under wraps amid fears it's too dangerous https://t.co/xZVpihjgLT
— BI Tech (@SAI) February 15, 2019