New AI fake text generator may be too dangerous to release, say creators [www.theguardian.com]
Fake news: OpenAI's 'deepfakes for text', GPT2, may be too dangerous to be released [betanews.com]
OpenAI has a fake news bomb made of AI and no clue what to do with it [www.theinquirer.net]
OpenAI refuses to release software because it's too dangerous [www.fastcompany.com]
OpenAI text-generating tool GPT2 won't be released for fear of misuse [www.businessinsider.com]
“We need to perform experimentation to find out what they can [do]... If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously” https://t.co/6EDT1bPaZm — Rasmus Kleis Nielsen (@rasmus_kleis) February 15, 2019
'Feed it the opening line of George Orwell’s Nineteen Eighty-Four ... and the system recognises the vaguely futuristic tone and the novelistic style'. Of course AI doesn't get emotion, but it's getting better at simulation. We're still Baudrillardian. https://t.co/V59O1vyPMw — Andrew McStay (@digi_ad) February 15, 2019
I'm skeptical about the headline-grabbing "too dangerous to release" stance. I see no natural moats for this tech, and if @OpenAI can do it, so can others around the world. — Subbarao Kambhampati (@rao2z) February 14, 2019
The Pandora's Box is open, and we have to learn to live with #AI fake reality. https://t.co/iIXJdtW8fD
things not to worry about:
1. AIs writing actual literature.
2. AIs discovering their own desire and deliberately taking over.
Things to worry about:
1. People getting so used to reading AI-generated content that literature becomes meaningless to them. https://t.co/ZIhMYtuFDO — Mark Andrejevic (@MarkAndrejevic) February 15, 2019