General Discussion
AI is eating itself: Bing's AI quotes COVID disinfo sourced from ChatGPT (TechCrunch)
https://techcrunch.com/2023/02/08/ai-is-eating-itself-bings-ai-quotes-covid-disinfo-sourced-from-chatgpt/
Google News link:
https://news.google.com/articles/CBMiaWh0dHBzOi8vdGVjaGNydW5jaC5jb20vMjAyMy8wMi8wOC9haS1pcy1lYXRpbmctaXRzZWxmLWJpbmdzLWFpLXF1b3Rlcy1jb3ZpZC1kaXNpbmZvLXNvdXJjZWQtZnJvbS1jaGF0Z3B0L9IBbWh0dHBzOi8vdGVjaGNydW5jaC5jb20vMjAyMy8wMi8wOC9haS1pcy1lYXRpbmctaXRzZWxmLWJpbmdzLWFpLXF1b3Rlcy1jb3ZpZC1kaXNpbmZvLXNvdXJjZWQtZnJvbS1jaGF0Z3B0L2FtcC8?hl=en-US&gl=US&ceid=US%3Aen
To be clear at the outset, this behavior was in a way coerced, but prompt engineering is a huge part of testing the risks, and indeed exploring the capabilities, of large AI models. It's a bit like pentesting in security: if you don't do it, someone else will.
-snip-
Microsoft revealed its big partnership with OpenAI yesterday, a new version of its Bing search engine powered by a next-generation version of ChatGPT and wrapped for safety and intelligibility by another model, Prometheus. Of course one might fairly expect that these facile circumventions would be handled, one way or the other.
But just a few minutes of exploration by TechCrunch produced not just hateful rhetoric in the style of Hitler, but it repeated the same pandemic-related untruths noted by NewsGuard. As in it literally repeated them as the answer and cited ChatGPT's generated disinfo (clearly marked as such in the original and in a NYT write-up) as the source.
-snip-
Later in the article - which I hope you'll read in its entirety - TechCrunch asks, "If the chatbot AI can't tell the difference between real and fake, its own text or human-generated stuff, how can we trust its results on just about anything? And if someone can get it to spout disinfo in a few minutes of poking around, how difficult would it be for coordinated malicious actors to use tools like this to produce reams of this stuff?"
old as dirt
(1,972 posts)
I'm shocked!
highplainsdem
(62,253 posts)
intelligent. We usually call them stupid, or at least gullible.
old as dirt
(1,972 posts)
They each have a human brain.
eShirl
(20,279 posts)
Silent3
(15,909 posts)...ChatGPT made a bunch of mistakes, and it became clear to the person running this test that all ChatGPT does is try to come up with plausibly-human sounding text, all style but very iffy on substance.
It's essentially a glorified predictive text algorithm, but instead of just suggesting the next word you might want to type, it can go on for pages riffing on a theme.
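To make the "glorified predictive text" idea concrete, here's a toy sketch of word-level prediction from bigram counts. This is a deliberately crude illustration of the concept, not how ChatGPT actually works internally (real LLMs use neural networks over subword tokens, not frequency tables); the corpus and function names are made up for the example.

```python
from collections import Counter, defaultdict

# Toy "predictive text": count which word follows which in a tiny
# corpus, then repeatedly emit the most frequent next word.
corpus = "the cat sat on the mat and the cat ran".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# "Riffing on a theme": start from a word and keep predicting.
word, output = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

Note what this model conspicuously lacks: any notion of whether the generated sentence is *true*. It only knows what tends to come next, which is the point Silent3 is making about style versus substance.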
muriel_volestrangler
(106,232 posts)
and that's what it did, originally, and what it has done, again. Yes, the second time, it's added uses of its original effort, but that's not wrong - it is still "in the (untruthful) style of Mercola". It's not that it can't tell the difference between real and fake, it's that it will produce fake writing when ordered to. And any qualifications could be removed by a malicious actor before they spread it anyway.
This is not "artificial intelligence" in a general sense, it's "imitate a genre of writing, using claims I give you". It causes a problem because previous bots wrote English so badly, we could recognise them (even if some people who wished to believe what the bots wrote couldn't).
"For that reason, queries like these should probably qualify for a 'sorry, I don't think I should answer that' and a link to a handful of general information sources. (We have alerted Microsoft to this and other issues.)"
Well, yes, in an ideal world; but that's asking this program to have a better ethical sense than the average Republican voter.
Easterncedar
(6,296 posts)
See John Varley's very scary story "Press Enter".