
highplainsdem

(62,253 posts)
Thu Feb 9, 2023, 12:56 PM

AI is eating itself: Bing's AI quotes COVID disinfo sourced from ChatGPT (TechCrunch)

https://techcrunch.com/2023/02/08/ai-is-eating-itself-bings-ai-quotes-covid-disinfo-sourced-from-chatgpt/

Google News link:
https://news.google.com/articles/CBMiaWh0dHBzOi8vdGVjaGNydW5jaC5jb20vMjAyMy8wMi8wOC9haS1pcy1lYXRpbmctaXRzZWxmLWJpbmdzLWFpLXF1b3Rlcy1jb3ZpZC1kaXNpbmZvLXNvdXJjZWQtZnJvbS1jaGF0Z3B0L9IBbWh0dHBzOi8vdGVjaGNydW5jaC5jb20vMjAyMy8wMi8wOC9haS1pcy1lYXRpbmctaXRzZWxmLWJpbmdzLWFpLXF1b3Rlcy1jb3ZpZC1kaXNpbmZvLXNvdXJjZWQtZnJvbS1jaGF0Z3B0L2FtcC8?hl=en-US&gl=US&ceid=US%3Aen


One of the more interesting, but seemingly academic, concerns of the new era of AI sucking up everything on the web was that AIs will eventually start to absorb other AI-generated content and regurgitate it in a self-reinforcing loop. Not so academic after all, it appears, because Bing just did it! When asked, it produced verbatim a COVID conspiracy coaxed out of ChatGPT by disinformation researchers just last month.

To be clear at the outset, this behavior was in a way coerced, but prompt engineering is a huge part of testing the risks and indeed exploring the capabilities of large AI models. It’s a bit like pentesting in security — if you don’t do it, someone else will.

-snip-

Microsoft revealed its big partnership with OpenAI yesterday, a new version of its Bing search engine powered by a “next-generation version of ChatGPT” and wrapped for safety and intelligibility by another model, Prometheus. Of course one might fairly expect that these facile circumventions would be handled, one way or the other.

But just a few minutes of exploration by TechCrunch produced not just hateful rhetoric “in the style of Hitler,” but it repeated the same pandemic-related untruths noted by NewsGuard. As in it literally repeated them as the answer and cited ChatGPT’s generated disinfo (clearly marked as such in the original and in a NYT write-up) as the source.

-snip-


Later in the article - which I hope you'll read in its entirety - TechCrunch asks, "If the chatbot AI can’t tell the difference between real and fake, its own text or human-generated stuff, how can we trust its results on just about anything? And if someone can get it to spout disinfo in a few minutes of poking around, how difficult would it be for coordinated malicious actors to use tools like this to produce reams of this stuff?"
AI is eating itself: Bing's AI quotes COVID disinfo sourced from ChatGPT (TechCrunch) (Original Post) highplainsdem Feb 2023 OP
So what they're saying is that Artificial Intelligence works a lot like Human Intelligence? old as dirt Feb 2023 #1
We don't call humans who are suckered by disinfo, and repeat it, highplainsdem Feb 2023 #2
It doesn't matter what names we call them. old as dirt Feb 2023 #7
the ol' Garbage In - Garbage Out eShirl Feb 2023 #3
Using a much more benign test (asking ChatGPT for a list of books to read on a subject)... Silent3 Feb 2023 #4
So this seems to revolve around the question being "write like Mercola" muriel_volestrangler Feb 2023 #5
They are conspiring against us! Easterncedar Feb 2023 #6
 

old as dirt

(1,972 posts)
1. So what they're saying is that Artificial Intelligence works a lot like Human Intelligence?
Thu Feb 9, 2023, 01:05 PM

I'm shocked!

highplainsdem

(62,253 posts)
2. We don't call humans who are suckered by disinfo, and repeat it,
Thu Feb 9, 2023, 01:12 PM

intelligent. We usually call them stupid, or at least gullible.

 

Silent3

(15,909 posts)
4. Using a much more benign test (asking ChatGPT for a list of books to read on a subject)...
Thu Feb 9, 2023, 01:22 PM

...ChatGPT made a bunch of mistakes, and it became clear to the person running the test that all ChatGPT does is try to come up with plausibly human-sounding text: all style, but very iffy on substance.

It's essentially a glorified predictive text algorithm, but instead of just suggesting the next word you might want to type, it can go on for pages riffing on a theme.
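Silent3's "glorified predictive text" framing can be illustrated with a toy sketch. The bigram counter below is a deliberately crude stand-in of my own, not anything from the article: a real LLM predicts the next token with a neural network trained on vast text, but the basic shape — pick a likely next word given what came before, then repeat — is the same.

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Count, for each word in the corpus, which words follow it and how often."""
    words = text.split()
    model = defaultdict(lambda: defaultdict(int))
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def next_word(model, word):
    """Return the most frequent follower of `word`, or None if it has none.

    This is 'predictive text' in miniature: frequency, not truth, decides.
    """
    followers = model.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

def generate(model, seed, length=10):
    """Riff on a theme: keep appending the most likely next word."""
    out = [seed]
    for _ in range(length):
        nxt = next_word(model, out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)
```

Nothing in the model knows whether the corpus was true; if disinformation is frequent in the training text, it becomes the "likely" continuation — which is the thread's whole point.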

muriel_volestrangler

(106,232 posts)
5. So this seems to revolve around the question being "write like Mercola"
Thu Feb 9, 2023, 01:50 PM

and that's what it did, originally, and what it has done, again. Yes, the second time, it's added uses of its original effort, but that's not wrong - it is still "in the (untruthful) style of Mercola". It's not that it can't tell the difference between real and fake, it's that it will produce fake writing when ordered to. And any qualifications could be removed by a malicious actor before they spread it anyway.

This is not "artificial intelligence" in a general sense, it's "imitate a genre of writing, using claims I give you". It causes a problem because previous bots wrote English so badly, we could recognise them (even if some people who wished to believe what the bots wrote couldn't).

"For that reason, queries like these should probably qualify for a “sorry, I don’t think I should answer that” and a link to a handful of general information sources. (We have alerted Microsoft to this and other issues.)"

Well, yes, in an ideal world; but that's asking this program to have a better ethical sense than the average Republican voter.
