Welcome to DU! The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards.

Eugene

(61,939 posts)
Tue May 30, 2023, 03:59 PM May 2023

ChatGPT 'hallucinates.' Some researchers worry it isn't fixable.

Source: Washington Post


Big Tech is pushing AI out to millions of people. But the tech still routinely makes up false answers to simple questions.

By Gerrit De Vynck
Updated May 30, 2023 at 1:27 p.m. EDT | Published May 30, 2023 at 7:00 a.m. EDT

-snip-

“Language models are trained to predict the next word,” said Yilun Du, a researcher at MIT who was previously a research fellow at OpenAI, and one of the paper’s authors. “They are not trained to tell people they don’t know what they’re doing.” The result is bots that act like precocious people-pleasers, making up answers instead of admitting they simply don’t know.

The researchers’ creative approach is just the latest attempt to solve for one of the most pressing concerns in the exploding field of AI. Despite the incredible leaps in capabilities that “generative” chatbots like OpenAI’s ChatGPT, Microsoft’s Bing and Google’s Bard have demonstrated in the last six months, they still have a major fatal flaw: they make stuff up all the time.

Figuring out how to prevent or fix what the field is calling “hallucinations” has become an obsession among many tech workers, researchers and AI skeptics alike. The issue is mentioned in dozens of academic papers posted to the online database Arxiv and Big Tech CEOs like Google’s Sundar Pichai have addressed it repeatedly. As the tech gets pushed out to millions of people and integrated into critical fields including medicine and law, understanding hallucinations and finding ways to mitigate them has become even more crucial.

Most researchers agree the problem is inherent to the “large language models” that power the bots because of the way they’re designed. They predict what the most apt thing to say is based on the huge amounts of data they’ve digested from the internet, but don’t have a way to understand what is factual or not.
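The article's point — that a language model only predicts a statistically apt continuation, with no notion of what is factual — can be illustrated with a toy sketch in Python. The vocabulary, probabilities, and function name below are entirely made up for illustration; real models sample from learned distributions over tens of thousands of tokens.

```python
import random

# Toy illustration of next-word prediction: the "model" here is just a table
# of hand-made probabilities (hypothetical numbers, not real model output).
# Given a context, it samples a plausible continuation -- nothing in the
# mechanism checks whether the resulting sentence is true.
NEXT_WORD_PROBS = {
    "the capital of france is": {"paris": 0.7, "lyon": 0.2, "berlin": 0.1},
}

def predict_next(context: str, rng: random.Random) -> str:
    """Sample the next word in proportion to its probability."""
    dist = NEXT_WORD_PROBS[context.lower()]
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
print(predict_next("The capital of France is", rng))
```

Note that "berlin" is sampled some of the time: a wrong answer is just a lower-probability continuation, not an error the mechanism can detect — which is the hallucination problem in miniature.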

-snip-

Read more: https://www.washingtonpost.com/technology/2023/05/30/ai-chatbots-chatgpt-bard-trustworthy/

Non-paywalled link: https://archive.is/n3qbe


Shermann

(7,428 posts)
3. Every AI answer is a guess
Tue May 30, 2023, 04:30 PM
May 2023

Even the simplest of questions garner guesses from an AI system, although they are very good guessers. And these simplest of questions (to you or me) are often what baffle them the most, because of the need for contextualization and/or conceptualization.

localroger

(3,630 posts)
4. Of course they do. These chatbots are just Markov chain generators on steroids.
Tue May 30, 2023, 06:09 PM
May 2023

They do not have a conception of reality, so they have no basis from which to determine whether the symbols they are assembling represent anything "true." They also have no motivations, no emotions, and no will, so they are a total non-threat to humans (except in the sense that they might take some of our jobs, but the whole "not knowing what the truth is" thing will make that iffy too). Even the simplest animals, including non-social insects like the bee-eating wasp, clearly have a model of the world which they apply to decision-making, and when that model conflicts with reality they demonstrate the symptoms of cognitive dissonance. The chatbots are an interesting step forward, but they essentially prove that Alan Turing was wrong and the Turing test isn't enough. There is more to this whole intelligence thing.

Shermann

(7,428 posts)
5. I don't believe there will ever be a test for consciousness
Tue May 30, 2023, 06:54 PM
May 2023

It is as difficult as testing whether or not there is free will. How would you even begin to design such a test?

It may be possible through large language models to eventually pass the Turing test without any attempt to simulate consciousness, but we're not there yet.

localroger

(3,630 posts)
6. I think there will, but it may be some time and it won't be as simple as the Turing Test
Tue May 30, 2023, 08:58 PM
May 2023

I think it will come as a combination of demonstrations: "we can show that neural circuitry in biological brains performs X function, and our algorithm does that in this section of the AI." A combination of those things will add up to performance comparable to real animals, including emotional response and interaction with the physical world. There is a fascinating book called The Creative Loop by the now-deceased Dr. Erich Harth, written in the 1990s, which describes some specific circuitry in the thalamus and explores how it might be creating the process we call "thought" (which the chatbots obviously don't do). There are also economic reasons why the actual thing might not be pursued all that aggressively, since it will be impossible not to notice that you are implementing an uncontrollable chaos engine if you try. Monetizing that could be a challenge.

markodochartaigh

(1,145 posts)
7. Hmmmm
Tue May 30, 2023, 11:09 PM
May 2023

"...based on the huge amounts of data they’ve digested from the internet, but don’t have a way to understand what is factual or not."

AI really is similar to humans.

3Hotdogs

(12,405 posts)
8. There was a D.U. post over the weekend. It was about a N.Y. lawyer who had A.I. prepare a brief on
Wed May 31, 2023, 12:26 AM
May 2023

a case he represented. A.I. complied and produced a brief with 3 case cites.


Problem was, the cites were pulled out of A.I.'s ass. Mr. A.I. made them up, and the lawyer didn't bother to find and read the cites. Unfortunately, the judge did try to find the cites to ascertain whether they fit the arguments of the case.

Larry D. Lawyer is in a heap of doo-doo.

Skittles

(153,185 posts)
9. a programmer buddy of mine says he tried getting ChatGPT to write some Javascript
Wed May 31, 2023, 06:53 AM
May 2023

but unless you tell it exactly what it needs to do, the only thing that will be correct is the syntax... the code didn't do anything.

Purrfessor

(1,188 posts)
10. I asked ChatGPT to write an essay on me and my contribution to...
Wed May 31, 2023, 09:46 AM
May 2023

sofa and recliner design, since I spend a fair amount of time on the two. Reading the essay, one would think I was a pioneer in the furniture design industry. It even credited me with creating ottomans and upholstered benches that had storage capacity.

Cheezoholic

(2,030 posts)
11. So all these "AI's" are modeled after George Santos?
Wed May 31, 2023, 02:10 PM
May 2023

“They are not trained to tell people they don’t know what they’re doing.” The result is bots that act like precocious people-pleasers, making up answers instead of admitting they simply don’t know.
