
electric_blue68

(26,804 posts)
Thu Dec 4, 2025, 12:53 PM

Re: AI and hallucinations....

Last edited Thu Dec 4, 2025, 01:24 PM - Edit history (1)

This was prompted by the post on AIs running nuclear power plants.

While I did ask a specific question in DU's Computers subforum, this may be more applicable to GD.

So there are several ways AI can "hallucinate".
This was given by Google's (ha) AI Overview (my underline):

"Statistical pattern recognition: Large language models create text by identifying and continuing statistical patterns they've learned from vast amounts of data. They don't "know" facts but generate what seems like a plausible continuation."


OK, so maybe it's because I grew up on Star Trek's computer interactions, and continued viewing as an adult ("Oh, computer" ), but aren't computers (ideally) supposed to hold/house correct data (revised as needed) to be given out when questions are asked?

"but generate what seems like a plausible continuation."
So this is based around language, and concepts; like how language plays out within itself to arrive at a "plausible continuation"?


Just how does this work to (try to) guarantee accurate results, since "They don't 'know' facts"?
It seems a bit "shaky" to me.
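For what it's worth, here's a toy sketch of what "plausible continuation" means in practice. The mini-corpus below is made up, and real models work at vastly larger scale, but the mechanism is the same: the next word is picked by how often it followed the previous word in the training text, with no fact-checking step anywhere.

```python
import random
from collections import defaultdict

# Hypothetical mini-corpus -- real models train on billions of documents,
# but the principle is the same.
corpus = (
    "the moon orbits the earth . "
    "the moon is made of rock . "
    "the moon is made of cheese . "   # false, but present in the data
    "the moon is made of cheese . "   # repeated, so it becomes MORE "plausible"
).split()

# Count which words follow which (a bigram table).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def continue_text(start, length=5):
    """Pick each next word by training frequency alone -- no fact-checking."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# The same prompt can come out differently on every run, and the repeated
# falsehood ("cheese") is now the single most probable continuation.
for _ in range(3):
    print(continue_text("moon"))
```

Notice that nothing in this loop ever asks whether "cheese" is true; it only asks how often "cheese" followed "of" in the data.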

I hope I've made sense!
TIA.

14 replies

MineralMan

(151,159 posts)
1. Probabilities. If it seems true, it probably is.
Thu Dec 4, 2025, 01:02 PM

I think that's basically how it works. If the same information shows up many times in what the LLM was trained on, it's treated as probably true.

Of course, lots of nonsense shit gets repeated many times. "I'm not saying it was aliens or anything, but it was aliens."

kysrsoze

(6,438 posts)
2. IMO, it goes back to garbage in, garbage out. It all depends on what data the models are trained on.
Thu Dec 4, 2025, 01:03 PM

So if these LLMs are just vacuuming up data from the Internet, without a concerted effort to weed out stuff like pseudoscience, junk studies, hate sites, etc., they're going to use that information as the basis for what the models consider a "plausible continuation."
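As a minimal sketch of that kind of weeding-out, a training pipeline can drop documents from bad sources before the model ever sees them. The blocklist and document shapes here are made up for illustration:

```python
# Hypothetical training-data filter: drop documents from sources
# known for junk before they ever reach the model.
BLOCKED_DOMAINS = {"junk-science.example", "hate-site.example"}  # made-up names

def keep_document(doc: dict) -> bool:
    """Keep a scraped document only if its source isn't on the blocklist."""
    return doc["domain"] not in BLOCKED_DOMAINS

scraped = [
    {"domain": "encyclopedia.example", "text": "The moon orbits the earth."},
    {"domain": "junk-science.example", "text": "The moon is made of cheese."},
]

training_data = [d for d in scraped if keep_document(d)]
# Whatever survives this filter is the raw material the model will
# later draw on for its "plausible continuations."
```

Skip the filter, and the garbage goes in; garbage continuations come out.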

You also have the problem of certain people with too much influence actively trying to warp the knowledge base of the model. This is exactly why Musk's Grok is absolute shit. He actively fed the model Nazi/hate-related content, because he didn't like that the model differed from his skewed right-wing, white supremacist worldview.

Attilatheblond

(8,797 posts)
3. I get it & share your concern with questioning just how AI tells fact from bullshit
Thu Dec 4, 2025, 01:05 PM

and whether it actually does know the difference.

My mind keeps wandering back to the old adage from the early days when computers became more widespread: "Garbage in/garbage out." Only now I am also suspicious about the motivations of the powers who hire the programmers. In an age where propaganda is a major tool of those benefiting most from our economy, it's a fair consideration.

highplainsdem

(61,832 posts)
6. Please don't use Google's AI Overview. It's often wrong and it's extremely harmful to the internet,
Thu Dec 4, 2025, 01:37 PM

depriving websites of traffic.

LLMs - large language models - have always hallucinated and always will. They don't know what's true. They don't think. They just appear to be thinking.

In the same way that AI image generators can vomit out endless different images from a single prompt, text generators can generate endless different responses to the same prompt, with no awareness of whether any of them are correct. The only reason you don't see AI Overview offer you 3 or 30 different answers simultaneously is that it would cost the AI company more money, and it would make it more obvious that the AI has no idea what it's talking about.

A couple of old threads I posted linking to info that can help you:

ChatGPT Is Bullshit (academic paper published June 8 that's getting a lot of attention)
https://www.democraticunderground.com/100219045534

AI hallucinations are getting worse - and they're here to stay (New Scientist)
https://www.democraticunderground.com/100220307464

electric_blue68

(26,804 posts)
8. TY. I will look at those links... I occasionally just use the Overview....
Thu Dec 4, 2025, 01:46 PM

Usually, though, I go to at least one or two other sites.
Other times I don't even look at the Overview.

Response to highplainsdem (Reply #6)

9. You can actually set most LLMs to weight objective fact over probabilistic determination.
Thu Dec 4, 2025, 01:53 PM

If objective fact is rated a certain factor higher than probability infill, it will automatically search (if you let it) for the sites and data sources deemed most "factual," which it will then present to you as links for corroboration and continuing research.

I've redesigned a few AIs, using certain open-source LLM models on a no-internet tower, so they respond in an exclusively factual mode unless otherwise prompted.
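As a rough illustration of that kind of offline setup, here is a minimal sketch assuming the llama-cpp-python bindings and a locally downloaded GGUF model file; the model path and system prompt are placeholders, and a low temperature only suppresses improbable continuations, it doesn't make the model "know" facts:

```python
# Minimal sketch of an offline, "stick to the facts" setup, assuming the
# llama-cpp-python bindings and a local GGUF model file (placeholder path).
from llama_cpp import Llama

llm = Llama(model_path="./models/local-model.gguf")  # runs with no internet

SYSTEM = (
    "Answer only with information you are confident is factual. "
    "If you are not sure, say 'I don't know' instead of guessing."
)

def ask(question: str) -> str:
    prompt = f"{SYSTEM}\n\nQuestion: {question}\nAnswer:"
    # temperature=0.0 means "least creative": always take the most
    # probable continuation rather than sampling a random one.
    out = llm(prompt, max_tokens=128, temperature=0.0)
    return out["choices"][0]["text"].strip()

print(ask("What is the moon made of?"))
```

The instructions and the temperature setting shape which continuations the model prefers; whether the most probable continuation is actually true still depends on the training data.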

In essence, like any tool, AI is only as good as the person using it. How many people never knew the simplest Google operators, like putting a phrase in quotes to search for it verbatim, or -term to exclude a specific term?

AI isn't hard or nebulous. It's quite fact based, but you need to know what you're doing if you want to get the most out of it.

electric_blue68

(26,804 posts)
11. From reading the intro to the first link...
Thu Dec 4, 2025, 01:57 PM

If I read the explanation correctly, it would seem that Musk's Grok was doing "hard bullshitting."

I'll look at the rest this evening.
TY.

The Madcap

(1,883 posts)
12. When workplaces and governments go full AI
Thu Dec 4, 2025, 02:03 PM

(Artificial Imbecility), it is the end. There will be no recovery unless the plug is pulled. And even then, it will likely be too late.

getagrip_already

(17,802 posts)
13. I use it for tech stuff, but am careful with what it tells me...
Thu Dec 4, 2025, 04:12 PM

I had it help me set up a firewall, but I made backups of the good configs so I had a way to back out if I ended up with a tangled mess of rules. A few changes needed to be rolled back, but most were good. I also had it troubleshoot a config issue on my network and was able to increase my internet speed from 50 to 475 Mbps without any provider changes.
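The back-out step is just a timestamped copy of the known-good config taken before applying anything the AI suggests. A minimal sketch of that habit (the firewall config path is a made-up placeholder):

```python
# Minimal sketch of the "backup before you let the AI touch it" habit;
# the firewall config path is hypothetical.
import shutil
from datetime import datetime
from pathlib import Path

CONFIG = Path("/etc/firewall/rules.conf")  # placeholder path

def backup_config() -> Path:
    """Copy the current config to a timestamped file so any AI-suggested
    change can be undone by copying the backup back over the original."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    backup = CONFIG.parent / f"{CONFIG.name}.{stamp}.bak"
    shutil.copy2(CONFIG, backup)
    return backup

saved = backup_config()
print(f"Known-good config saved to {saved}")
# Roll back with: shutil.copy2(saved, CONFIG)
```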

There are a bunch of things like that, but even though I'm technical, it still saved me a lot of research time. It was faster to use than not.

Social issues? phtkkkkt.
