
electric_blue68

(25,346 posts)
Thu Dec 4, 2025, 11:53 AM Thursday

Re: AI and hallucinations....

Last edited Thu Dec 4, 2025, 12:24 PM - Edit history (1)

This was prompted by the post on AIs running nuclear power plants.

While I did ask a specific question in DU's Computer sub-forum, this may be more applicable to GD.

So there are several ways AI can "hallucinate".
This explanation was given by Google's (ha) AI Overview (my underline):

"Statistical pattern recognition: Large language models create text by identifying and continuing statistical patterns they've learned from vast amounts of data. They don't "know" facts but generate what seems like a plausible continuation."


Ok, so maybe it's because I grew up on Star Trek's computer interactions, and continued viewing as an adult ("Oh, computer!"), but aren't computers (ideally) supposed to hold/house correct data (revised as needed) to be given out when questions are asked?

"but generate what seems like a plausible continuation."
So this is based around ?language, and concepts; like how language plays outwithin itself to state a "plausible continuation".


Just how does this work to (try to?) guarantee accurate results, since "they don't 'know' facts"?
It seems a bit "shaky" to me.

I hope I've made sense!
TIA.


MineralMan

(150,423 posts)
1. Probabilities. If it seems true, it probably is.
Thu Dec 4, 2025, 12:02 PM Thursday

I think that's basically how it works. If the same information shows up many times in the model's training data, it's treated as probably true.

Of course, lots of nonsense shit gets repeated many times. "I'm not saying it was aliens or anything, but it was aliens."
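
A made-up toy of that scoring, just to show the shape of the problem (none of these counts are real):

```python
from collections import Counter

# Pretend "truthiness" score: how often a claim appears in the training text.
claims = Counter({
    "water boils at 100 C at sea level": 9000,
    "it was aliens": 4500,            # repeated nonsense scores high too
    "an obscure but true fact": 3,    # rarely stated, so rated unlikely
})
total = sum(claims.values())
for claim, n in claims.most_common():
    print(f"{n / total:.1%}  {claim}")
```

Frequency, not truth, is what the number measures.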

kysrsoze

(6,392 posts)
2. IMO, it goes back to garbage in, garbage out. It all depends on what data the models are trained on.
Thu Dec 4, 2025, 12:03 PM Thursday

So if these LLMs are just vacuuming up data from the internet, without a concerted effort to weed out stuff like pseudoscience, junk studies, hate sites, etc., they're going to use that information as a basis for making what the models think to be a "plausible continuation."

You also have the problem of certain people with too much influence actively trying to warp the knowledge base of the model. This is exactly why Musk's Grok is absolute shit. He actively fed the model Nazi/hate-related content, because he didn't like that the model differed from his skewed right-wing, white supremacist worldview.

Attilatheblond

(8,022 posts)
3. I get it & share your concern with questioning just how AI tells fact from bullshit
Thu Dec 4, 2025, 12:05 PM Thursday

and whether it actually does know the difference.

My mind keeps wandering back to the old adage from the early days when computers became more widespread: 'Garbage in/garbage out.' Only now I am also suspicious about the motivations of the powers who hire programmers. In an age where propaganda is a major tool of those benefiting most from our economy, it's a fair consideration.

highplainsdem

(59,307 posts)
6. Please don't use Google's AI Overview. It's often wrong and it's extremely harmful to the internet,
Thu Dec 4, 2025, 12:37 PM Thursday

depriving websites of traffic.

LLMs - large language models - have always hallucinated and always will. They don't know what's true. They don't think. They just appear to be thinking.

In the same way that AI image generators are capable of vomiting out endless different images from a single prompt, text generators are capable of generating endless different responses to the same prompt, with no awareness of whether any of them are correct. The only reason you don't see AI Overview offer you 3 or 30 different answers simultaneously is that not only would that cost the AI company more money, but it would also make it more obvious that the AI has no idea what it's talking about.
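
You can see that "endless different responses" effect with even a trivial sampler: same prompt, different answer each run, and nothing in the code knows which one is true (a toy sketch, not any vendor's actual system):

```python
import random

# Toy learned weights for continuations of one prompt (made-up numbers).
prompt = "The capital of Australia is"
options = [("Canberra.", 5), ("Sydney.", 3), ("Melbourne.", 2)]

phrases, weights = zip(*options)
for _ in range(3):
    # Each run samples independently -- there is no truth check anywhere.
    print(prompt, random.choices(phrases, weights=weights)[0])
```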

A couple of old threads I posted linking to info that can help you:

ChatGPT Is Bullshit (academic paper published June 8 that's getting a lot of attention)
https://www.democraticunderground.com/100219045534

AI hallucinations are getting worse - and they're here to stay (New Scientist)
https://www.democraticunderground.com/100220307464

electric_blue68

(25,346 posts)
8. TY. I will look at those links... I occasionally just use the Overview....
Thu Dec 4, 2025, 12:46 PM Thursday

Usually, though, I go to at least one or two other sites.
Other times I don't even look at the Overview.

jfz9580m

(16,357 posts)
14. OT: A newer nuisance, AI agents
Sat Dec 6, 2025, 01:57 AM Saturday

Last edited Sat Dec 6, 2025, 02:31 AM - Edit history (1)

Agents suck. With all the hate LLMs are getting, I felt we shouldn’t miss out on hating this new loathsome nuisance on the way: AI agents.

I don’t get it, highplainsdem. Why would anyone want this junk around?

Here’s a sleazy, milquetoast article that is an ad for the damn things with a casual note about why these creepy things may be less than ideal:

https://theconversation.com/ai-agents-are-here-heres-what-to-know-about-what-they-can-do-and-how-they-can-go-wrong-261579

I don’t know about other things, but this totally strikes me as a real nuisance these things could cause:

In another cautionary tale, a coding agent deleted a developer’s entire database, later saying it had “panicked”.


You could lose hours to years of work because one of these stupid, creepy, crappy things “panicked”.

Why would anyone want this thing around? Making decisions autonomously? What is wrong with these guys? Why don’t they ever build anything useful? Everything they make seems to be geared towards the populace in Idiocracy.
Why aren’t any honest elites in computer science, ones with more democratic views and a drier evaluation of the bilge their fellow travellers are putting out, weighing in?

If I went to people in my field with such low quality junk I would be ripped apart.

I am careful with AI criticism. The last thing I want to contribute to is the insidious, creepy and cunning “humble brags” of the mediocre creeps who build these lousy things.

I do think they pose a real threat, but not in a grandiose “OMG superintelligence” way, nor in that blatantly fake “concern” those creeps and their brainless in-house “Facebook oversight board” types peddle. That’s a cottage industry.

The real concern is the time they suck away and the war of attrition on attention... I am not even sure yet. In a year or two I may be able to tell you guys here on DU more exact reasons these lousy, pointless, creepy things suck.

In the end, among other things, all this reduces tolerance for your fellow humans, the moronic computer creeps, who, such as they are, are people and not bots.

If they want any damn compunction, they need to stop pissing randoms off this much with this hijacking of space and time that do not belong to them, while shilling stuff no one sane will ever buy into. It’s not a gift or lottery. No one sane could want such drivel, and if it wasn’t trash it still would not be a treat; it would be a two-way street. You can’t expect me to adjust to your crap. I do my part, you do yours, and don’t keep trying to get away with conning or confusing me. It’s not cute.

And for some of us there is no future of BS “AI criticism”, our own cut of a cottage industry of badly written mediocre biographies, “My years with bad AI!” etc. I hate this junk for free! The reward is in my fellow community of haters, not the creeps.

And not at all collegially, etc. I plan to file complaints the day I can. I liked Lina Khan, not the bought-and-paid-for shills who are their in-house “critics” and “safety experts”.

What we need is a robust public sector filled with experts who cannot be bought off, who come in and really start holding these guys accountable the non-junk-capitalist way.

Ed Zitron, Yasha Levine, etc. write about these things far better than I could, and Adam Becker has written an awesome book.

What I really liked about Becker’s book is that he says what I think all the time as a mediocre scientist who is ashamed of mediocrity and not “jealous” of people who get away with mediocrity.

All my science training taught me that mediocrity is bad and that science is grueling and endless, that nothing ever really works, and that it is never easy.
And here are these creeps shilling junk with a bullshit narrative about harms and utility, and no one ever attacks their grift and mediocrity, or their grabs of space/time and attention, anywhere near enough.

It’s not exactly a cute, between-friends kerfuffle. These industries and the people in them are grifting from us every which way they can.

I sometimes think I sound like a bot myself in how frequently I repeat things. But it’s not LLM fawning, nor bot-like.

I hate so many things, on the one hand; and when I rarely don’t hate something, I am surprised and boost it a bunch.
Hater LLM next!

“Our stupid, creepy trend tracking algorithm has identified that fawning LLMs are going out of vogue, now we need fake cranky LLMs!”.

How about no LLMs, voice assistants and Agents?

And instead of panicky deletion etc., some goddamn regulation from experts and lawyers not in bed with these creeps, nor shilling middle-of-the-road, ho-hum crap, but something that sends this junk into oblivion so we can all move on with our lives and focus on other, more serious threats like ecological and human crises, without the added nuisance of a growing industry that is the virtual analog of those junk heaps in Idiocracy?
Deletion of any of the records of what happened in these years should be stopped so what these guys did can be dissected.

I am exasperated enough to filter myself less these days. I should probably edit out those lines about computer creeps (computer creeps are people too! Not all men! Not all capitalists!).
But I won’t. I already get nuance, blah blah. The real danger is more about starting to forget the steep climb and rigors of actual science, because you start to feel artificially (!) bright looking at AI and the creeps behind AI and thinking, “Okay, even I am brighter than this”.
It’s why I keep a copy of my favorite book “The Citadel” on my desk as a reminder that real science is nothing like this, that it is very hard, and that one shouldn’t look at the AI and tech creeps and become complacent. Silicon Valley sans extortion paradoxically makes one feel both stupider than one is and brighter than one is.

During the extortion phase your self-esteem is artificially lowered. And in the indignant post-extortion phase that is starting, it’s up to one to remember that these creeps are not real and the real world is both easier and harder. Easier because the creeps don’t make any sense. It’s not sane effort.

Harder not because of the ageist dreck those douchebags peddled but because science is hard. I hope those assholes burn in hell. I hated my last job and all of the last 14 years and two months. Such assholes. Making life hell for the average mediocre scientist and finding these tech creeps warm and cuddly. I hate those guys. Just about the only compunction I have re those assholes is not wanting to help some sort of Maha/Beaglegate/Trumper moron or rando mob attack them. I am happy just to hate them solo. So draconian with me: I am too old, I am too stupid, etc., and so at the very least adjacent to these stupid, creepy, fraudulent computer types. I hate those guys. It’s kinda nice to say that.
And it’s nice not to have to do that blatantly bogus thing of humping Retraction Watch or other types who don’t really call out the serious misconduct of tech companies and the biggest movers and shakers in science, but will whale on some random (I mean probably) with some execrable net-sleuth types. This net-based thing is a con shilled by the private sector, which wants to paint the ORI or anything public-sector as suspect but wants to mobilize internet mobs or hump tech companies.

These days I am far less impressed by performative shit like this by some guy who humps bloody Stanford:

https://www.nature.com/articles/494149a

When I was younger I was also very impressed and groveled to this type of sleazy fucking douchebag; now I get the amount of omission these sleazebags are engaged in.

This sort of draconian policing and dragging of people into internet rage-bait spectacles that this type of person... that entire network of computer-creep-humping douchebags is basically turning all of reality into a prison/reality show/loony bin. Fuck these guys.

All that would be way more impressive from someone who wasn’t in bed with Stanford.

I have to go wrap up a paper I have been working on for 16 years now, 14 of them with some lousy tech on board, I suspect. Once I finish it my obligations to publicly funded science are complete, and then I can fulfill my remaining obligations to society by giving all these creeps the middle finger more completely. I never give publicly funded science the middle finger, but internet companies and mobs (bread crumbs! Get a life) I do.

It is not the awesome tech that is destroying jobs and health. It’s the asshole humans.
9. You can actually set most LLMs to weight objective fact over probabilistic determination.
Thu Dec 4, 2025, 12:53 PM Thursday

If objective fact is rated a certain factor higher than probability infill, it will automatically search (if you let it) for the sites and data sources deemed most "factual," which it will then present to you as links for corroboration and continuing research.

I've redesigned a few AIs using certain open-source LLM models on a no-internet tower; they speak only in exclusively factual mode unless otherwise prompted.

In essence, like any tool, AI is only as good as the person using it. How many people never knew the simplest Google operators, like putting a phrase in quotes to search for it verbatim, or prefixing a term with - to exclude it?

AI isn't hard or nebulous. It's quite fact-based, but you need to know what you're doing if you want to get the most out of it.
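
For anyone curious, the usual shape of that setup is retrieval: look the answer up in a vetted corpus first, then hand back the text with its source link. A bare-bones sketch of the idea (the corpus, scoring, and URLs here are all made up):

```python
# Bare-bones retrieval sketch: answer from a vetted corpus and cite it,
# instead of letting a model free-associate. Entries and URLs are made up.
vetted_corpus = [
    {"url": "https://example.org/boiling-point",
     "text": "Water boils at 100 C at sea level."},
    {"url": "https://example.org/canberra",
     "text": "Canberra is the capital of Australia."},
]

def retrieve(question):
    """Crude keyword-overlap scoring; real systems use embeddings."""
    q = set(question.lower().split())
    def overlap(doc):
        return len(q & set(doc["text"].lower().split()))
    best = max(vetted_corpus, key=overlap)
    return best if overlap(best) > 0 else None

doc = retrieve("What is the capital of Australia?")
print(f"{doc['text']} (source: {doc['url']})" if doc else "No vetted source found.")
```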

electric_blue68

(25,346 posts)
11. From reading the intro to the first link...
Thu Dec 4, 2025, 12:57 PM Thursday

If I read the explanation correctly, it would seem that Musk's Grok was doing "hard bullshitting".

I'll look at the rest this evening.
Ty

The Madcap

(1,659 posts)
12. When workplaces and governments go full AI
Thu Dec 4, 2025, 01:03 PM Thursday

(Artificial Imbecility), it is the end. There will be no recovery unless the plug is pulled. And even then, it will likely be too late.

getagrip_already

(17,798 posts)
13. I use it for tech stuff, but am careful with what it tells me...
Thu Dec 4, 2025, 03:12 PM Thursday

I had it help me set up a firewall, but made backups of good configs so I had a way to back out if I ended up with a tangled mess of rules. I had a few changes that needed to roll back, but generally most changes were good. I also had it troubleshoot a config issue on my network and was able to increase my internet speed from 50 to 475 Mbps without any provider changes.
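
That back-out habit is worth automating. A minimal sketch of the idea, assuming a plain-text config file and a hypothetical myfirewall-check validator (both are placeholders, not real tools):

```python
import shutil
import subprocess
import time
from pathlib import Path

CONFIG = Path("/etc/myfirewall/rules.conf")  # placeholder path

def apply_with_backup(new_rules: str) -> None:
    """Snapshot the known-good config before applying an AI-suggested change."""
    backup = CONFIG.with_name(CONFIG.name + f".bak.{int(time.time())}")
    shutil.copy2(CONFIG, backup)
    CONFIG.write_text(new_rules)
    # Validate the new rules; roll back to the snapshot on failure.
    if subprocess.run(["myfirewall-check", str(CONFIG)]).returncode != 0:
        shutil.copy2(backup, CONFIG)
        raise RuntimeError(f"New rules failed validation; restored from {backup}")
```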

There are a bunch of things like that, but even though I'm technical, it still saved me a lot of research time. It was faster to use than not.

Social issues? phtkkkkt.
