
General Discussion


electric_blue68

(25,412 posts)
Thu Dec 4, 2025, 11:53 AM Dec 4

Re: AI and hallucinations....

Last edited Thu Dec 4, 2025, 12:24 PM

This was prompted by the post on AIs running nuclear power plants.

While I did ask a specific question in DU's Computer sub-forum, this may be more applicable to GD.

So there are several ways AI can "hallucinate".
This was given by Google's (ha) AI Overview (my underline):

"Statistical pattern recognition: Large language models create text by identifying and continuing statistical patterns they've learned from vast amounts of data. They don't "know" facts but generate what seems like a plausible continuation."


Ok, so maybe it's because I grew up on Star Trek's computer interactions, and continued viewing as an adult ("Oh, computer" ): but aren't computers (ideally) supposed to hold/house correct data (revised as needed), to be given out when questions are asked?

"but generate what seems like a plausible continuation."
So this is based around ?language, and concepts; like how language plays outwithin itself to state a "plausible continuation".


Just how does this work to (try to) guarantee accurate results, since "They don't 'know' facts"?
It seems a bit "shaky" to me.

I hope I've made sense!
Tia.
