Re: AI and hallucinations....
This was prompted by the post on AIs running nuclear power plants.
While I did ask a specific question in DU's Computer sub-forum, this may be more applicable to GD.
So there are several ways AI can "hallucinate".
This was given by Google's (ha) AI Overview (my underline):
"Statistical pattern recognition: Large language models create text by identifying and continuing statistical patterns they've learned from vast amounts of data. They don't "know" facts but generate what seems like a plausible continuation."
Ok, so maybe it's because I grew up on Star Trek's computer interactions, and continued viewing as an adult ("Oh, computer"), but aren't computers (ideally) supposed to hold/house correct data (revised as needed) to be given out when questions are asked?
"but generate what seems like a plausible continuation."
So this is based around language and concepts, like how language plays out within itself to produce a "plausible continuation".
Just how does this work to (try to) guarantee accurate results, since "They don't 'know' facts"?
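
Here is another toy sketch of my own (with made-up counts, not any real model's data) showing how a fluent continuation can still be flat-out wrong:

from collections import Counter

# Pretend the model absorbed these word-pair counts from its training
# text: "Sydney" simply appeared after "Australia" more often than
# "Canberra" did. (Hypothetical numbers, purely for illustration.)
after_australia = Counter({"Sydney": 9, "Canberra": 3})

# "Answering" here is just emitting the most common continuation;
# there is no fact lookup anywhere.
answer = after_australia.most_common(1)[0][0]
print(f"The capital of Australia is {answer}")  # confidently wrong

The sentence comes out grammatical and confident, but the answer is whatever was most frequent, not what is correct.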
It seems a bit "shaky" to me.
I hope I've made sense!
TIA.