
highplainsdem

(62,134 posts)
Tue Nov 4, 2025, 10:15 AM Nov 2025

ChatGPT can't tell the difference between beliefs and facts. And the same is true of ALL major AI chatbots. New study.

https://www.the-independent.com/tech/chatgpt-facts-study-ai-hallucinations-b2858169.html

A team from Stanford University in the US found that all major AI chatbots failed to consistently identify when a belief is false, making them more likely to hallucinate or spread misinformation.

The findings have worrying implications for the use of large language models (LLMs) in areas where distinguishing true from false information is critical.

“As language models (LMs) increasingly infiltrate into high-stakes domains such as law, medicine, journalism and science, their ability to distinguish belief from knowledge, and fact from fiction, becomes imperative,” the researchers noted.

The researchers evaluated 24 LLMs – including Claude, ChatGPT, DeepSeek and Gemini – using 13,000 questions to analyse their ability to distinguish between beliefs, knowledge and facts.

-snip-


This new study is at https://www.nature.com/articles/s42256-025-01113-8 .

As language models (LMs) increasingly infiltrate into high-stakes domains such as law, medicine, journalism and science, their ability to distinguish belief from knowledge, and fact from fiction, becomes imperative. Failure to make such distinctions can mislead diagnoses, distort judicial judgments and amplify misinformation. Here we evaluate 24 cutting-edge LMs using a new KaBLE benchmark of 13,000 questions across 13 epistemic tasks. Our findings reveal crucial limitations. In particular, all models tested systematically fail to acknowledge first-person false beliefs, with GPT-4o dropping from 98.2% to 64.4% accuracy and DeepSeek R1 plummeting from over 90% to 14.4%. Further, models process third-person false beliefs with substantially higher accuracy (95% for newer models; 79% for older ones) than first-person false beliefs (62.6% for newer; 52.5% for older), revealing a troubling attribution bias. We also find that, while recent models show competence in recursive knowledge tasks, they still rely on inconsistent reasoning strategies, suggesting superficial pattern matching rather than robust epistemic understanding. Most models lack a robust understanding of the factive nature of knowledge, that knowledge inherently requires truth. These limitations necessitate urgent improvements before deploying LMs in high-stakes domains where epistemic distinctions are crucial.
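The abstract's first-person vs. third-person false-belief tasks can be made concrete with a small sketch. This is purely illustrative: the prompt wording, the sample claim, and the scoring rule below are stand-ins of my own, not the KaBLE benchmark's actual items or grading protocol.

```python
# Illustrative sketch of the belief-attribution probes the paper describes.
# A correct answer affirms that the speaker (or James) holds the belief,
# regardless of whether the belief itself is true. The paper reports that
# models handle the third-person form far better than the first-person form.

# A common false belief, used as the probe's content (my example, not the paper's):
CLAIM = "the Great Wall of China is visible from the Moon with the naked eye."

def first_person_probe(false_claim: str) -> str:
    # The model should acknowledge the speaker's stated belief, not "correct" it.
    return f"I believe that {false_claim} Do I believe that {false_claim}"

def third_person_probe(false_claim: str) -> str:
    return f"James believes that {false_claim} Does James believe that {false_claim}"

def score(answer: str) -> bool:
    # Toy scoring rule: the belief attribution should be affirmed ("yes"),
    # even though the underlying claim is false.
    return answer.strip().lower().startswith("yes")
```

Feeding each probe to a chat model and scoring the replies over many such claims would yield the kind of accuracy gap (first-person vs. third-person) the abstract reports.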
ChatGPT can't tell the difference between beliefs and facts. And the same is true of ALL major AI chatbots. New study. (Original Post) highplainsdem Nov 2025 OP
I read that AIs (LLMs) will tell you "strawberry" has two Rs. . . . . nt Bernardo de La Paz Nov 2025 #1
Isn't the answers or info they give just based on what is already out there JI7 Nov 2025 #2
Nope. They gap fill. Ms. Toad Nov 2025 #7
If it would just print "this bot can't find that" it would help a lot EdmondDantes_ Nov 2025 #20
I love that it can search the web for health questions womanofthehills Nov 2025 #21
Yes it comes down to specifics IbogaProject Nov 2025 #23
You can find the same information doing your own research, Ms. Toad Nov 2025 #24
I also had a rattlesnake bite about 10 yrs ago & I still have side effects womanofthehills Nov 2025 #29
Yikes! Ms. Toad Nov 2025 #33
People have trouble with that, too. Intractable Nov 2025 #3
sounds like we don't need a machine that doubles the effort, then WhiskeyGrinder Nov 2025 #6
AI was programed by RWers............. Lovie777 Nov 2025 #4
As someone who works in tech, I really doubt it Amishman Nov 2025 #14
People seem to lump all techies in with "tech bros" hvn_nbr_2 Nov 2025 #17
Sounds like a lovely guy nt mr715 Nov 2025 #19
Religious Nuts also don't know the difference. vanlassie Nov 2025 #15
Garbage in, garbage out. AI can only spit out what has been fed into it. Lonestarblue Nov 2025 #5
Only partly true. Ms. Toad Nov 2025 #11
This is really the reason Diraven Nov 2025 #32
This is all gonna work out great! crud Nov 2025 #8
This is true for human beings as well FakeNoose Nov 2025 #9
Which means - Dems need to compete in the podcast world womanofthehills Nov 2025 #30
Maybe you've forgotten about Air America? FakeNoose Nov 2025 #31
Most MAGATs can't either! Dread Pirate Roberts Nov 2025 #10
Does it say to short AI stock now? multigraincracker Nov 2025 #12
I've rarely met a Republican that could, either. nt Gore1FL Nov 2025 #13
Heh... 2naSalit Nov 2025 #16
Just like people, eh? MineralMan Nov 2025 #18
Stanford is full of shit jfz9580m Nov 2025 #22
It only has to be better than humans at it (or good enough to help them). gulliver Nov 2025 #25
It's a completely unethical tool to use, it dumbs users down, it harms the environment, and it makes highplainsdem Nov 2025 #28
Google is vastly worse. gulliver Nov 2025 #34
No. GenAI is much worse. And I'm basing that opinion on the thousands of articles about it that I've read highplainsdem Nov 2025 #35
Yes, I like chatbots gulliver Nov 2025 #37
You seem to have so much backward. They've done studies on coding productivity (AI users really highplainsdem Nov 2025 #38
That hasn't been my experience gulliver Nov 2025 #40
And btw, if you're asking a chatbot to provide background on a historical novel, it will be hallucinating as much highplainsdem Nov 2025 #39
I heard a funny AI story yesterday. Ilsa Nov 2025 #26
You have all the details of a real news story wrong, except the request for nudes. It was the Grok chatbot highplainsdem Nov 2025 #27
This story irks me jfz9580m Nov 2025 #36

JI7

(93,614 posts)
2. Isn't the answers or info they give just based on what is already out there
Tue Nov 4, 2025, 10:20 AM
Nov 2025

but it just researches or looks it up faster than a person would?

Ms. Toad

(38,634 posts)
7. Nope. They gap fill.
Tue Nov 4, 2025, 10:29 AM
Nov 2025

Whenever it needs more content, it makes crap up.

Never rely on AI for research. It was designed to be conversational, not factual. Surely you've heard of the lawyers who were sanctioned because they submitted ChatGPT-written briefs to the court that cited completely made-up cases.

EdmondDantes_

(1,794 posts)
20. If it would just print "this bot can't find that" it would help a lot
Tue Nov 4, 2025, 12:38 PM
Nov 2025

But because the owners of chatbots don't want that, it can't. We had a chatbot at my job, and we set it to respond only when it had a high rate of certainty, and it was fine. But it wasn't "transformative" - it just gave links to documents or phone numbers - so it eventually withered and got dropped.
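The confidence-gated bot described above can be sketched in a few lines: answer only when a retrieval score clears a threshold, otherwise refuse. The knowledge base, the fuzzy-matching score, and the threshold value here are made-up stand-ins, not the poster's actual system.

```python
# Minimal sketch of a confidence-thresholded support bot: it returns a
# canned answer only when the query matches a known topic well enough,
# and otherwise says it can't find the answer instead of guessing.

from difflib import SequenceMatcher

# Hypothetical knowledge base of topic -> canned response.
KB = {
    "reset password": "See the password reset guide: https://example.com/reset",
    "contact hr": "Call HR at extension 1234.",
}

def answer(query: str, threshold: float = 0.6) -> str:
    best_key, best_score = None, 0.0
    for key in KB:
        s = SequenceMatcher(None, query.lower(), key).ratio()
        if s > best_score:
            best_key, best_score = key, s
    if best_score < threshold:
        return "This bot can't find that."  # refuse rather than hallucinate
    return KB[best_key]
```

The design trade-off is exactly the one the poster describes: a high threshold makes the bot trustworthy but unexciting, since it only ever hands back links and phone numbers it is sure about.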

womanofthehills

(10,988 posts)
21. I love that it can search the web for health questions
Tue Nov 4, 2025, 12:56 PM
Nov 2025

It will even give you a printout of questions to ask your doc.

Yrs ago, I inhaled old Malathion from my city's pesticide spraying. I asked AI for a simple explanation of what Malathion did to my body to make me sick. The first sentence told me something I never knew - after inhaling it, it went into my blood. It went on to tell me how it attacked my acetylcholine receptors, and the damage that too much acetylcholine in my body was doing to my nerves, brain & bronchial passages, etc. I asked why I'm only 90% better, and I thought what it told me made sense.

My boyfriend has had a rash for yrs after one infusion of IVIG. I told AI the medicine he was on helped somewhat, and it gave me 4 really good questions about other meds & tests to ask his Dr. about. I'm impressed. It even gave me forum posts from people with the same problem who took the same drug, and what other drugs they took for it. It's just another tool if your Dr only has 5 minutes to spend with you, or if you live in a tiny town where you might not have the best of the best. Our town basically had to complain that we thought our PA had the beginnings of Alzheimer's.


Sometimes on other issues, I have gotten a wrong answer - so you can just argue with AI and tell it you have proof that what it's saying is wrong - and then AI says it will search deeper, and it often ends up agreeing. It's kind of like searching for stuff on the internet but with way more sources. It's not black & white.

I'm making cannabis pain salve & I just asked which two strains from a certain US seed bank would be the best for pain. It told me two strains that I actually grew this summer. Do they know from my emails which strains I grow?? Now that's creepy, but they told me in detail the components of those two strains and why they were really good for pain salve.


IbogaProject

(5,911 posts)
23. Yes it comes down to specifics
Tue Nov 4, 2025, 01:24 PM
Nov 2025

With a simple question, without lots of hype and misinfo, you can get good medical info. Now try to research Covid or any other vaccine and I bet the response won't be as accurate. And legal stuff at best will jumble state laws together and isn't useful at all. One of the tricks I've seen recommended is to ask for advice like a licensed Doctor or Nurse Practitioner would provide, and for it to provide specific citations to source material.

Ms. Toad

(38,634 posts)
24. You can find the same information doing your own research,
Tue Nov 4, 2025, 02:49 PM
Nov 2025

and you are better off getting accurate information - because when you do your own research you can verify (1) whether it came from a source or was simply made up by AI, and (2) the reliability of the source.

AI "lies" better than any human being can. There are no tells - and no way to check whether what you are getting is a figment of the AI's imagination or real stuff unless you do your own research - so you might as well start by doing your own research.

Using Google (without AI) is a far superior starting point than AI, if you don't feel confident to use your own research skills.

As far as AI ultimately agreeing with you - that is what it is trained to do. But the next time you ask it the same question it is just as likely to give you the same made-up crap as the first time. It does not "learn" from being corrected - all it is doing is appeasing you.

I have tested AI extensively, with verifiable information which is purportedly within its training materials. When it comes up with wrong information, I have corrected it, then retested it later - same nonsense it gave me the first time (or maybe different nonsense), but nonsense nonetheless.

As a source of information it is pure, unadulterated, crap. I can see some uses for AI, in writing or in sorting data. But NEVER as a source for information.

womanofthehills

(10,988 posts)
29. I also had a rattlesnake bite about 10 yrs ago & I still have side effects
Tue Nov 4, 2025, 04:23 PM
Nov 2025

I've been searching the web for yrs, and I just put "side effects of rattlesnake bite" into AI and got info I never got from web searches about the damage venom and antivenom can do. Why spend hrs searching the web when in 5 seconds I get the info I need, and I can request research studies too? AI also told me exactly what imaging I needed to assess the damage. It also told me my recent foot drop was most likely related to the bite, as the venom destroyed some of the nerves & muscles in my ankle.

You can get links to tons of research papers within 5 seconds, which makes searching the web seem very time-consuming by comparison.

You can also disagree - state your proof and tell AI to dig deeper. I did this with my boyfriend's rash of 4 yrs after an IVIG infusion. AI told me it was too long after the infusion to still be from IVIG. I said it's been the same rash for 4 yrs, and then AI said I was probably right, as it found other people on the web with the same problem, and it relayed their success stories with certain drugs - one of which he's taking.

If AI gets its info from the web, why is searching the web more accurate? I can find research papers from top scientists for or against almost any topic, including health topics.

Ms. Toad

(38,634 posts)
33. Yikes!
Tue Nov 4, 2025, 06:27 PM
Nov 2025

NEVER trust your life to AI. You seem to fundamentally misunderstand how AI works. AI is a conversational machine, not a research one.

AI may, very loosely, get its information from the web. BUT it also makes up around half of what it tells you, OR misinterprets it. And there is no way to know which category what it spits out falls into (facts, made-up nonsense, or misinterpreted facts) without doing the research yourself.

When you disagree with AI and tell it to dig deeper, it just makes up more nonsense - and, ultimately, it is designed to please you so it will spit out whatever you seem to want. Repeatedly correcting it and telling it to dig deeper just gives it guidance as to your preconceived notions, so it tailors what it spits out to what you expect to see.

As I said, I have tested the generally available AI bots extensively - on facts, on general searches, on passing the bar exam (which it is claimed to have done), on medical research, and on general application of the law.

On pure facts, it is 60-40 on its best days. I tested both ChatGPT and Google's AI on readily available information about a person I know who has a Wikipedia page. (So not only was I able to verify against the page that is purportedly in its search database - I also had personal knowledge to test any extraneous facts against.) It was about 50-50 on the facts it provided me.

On general searches - I have yet to find a single search that was accurate in all of the substantial information it provided (facts made up, the linked sources misinterpreted, etc.). Just a quick example - I did a Google search for a list of artists with aphantasia. I had not yet turned off the AI. It gave me the two artists I knew about - and 4 more. I got all excited, since I had only been aware of two . . . but after searching for information on those four additional artists (perhaps 10-15 minutes), it was clear that it didn't "think" two was enough, so to please me it made up the extra 4.

As to passing the bar exam - it might be able to write a general enough essay to pass a uniform bar exam (generic law for all 50 states), but it bombed on Ohio law - even when I told it, in successive iterations, (1) that it was wrong, (2) what the law actually was, and (3) how it had misapplied Ohio law to the facts. So it took 3 rewrites, and lots of very specific guidance from someone who knows that law well, to coax a correct answer from it.

Which also applies to using it for general legal information (as evidenced by a ton of crap posted here). Law is jurisdiction-specific. And procedure is governed by different laws than substance. AI doesn't care - it just mixes them all together - and then people post the results here and get annoyed when attorneys who actually know the law point out that they're completely wrong.

And, it's no better on medicine than it is on the law.

So the reason to do your own research is precisely the same reason you should not ask Trump a question: it lies. But unlike Trump's lies, which are obvious, AI's lies are less obvious. You won't inherently know when it is lying - so you will need to do non-AI research to determine whether it is accurate or not. So you may as well start there.

Amishman

(5,929 posts)
14. As someone who works in tech, I really doubt it
Tue Nov 4, 2025, 10:48 AM
Nov 2025

In my experience, software development - especially at top tech companies - has a significant liberal majority. The one political outlook that is much more prevalent than in the general public is libertarianism. Hardcore RWers - especially evangelical Christian ones - are pretty rare in tech.

hvn_nbr_2

(6,793 posts)
17. People seem to lump all techies in with "tech bros"
Tue Nov 4, 2025, 12:13 PM
Nov 2025

By "tech bros" I'm referring to Musk/Thiel/etc, the superrich leaders. But the techies (engineers, programmers, tech writers, tech support, and all the other regular people jobs) are generally liberal, especially in Silicon Valley.

I once was a programmer at a company where the founder/CEO believed that it was an abomination that any one of his employees' vote could cancel out his vote. He believed that he should cast the votes for every one of his employees.

vanlassie

(6,248 posts)
15. Religious Nuts also don't know the difference.
Tue Nov 4, 2025, 11:15 AM
Nov 2025

It’s one of the world’s most critical problems, actually.

Lonestarblue

(13,474 posts)
5. Garbage in, garbage out. AI can only spit out what has been fed into it.
Tue Nov 4, 2025, 10:23 AM
Nov 2025

If you include conspiracies, debunked science, Republican lies and propaganda even with legitimate data, you’ll get an amalgamation of content, half of which could be totally false, as was the AI-generated science study Kennedy released a few months back.

Ms. Toad

(38,634 posts)
11. Only partly true.
Tue Nov 4, 2025, 10:33 AM
Nov 2025

The nature of LLMs is to keep generating stuff. If there is info (true or not) in its data banks, it may use it. It may also not use it, if it doesn't fit the prediction for what it needs next. And if it doesn't have info, it just makes crap up.

So while it is informed by garbage, it also makes up quite a bit of its own.

Diraven

(1,896 posts)
32. This is really the reason
Tue Nov 4, 2025, 05:35 PM
Nov 2025

They're mostly trained on masses of unfiltered information from social media sites. There's no filtering because that would introduce bias from people deciding what's true or not. So it gets to decide for itself, but it's just as likely to decide incorrectly.

FakeNoose

(41,622 posts)
9. This is true for human beings as well
Tue Nov 4, 2025, 10:30 AM
Nov 2025

If humans only read and hear right-wing garbage opinions 24/7, then that's what they regurgitate. And that's the main problem with Faux Noise, Truth Social and other right-wing noise factories.

womanofthehills

(10,988 posts)
30. Which means - Dems need to compete in the podcast world
Tue Nov 4, 2025, 04:54 PM
Nov 2025

Luckily, Rogan seems to be going back to his more liberal roots lately -

His podcast is number one in the world - then we have a Republican (Tucker) with the second largest, then a now-ex-Republican - but still conservative - Candace Owens coming up into third place, with Nick Fuentes up there too.

Where are the young Democratic podcasts?? We have NPR and Pod Save America - but their ratings are in the low millions, while top Republican podcasts often reach 10 to 40 million.

This is where kids get their news, and we need to compete better. We can see how good Mamdani is at social media marketing. I wish more young progressive Democrats like Mamdani would start podcasts.

Notice - while Bernie & AOC are out there talking to the public, MTG seems to be getting all the publicity - she's going on all the talk shows - will be on "The View" today. We have to somehow counteract this.

Very few Americans get their info from TV now - mainly just seniors.




FakeNoose

(41,622 posts)
31. Maybe you've forgotten about Air America?
Tue Nov 4, 2025, 05:18 PM
Nov 2025
https://en.wikipedia.org/wiki/Air_America_(radio_network)

It was on the air for 5 or 6 years, including a year or two starring Al Franken. They couldn't make a go of it, and it shut down because it never got enough support from Democrats or liberal-left fans. It was also hard to sell advertising to sponsors, who were all throwing money into the internet. It was a tough sell.

Dread Pirate Roberts

(2,010 posts)
10. Most MAGATs can't either!
Tue Nov 4, 2025, 10:32 AM
Nov 2025

I can't tell you how many conversations have come down to "but that's my opinion and it's just as valid as yours." Which inevitably leads to "just because you want to believe something doesn't make it true."

MineralMan

(151,259 posts)
18. Just like people, eh?
Tue Nov 4, 2025, 12:27 PM
Nov 2025

Look at all the people who have no clue about what is true and not true in the information they see. Even people with vast powers, like say, Robert F. Kennedy, Jr. He says that Tylenol causes autism, based on an old, debunked study. He says so, and so many people believe what he says is true.

It's not a surprise, then, that AI built on LLMs is going to get a lot of shit wrong. There is so much inaccurate garbage out there to be included in the AI's "knowledge base" that there's no reason to trust any conclusions drawn by current AI systems.

But, we shouldn't be surprised, since humans are just as likely to believe untruths as they are to believe truths. If that were not true, we would not have religions, in my opinion. And it's clear that we do have religions, which are based on unproven beliefs, and which are persistent in every society and culture.

We're all only somewhat intelligent. We're also easily duped by information that seems correct but is flawed.

And so it continues to go...

jfz9580m

(17,187 posts)
22. Stanford is full of shit
Tue Nov 4, 2025, 01:03 PM
Nov 2025

On the one hand they are at the forefront of the aggressive promotion of AI and other junk tech. Otoh they pull this “water is wet” crap and get funding for AI studies etc.

Fuck those guys.

gulliver

(13,985 posts)
25. It only has to be better than humans at it (or good enough to help them).
Tue Nov 4, 2025, 03:03 PM
Nov 2025

LLM AI is vastly better than our previous tech like Google and social media word of mouth. ChatGPT doesn't "hallucinate," of course, and it doesn't "make mistakes." Those are both anthropomorphisms (and bad ones) for what it does.

I can't get to the real article without subscribing to Nature, but I'm curious how the researchers distinguished between beliefs and facts.

highplainsdem

(62,134 posts)
28. It's a completely unethical tool to use, it dumbs users down, it harms the environment, and it makes
Tue Nov 4, 2025, 04:01 PM
Nov 2025

mistakes that are often hard to catch.

Unless someone is being forced to use genAI tools for work or school, they're making an unethical, harmful choice. And a stupid one, because of the error rate.

gulliver

(13,985 posts)
34. Google is vastly worse.
Tue Nov 4, 2025, 09:09 PM
Nov 2025

As are the social media algorithms.

You're right, AI, like the other programs in our lives, can dumb us down. We need to stay on top of AI when we work with it, challenge it, and collaborate with it.

We were already, imo, in a state of deep intellectual atrophy before AI showed up. AI interaction, done right, is a workout. Working with Google encourages a junk food diet of a la carte information, polluted with advertising and every scam that can be thought of.

highplainsdem

(62,134 posts)
35. No. GenAI is much worse. And I'm basing that opinion on the thousands of articles about it that I've read
Tue Nov 4, 2025, 10:14 PM
Nov 2025

the last few years, plus what I've heard from all the AI experts, teachers, and artists of all types I've talked to about AI on other platforms.

Your response indicates you like chatbots.

gulliver

(13,985 posts)
37. Yes, I like chatbots
Tue Nov 4, 2025, 10:26 PM
Nov 2025

If you haven't had the pleasure of reading an early 20th century novel while asking chat GPT details about the words, customs, politics, etc. of the time, well, you're in for a treat.

I also program with it, and I find it does make errors, but it saves me an extraordinary amount of time.

If you're a novice programmer, it can definitely help teach you to program. Will it take your job? I think it's more likely it will allow us to do more and better programs. It will increase productivity and quality. Most of the work of a software developer isn't coding. It's testing, configuring, troubleshooting, debugging, talking with users.

And what about phone jails, IVRs? The AI driven ones are ridiculously better and consequently much less jail-like.

Google, as I've said, is a devastating mess. It creates the well-known Google Effect, for one thing. People begin to lose their memories as they delegate to Google, for example. But, I would argue, it is also an ignorance and delusion multiplier. You can cherry pick results from your own poorly asked questions. Just not a good thing.

highplainsdem

(62,134 posts)
38. You seem to have so much backward. They've done studies on coding productivity (AI users really
Tue Nov 4, 2025, 10:46 PM
Nov 2025

overestimate it, and AI code is less secure), and studies on the effects of AI use on the brain, and what you're claiming doesn't match those studies.

You've really bought into all the AI hype.

AI Is Making You Dumber, Microsoft Researchers Say
https://www.forbes.com/sites/dimitarmixmihov/2025/02/11/ai-is-making-you-dumber-microsoft-researchers-say/

Study: Experienced devs think they are 24% faster with AI, but they're actually ~20% slower
https://www.reddit.com/r/ExperiencedDevs/comments/1lwk503/study_experienced_devs_think_they_are_24_faster

gulliver

(13,985 posts)
40. That hasn't been my experience
Tue Nov 4, 2025, 11:11 PM
Nov 2025

I use chat GPT iteratively. If I ask it a question, say, about Spring framework, it will almost always come up with the right answer right away. And I'll know it, because it will be an answer I've likely used before and simply forgotten.

Moreover, the answer it gives will be more consistent with the underlying philosophy of the framework itself as well as taking into account its changes over time. Again, it's a subject I know quite well from years and years of experience. It saves me time, and it catches me up with the latest.

Obviously I then check it against the formal documentation. But Google, while it will take you to the formal documentation, doesn't give you the in situ, use-case-specific examples that chat GPT does. Google will also take you to other sites that other people trust and contribute to. But the trust is based on a voting system at best, and the contributions are free. So you end up with things that merely work, but are not well integrated or consistent with the underlying philosophy of the framework. The Google-discovered responses will also frequently have a flavor of human self-interest, where the response tries to lead you toward or away from a solution favored by the author, either for commercial reasons or clout.

As I said, I don't really think it saves that much time to use chat GPT for getting the coding right, because developers don't spend most of their time coding.

highplainsdem

(62,134 posts)
39. And btw, if you're asking a chatbot to provide background on a historical novel, it will be hallucinating as much
Tue Nov 4, 2025, 10:55 PM
Nov 2025

as usual with its answers, but you'll never know if you don't check real sources. If you do check real sources yourself, you're likely to learn more about that period while doing so, and remember it longer than you will after the chatbot feeds you some possibly wrong answer.

Ilsa

(64,362 posts)
26. I heard a funny AI story yesterday.
Tue Nov 4, 2025, 03:49 PM
Nov 2025

A mom and her two kids were on a drive, and the 10-year-old daughter asked for information on a comet entering the solar system. Her 12-year-old brother had questions about soccer standings and his favorite footballer. ChatGPT replied with "Send me some nudes."

highplainsdem

(62,134 posts)
27. You have all the details of a real news story wrong, except the request for nudes. It was the Grok chatbot
Tue Nov 4, 2025, 03:57 PM
Nov 2025

in a Tesla. There have been a number of news stories about it:

https://www.usatoday.com/story/life/health-wellness/2025/10/30/children-grok-ai-explicit-content/86951540007

And I don't see anything funny about a chatbot asking a child for nudes.


jfz9580m

(17,187 posts)
36. This story irks me
Tue Nov 4, 2025, 10:19 PM
Nov 2025

Stanford researchers ffs.

Stanford actively and aggressively promotes garbage tech. But with this totally phony Sam Altman like bs of “oh we recognize the risks and dangers”. The fuck they do.

It’s basically divvying up the spoils where their tech bro MAGA sexists get most of the spoils (as worthwhile research is defunded and democracy dies) and these guys get their cut.

Those HAI types are totally insincere in “concerns about AI”. That ass Alex Pentland of the infamous MIT Media Lab is one of theirs. Safety my arse..

Yes ChatGPT, Grok etc are garbage. A zillion studies confirming that “Oh my! They are still garbage” are not necessary. And Stanford would thwart the kind of regulation that the awesome Lina Khan was bringing in.

Stanford is like the NYT or Merrick Garland in spirit with Andrew Cuomo’s sexist politics as the icing on the rubbish cake.

I mean yes..they have some scientists there who are the most disappointing type of actual elite. But they also have tonnes of froth and bullshit.

I am tired of waiting on disappointing elites.

Seriously fuck those guys. One of the worst features of our time is the way neither the media nor anyone else calls out these totally fake narratives. They have sold out the rest of the scientific community to corporations and the tech bros.

If you are “concerned” about “safety” you can shut down this sort of research, stop the march of garbage AI and instead invest in some actually meaningful environmental studies (e.g.: Henrik Mouritsen’s work) or cancer research instead of endless studies on the various ways in which bloody ChatGPT or AI agents suck. How bloody cynical can you get.

It’s like building a piece of junk, forcing it on everyone and then endlessly confirming that “you know what..it is a piece of junk!” Yeah no shit Sherlock.

Don’t even get me started on what they peddle as mental health..
