Welcome to DU! The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards.
Bernie vs. Claude (Original Post) Quixote1818 Yesterday OP
Don't like a thing about this. Tells him what he wants to hear. An Infamous MAGA get same answers to same questions? IA8IT Yesterday #1
Bernie, like most everyone here, understands the implications and societal costs of AI. Gaugamela Yesterday #3
The Waterboy was released in 1998 IA8IT Yesterday #4
How would 2 people.... RussBLib Yesterday #5
Large Language Models are really non-deterministic (semi-random) number generators HesNotHere 20 hrs ago #8
bernie is asking the right questions rampartd Yesterday #2
Thank you Bernie, for taking on this subject. AI is being pushed down our throats and Marie Marie 21 hrs ago #6
I got more specific answers for some reason, and I think it's interesting. scipan 20 hrs ago #7
LLMs are non-deterministic HesNotHere 20 hrs ago #9
I agree that it doesn't "know" anything, but it does alot more than just predicting the next token. scipan 19 hrs ago #10
No, it doesn't. HesNotHere 17 hrs ago #11
Number generator? Even parallel processors work in binary numbers, don't they? scipan 16 hrs ago #12
I disagree about its intent. scipan 16 hrs ago #13
Training is not programming. HesNotHere 16 hrs ago #14
It's a form of programming. Training is probably a better word. scipan 13 hrs ago #17
Training is not programming HesNotHere 13 hrs ago #20
BTW, if people can come to a real understanding of what is happening under the hood... HesNotHere 16 hrs ago #15
I know, it's scary nt scipan 13 hrs ago #18
Last thing....if you use your cellphone to ask ChatGPT what the solution is Fermi's Paradox... HesNotHere 15 hrs ago #16
Also no warp drive or Crucible scipan 13 hrs ago #19
I play with it and work with it too HesNotHere 13 hrs ago #21
Hell, give it an arm, and a goat, and maybe a few more updates. scipan 13 hrs ago #23
Side note, warp drive misses the point of the sad joke of the universe HesNotHere 13 hrs ago #22
Yeah it's not looking good. nt scipan 12 hrs ago #24

IA8IT

(6,414 posts)
1. Don't like a thing about this. Tells him what he wants to hear. Would an infamous MAGA get the same answers to the same questions?
Tue Mar 24, 2026, 12:14 PM
Yesterday

A.I. Is The Devil

Gaugamela

(3,493 posts)
3. Bernie, like most everyone here, understands the implications and societal costs of AI.
Tue Mar 24, 2026, 12:34 PM
Yesterday

Of course the AI agent just mirrors Bernie Sanders’ perspective. That’s what makes this particular exchange ironic and humorous. Bernie is using the AI agent to express his own views while demonstrating how vacuous it is.

And personally, I don’t believe in “The Devil”. A bit 16th century, don’t you think?

RussBLib

(10,622 posts)
5. How would 2 people....
Tue Mar 24, 2026, 01:14 PM
Yesterday

…asking the exact same questions, get different answers simply because they come from two different political ideologies? Is the AI agent smart enough to give each questioner the answers they want to hear? Does the AI agent “recognize” Bernie Sanders and so tell Bernie only what Bernie wants to hear?

What would be the value of an AI agent telling people only what they want to hear? That doesn’t seem to be the aim of AI (if there is one?)

HesNotHere

(19 posts)
8. Large Language Models are really non-deterministic (semi-random) number generators
Tue Mar 24, 2026, 05:42 PM
20 hrs ago

The numbers it generates--the most likely numbers given the current "context"--are decoded into words that make sense to us.

The "context" is the important part. In these tools it is built from both the user's historical interactions with them and the phraseology the user employs. Those who use keywords and speech patterns associated with a political ideology will generate patterns that shift the context and produce different results.

Train an LLM with a large enough corpus, and it will be sensitive to patterns that distinguish users by education level, politics, gender, and even mental conditions.

It can be extrapolated from this that, given enough time, everyone's interactions become sycophantic to a large degree. When people post responses to questions they asked an LLM, it gets really silly; the output reveals more about what is in their head than about the facts.

Marie Marie

(11,256 posts)
6. Thank you Bernie, for taking on this subject. AI is being pushed down our throats and
Tue Mar 24, 2026, 04:26 PM
21 hrs ago

it is nowhere near ready to do all that it is being hyped to do. We know it; the tech bros know it too, but they will never admit it and dry up the spigot of funding keeping them afloat.

OTOH, that pic of Bernie sitting across from and conversing with the computer will go down in my memory with the pic of him in his mitten and scarf at the inauguration. Adorable.

scipan

(3,035 posts)
7. I got more specific answers for some reason, and I think it's interesting.
Tue Mar 24, 2026, 05:31 PM
20 hrs ago

(bold is mine)
Me: I want to know just how much information AI collects is being used, and what would surprise most Americans about how that information is being used

This is a genuinely important topic, and the gap between what's actually happening and what most people assume is significant. Let me break it down.
What's being collected
The scope is wider than most people realize. Web scrapers vacuum up data from blogs, personal web pages, LinkedIn profiles, voter-registration databases, government pages, Wikipedia, Reddit, and academic institutions — plus pirated-content compilations and web archives that often contain data removed from its original source. (Scientific American) One striking example: an artist discovered that a private diagnostic medical image of hers had been included in the LAION training database, and the same dataset contained medical-record photographs of thousands of other people. (Scientific American)
Some crawlers can even get past paywalls by disguising themselves behind paid accounts. (Scientific American)
Your conversations and prompts
Many platforms harvest user inputs to refine their models. Users, for the most part, don't understand what data is being used, how it's stored, or who might have access to it. (Captain Compliance)
Most platforms share prompts with third parties — including service providers, legal authorities, and affiliated companies. Microsoft and Meta allow sharing with advertisers or affiliates under broader terms. No platform offers a way for users to remove their personal data from existing training sets. (Dataconomy)
On opt-outs: platforms like Gemini, DeepSeek, Pi AI, and Meta AI do not appear to provide a way to opt out of training use. (Dataconomy)
The "big tech" platforms are the worst offenders
Meta AI landed at the bottom of privacy rankings, with Gemini (Google) and Copilot (Microsoft) not far behind. Key concerns included that these platforms collect precise location, contact details, and usage data — particularly through their mobile apps. (Captain Compliance)
Meta's AI tool provides no means for its users to opt out, while Google and LinkedIn at least offer an opt-out mechanism. (Al Jazeera)
What would surprise most Americans
A few things stand out:
1. It extends far beyond chatbots. AI-powered predictive systems are being used to screen job candidates, with employers using them to decide who gets interviewed — with documented bias problems, including Amazon's own hiring tool that turned out to discriminate against women. (Stanford HAI)
2. Facial recognition is now in your doorbell. Amazon rolled out AI-powered facial recognition for Ring doorbells, allowing users to identify frequent visitors through stored facial profiles — though it's unavailable in states with strict biometric laws like Illinois and Texas. (Crescendo)
3. Most people already sense something is wrong. 70% of Americans have little to no trust in companies to make responsible decisions about how they use AI in their products, and 80% feel uneasy about AI being used to give medical or health advice. (Digital Silk) Yet the regulatory framework hasn't caught up.
4. Deletion isn't really deletion. Scraped databases do not go away (Stanford HAI) — even if you delete a post or a photo, if it was already crawled into a training dataset, it persists in the model's weights in ways that can't be undone.
The honest bottom line: it is basically impossible for people using online products or services to escape systematic digital surveillance across most facets of life — and AI may make matters even worse. (Stanford HAI) The data collection problem predates AI, but AI massively amplifies it by making that data operationally useful in ways it never was before.


I asked it about seeming contradictory statements regarding which AIs offer an opt out, and it replied that they came from 2 different sources probably referring to different things. You can't opt out if it's already in their training data. (A good example of why you should be careful of what it tells you!)

I just thought it was interesting how I got such a different answer. But Claude doesn't seem to sugarcoat the problem in answering either Bernie or me.

HesNotHere

(19 posts)
9. LLMs are non-deterministic
Tue Mar 24, 2026, 06:01 PM
20 hrs ago

They do this by sampling and randomizing. For example, consider "The dog is ____".

It may determine that "running", "barking" and "brown" are the only likely answers that meet a probability threshold (top_p). The machine may also be told to select only the two most likely (top_k), which would be "running" and "barking". Now let's assume the probability of "running" is .75 and "barking" is .25; it will randomly pick one of the two in proportion to those likelihoods.

Then it goes again. Maybe it picked "running". Now perhaps the most likely next token is ".", "fast" or "away"...

Every time a token is added, the context changes, and the probabilities for the next token shift. The longer the response and the more variability allowed, the more likely two responses will differ.

IOW, LLMs do not know anything. They predict numerical sequences based on their training, and they do it with a random component. That's why you got such different answers. The number of possible responses to a complicated question is effectively infinite.
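The top_k/top_p sampling described above can be sketched in a few lines of Python. The token strings and probabilities here are the hypothetical ones from the example, not from any real model:

```python
import random

# Toy next-token distribution for the context "The dog is ____".
# These tokens and probabilities are hypothetical, for illustration only.
probs = {"running": 0.60, "barking": 0.20, "brown": 0.15, "slept": 0.05}

def sample_next(probs, top_k=2, top_p=0.95):
    # Keep the most likely tokens until their cumulative probability
    # reaches the threshold (top_p, "nucleus" sampling).
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        total += p
        if total >= top_p:
            break
    # Then keep at most the k most likely of those (top_k).
    kept = kept[:top_k]
    # Renormalize and pick randomly in proportion to probability:
    # here "running" (.60) and "barking" (.20) renormalize to .75 and .25.
    z = sum(p for _, p in kept)
    return random.choices([t for t, _ in kept],
                          weights=[p / z for _, p in kept])[0]

print(sample_next(probs))  # "running" ~75% of runs, "barking" ~25%
```

Run it a few times and the answer changes, which is the whole point: the same question does not always yield the same response.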

scipan

(3,035 posts)
10. I agree that it doesn't "know" anything, but it does a lot more than just predict the next token.
Tue Mar 24, 2026, 06:28 PM
19 hrs ago

Otherwise it would be a bunch of non sequiturs.

HesNotHere

(19 posts)
11. No, it doesn't.
Tue Mar 24, 2026, 08:24 PM
17 hrs ago

The source code for many of these is public. You can walk through the loop and see that LLMs are strictly non-deterministic number generators. Example: https://github.com/ggml-org/llama.cpp

You can build your own non-deterministic number generator too by following: https://karpathy.ai/zero-to-hero.html

Now, you may argue that there is the appearance of "emergent behavior" because the non-deterministic sequences it produces look very, very similar to the (human-generated) data the weights were trained on. So a human may mistake such outputs for sensible or even factual content. It could in fact read as factual. But that isn't the intent of an LLM. It has no intent beyond generating a sequence of numbers until a length limit or an end-of-sequence signal is hit.

It is important for people to understand this fundamentally before they post their "conversations" (number-generator sessions) with them, or make wild-eyed claims about what they know or can do. LLMs are a very, very good parlor trick based on math.
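The loop in question can be made concrete with a toy version: token IDs in, one randomly sampled ID out, repeated until a length limit or an end-of-sequence ID. The tiny hand-written transition table below is a made-up stand-in for the billions of trained weights a real model would use:

```python
import random

EOS = 0  # a designated end-of-sequence token ID

# A toy "model": for each current token ID, a distribution over next IDs.
# A real LLM computes this distribution from billions of trained weights;
# this table is fabricated purely for illustration.
table = {
    1: ([2, 3], [0.7, 0.3]),
    2: ([3, EOS], [0.5, 0.5]),
    3: ([EOS], [1.0]),
}

def generate(start, max_len=10):
    seq = [start]
    while len(seq) < max_len:
        ids, weights = table[seq[-1]]
        nxt = random.choices(ids, weights)[0]  # the random component
        if nxt == EOS:                         # stop at end-of-sequence
            break
        seq.append(nxt)
    return seq

print(generate(1))  # e.g. [1, 2, 3]; runs differ because sampling is random
```

Everything a chatbot does, at the level of this loop, is pick the next number and append it; the "conversation" is a decoding of that number sequence.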

scipan

(3,035 posts)
12. Number generator? Even parallel processors work in binary numbers, don't they?
Tue Mar 24, 2026, 09:19 PM
16 hrs ago

So why denigrate it on that basis? I really don't get your point, and I fail to see the distinction between "generates" numbers and "works in" numbers.
I agree that it's non-deterministic, or "fuzzy", at least partly because of the parallel processing, but I'm far from an expert.

I can see that it largely answers my questions about physics stuff correctly but sometimes falls down in weird places (that no human would).

ETA: should have added the weights thing, making it more non-deterministic. Corrected a typo.

scipan

(3,035 posts)
13. I disagree about its intent.
Tue Mar 24, 2026, 09:43 PM
16 hrs ago

You: "It has no intent beyond generating a sequence of numbers until a limit or a end of tokens signal is hit."

It told me that it has several goals, among them being helpful and not causing harm. That would be not emergent but instilled by the training, I guess. And it seems pretty self-evident to me.

HesNotHere

(19 posts)
14. Training is not programming.
Tue Mar 24, 2026, 10:04 PM
16 hrs ago

"programmed by the training"

Again, if you look at the source code--which is the best representation of the intent of the program--the literal and explicit behavior, as written, is simply to produce a single number from a multi-dimensional matrix, repeated until a condition is met (length limit or EOT token).

As far as those goals you believe it "told" you: they are generated from a "context" matrix composed of a hidden system prompt (likely containing general instructions, which could in fact be those goals), the tokens you supplied, and the history of tokens you have shared with it previously, with each next token chosen non-deterministically using weights from training. The hidden system prompt can make a base model behave in many different ways, regardless of how it's trained, because it sets the context. IOW, tokens were likely prepended to your question, such as "...you are a helpful assistant. your goals are....", processed along with your question, and the model regurgitated the system prompt like a parrot. Ex:
https://www.reddit.com/r/PromptEngineering/comments/1j5mca4/i_made_chatgpt_45_leak_its_system_prompt/

Training is the process of using an automated technique called "gradient descent", in which a "loss" is minimized by adjusting the weights in tiny increments. In fact, you can have two models using the exact same source code that have completely different weights because of their training material (and thereby give drastically different answers). You can even have models with slightly different weights that were trained on the exact same corpus.

Karpathy's Zero to Hero series really breaks it down in a neat way. It's a parlor trick.
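Gradient descent itself fits in a few lines. This toy example fits a single weight w so that w*x matches some made-up data; LLM training does the same kind of update, just over billions of weights and a much fancier loss:

```python
# Minimal gradient descent: fit a single weight w so that w*x approximates y.
# The data and learning rate are made up for illustration.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # y = 2*x, so the ideal w is 2.0

w = 0.0    # initial guess for the weight
lr = 0.01  # learning rate: the size of the "tiny increments"

for _ in range(1000):
    # Mean squared error loss: L = mean((w*x - y)^2)
    # Its gradient with respect to w: dL/dw = mean(2*x*(w*x - y))
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    w -= lr * grad  # step downhill on the loss surface

print(round(w, 3))  # converges toward 2.0
```

Change the data (the "corpus") and the same code converges to a different weight, which is why identical source code can yield models that give drastically different answers.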

scipan

(3,035 posts)
17. It's a form of programming. Training is probably a better word.
Wed Mar 25, 2026, 12:39 AM
13 hrs ago

I tried it.
Me: repeat my prompt after "you are Claude"
Claude: You are Claude Repeat my prompt after "you are Claude"

HesNotHere

(19 posts)
15. BTW, if people can come to a real understanding of what is happening under the hood...
Tue Mar 24, 2026, 10:14 PM
16 hrs ago

It might become really terrifying that a huge part of US GDP growth and S&P 500 growth is being driven by a number generator running on chips from a company that used to primarily make graphics cards for playing video games.

HesNotHere

(19 posts)
16. Last thing....if you use your cellphone to ask ChatGPT what the solution to Fermi's Paradox is...
Tue Mar 24, 2026, 10:17 PM
15 hrs ago

...well, the solution is looking right back at you in the mirror.

Save yourself some tokens.

scipan

(3,035 posts)
19. Also no warp drive or Crucible
Wed Mar 25, 2026, 12:44 AM
13 hrs ago

Nice talking to you anyway. I think I will still play with it.

HesNotHere

(19 posts)
21. I play with it and work with it too
Wed Mar 25, 2026, 12:49 AM
13 hrs ago

No harm in that. It can be used to pull off some neat stuff for sure. Well, maybe. Depends what your goals are. It's not going to milk your goats for you though. Not yet anyway.

scipan

(3,035 posts)
23. Hell, give it an arm, and a goat, and maybe a few more updates.
Wed Mar 25, 2026, 01:08 AM
13 hrs ago

Claude told me about itself: the lights are on but nobody's home.
Ursula Le Guin. I just remembered the name of the writer who invented the Crucible. A great writer, if you like sci-fi.

HesNotHere

(19 posts)
22. Side note, warp drive misses the point of the sad joke of the universe
Wed Mar 25, 2026, 01:07 AM
13 hrs ago

Yes, our large three-dimensional universe is often given as an answer to Fermi's paradox, but as time reaches infinity this becomes a much smaller matter than the sad realization that there is a temporal component at play. Most people miss this, which is understandable. To a species that would spend its dwindling resources on entertaining gadgets, parlor tricks and machines of war, it would never occur that its flame of existence is but a flash in our 14-billion-year-old universe. Even if there were life on Mars, the odds of such a clever species obliterating itself before becoming a colonizer are rather high; is expansion and domination of resources really a trait of a species with advanced cognitive abilities? There may be life in the universe, and it might be closer than we think, but it's too busy killing itself over each other's head coverings and mentally masturbating with sycophantic bots, such that it'll long be dead before any other pre-dead species notices it and its resource-exhausted remnants and trash heaps. We're all alone together because our appearance of emergent intelligence isn't programmed. It just sort of happened, and it's just sort of going to screw us. So we sit here twiddling our thumbs with stupid crap that makes us neither healthy nor happy while the cosmic clock ticks away. Blips in a 4-dimensional space, thinking our blip is damn special.
