
erronis

(22,388 posts)
Fri Dec 12, 2025, 05:07 PM Friday

'DeepSeek is humane. Doctors are more like machines': my mother's worrying reliance on AI for health advice

https://www.theguardian.com/news/audio/2025/dec/12/deepseek-is-humane-doctors-are-more-like-machines-my-mothers-worrying-reliance-on-ai-for-health-advice-podcast

Tired of a two-day commute to see her overworked doctor, my mother turned to tech for help with her kidney disease. She bonded with the bot so much I was scared she would refuse to see a real medic

This essay was originally published on Rest of World.
By Viola Zhou


Every few months, my mother, a 57-year-old kidney transplant patient who lives in a small city in eastern China, embarks on a two-day journey to see her doctor. She fills her backpack with a change of clothes, a stack of medical reports and a few boiled eggs to snack on. Then, she takes a 90-minute ride on a high-speed train and checks into a hotel in the eastern metropolis of Hangzhou.

At 7am the next day, she lines up with hundreds of others to get her blood taken in a long hospital hall that buzzes like a crowded marketplace. In the afternoon, when the lab results arrive, she makes her way to a specialist’s clinic. She gets about three minutes with the doctor. Maybe five, if she’s lucky. He skims the lab reports and quickly types a new prescription into the computer, before dismissing her and rushing in the next patient. Then, my mother packs up and starts the long commute home.

DeepSeek treated her differently.

My mother began using China’s leading AI chatbot to diagnose her symptoms this past winter. She would lie down on her couch and open the app on her iPhone.

“Hi,” she said in her first message to the chatbot, on 2 February.

“Hello! How can I assist you today?” the system responded instantly, adding a smiley emoji.


. . .


10 replies

jfz9580m

(16,445 posts)
1. Gross -DeepSeek disgusts me
Fri Dec 12, 2025, 05:49 PM Friday

I am furious because I have only lately started to suspect that something I didn’t understand, something that derailed my life, may have been tied to AI.

I have been fortunate enough to never be literally poor, but I have been very psychologically poor in the last 14 years. I have felt (as I can now see) like nothing more than a host to be cannibalized for the sake of worthless junk AI competing with the human brain for control.

It is the most worthless trash imaginable.
It has just started reversing. And I am extremely angry. I have to draft a post for DU Activist HQ and go back to behaving like a normal human being.

I was a scrambled mess. Nothing to do with mental health and everything to do with politics.

I have been reproducing the self-centred, generally brainless and insane-seeming behaviours of the parasitic technology forced on me by parasitic humans in or adjacent to AI/similar tech.

Or I am starting to suspect that that is what it was. I am livid.



What is so insulting is this childish crap where a schoolyard bully hits you with your own hand and says, “Why are you hitting yourself?”
If this thing was following one around and “mirroring” one in the shallowest ways possible (after all, what else is it capable of?), it would hobble your own learning and maturation as a human. And if, like me, you had no intention of cooperating with such rubbish, it would be as if this thing’s almost certain idiocy was somehow something you had programmed it with, when you weren’t even knowingly interacting with it and just thought you were being chased around by adware, while it was actually adware on steroids. Fancier rubbish.
Infuriating. Who the hell would consider something built by the kinds of people at Stanford, MIT, Google, OpenAI etc anything but complete garbage when it is not basic non-interactive tech.

Human intelligence is complicated. The people whom you find in EECS at those places are across the board people I would avoid. They are vapid, dead eyed Pluribus types. There are no humans at a place like Stanford, MIT, Harvard, Yale or say Dartmouth (as a random example) that anyone sane could possibly not hate. Well okay maybe that’s ott. But wth do they mean by recruiting unsuspecting humans into their stupid, vapid, cultish, creepy drivel?

snot

(11,412 posts)
2. Please excuse if this is a tangent, but
Fri Dec 12, 2025, 06:13 PM Friday

I was thinking just recently of how far my experiences with health care in the US have deteriorated since maybe 20 years ago. In the old days, the best doctors seemed to genuinely care about understanding your disorder and effectively addressing it. Now, even though I'm referred to some of the "best" doctors, I feel like they're just going through the motions, completely incurious about any aspects that might be relevant but aren't on some checklist... it's almost as if they were preparing us for their own replacement by AIs.

It also seems to take MUCH longer to get in to see a doctor now (I've had to wait as long as 4 months, even in a major metropolitan area), which of course leaves more time for one's condition to worsen and become more complicated. Has the number of docs per capita declined that much?

Docs also seem more specialized. What I used to get done through one specialist now takes two or three; and you have to get a referral from #1 to even make an appointment with #2, and so on – often further delaying any actual solution to the problem.

It all seems FAR less efficient than it used to be; and I'd think the work is far less interesting for the docs. I think some of these trends are probably driven by our corporate health care & insurance system in the US.

erronis

(22,388 posts)
4. I totally agree. And my cardiologist doesn't want to know about my blood cancer.
Fri Dec 12, 2025, 07:17 PM Friday

Not in his wheelhouse.

As someone said to me yesterday, doctors (and the industry) "objectify" the patients. Treat them as items to be dealt with.

appalachiablue

(43,782 posts)
6. Also PE, private equity: for-profit investors buy up doctors' offices, clinics, hospitals and more; cut staff and services
Fri Dec 12, 2025, 07:25 PM Friday

hlthe2b

(112,517 posts)
3. I am appalled at how much personal, identifying information--including detailed medical info--patients are posting to AI
Fri Dec 12, 2025, 06:51 PM Friday

with essentially zero protections for that data. Everything can be used against them, whether for scams, identity theft, blackmail (in some cases), and obviously urging them to do or purchase things that can harm/kill them!

I admit. I never saw this coming SO FAST. I do not know how to fight it. The most educated and already skeptical will listen to me, but that is NOT the majority.

erronis

(22,388 posts)
5. Along with musk/doge stealing all the data they could from the US health agencies.
Fri Dec 12, 2025, 07:20 PM Friday

Per your last sentence: The sheep don't WANT to know what's happening. They'll follow the flock until they become food or fertilizer.

jfz9580m

(16,445 posts)
8. There is no option but for serious legal pushback
Fri Dec 12, 2025, 11:46 PM Friday

Against the whole industry that is big data, AI etc.

I am appalled at how poor the data hygiene is even where people have not shared any damn thing with AI.

And that is because people across the board in those industries started introducing AI agents and AI, and sharing data with data miners, without ever asking anyone (patients, students etc), while slowly “normalizing” the notion that if you use anything from email to a few Internet message boards privately, nothing is private.

Iow, essentially any digital use at all is a free-for-all, and they expect to get away with blaming the victim (ie the person whose data is mined).

I see this all the time. As if common expectations of decency were somehow people being “suckers” and not getting some 20-25 years ago that “data is oil”.

This is on companies like Google, Facebook, OpenAi etc and anyone who colluded with them in hospital and university administrations.

It is not the fault of people who never even used anything more than email or message boards. That was me 14 years ago and once I started noticing how ott this is, it was hard to go back to casual acceptance of it.

I had assumed that these things are of no interest to anyone. I get the basic model of the web, but companies like DuckDuckGo manage without being intrusive and predatory. And some of these probably should only be a state service or non-profit, as heretical as that might seem to any populace that doesn’t see itself as the property of those with any power or access or money.

I never used social media and the only bot I have ever used is an obscure British bot.

This is not the way to roll out ubiquitous computing, if that is the umbrella term. If data was oil, consider the climate change here once the less meek citizenry or visitors find out and put it together. I myself have been angry about this increasing encroachment for 14, maybe 15, years now.

Appalachiablue is quite right about private equity. These economies are increasingly encroaching into spaces none of us ever thought to protect. They do it with the same muscular contempt for common-sense norms of privacy and decency, for informed consent, and for any conception of IRB oversight (in what is definitely a large-scale human-science experiment) that was not cobbled together in a back alley.

The push seems to be to legitimise the notion that it is never anyone’s job to make connections the way many tech critics like Yasha Levine or Ed Zitron do. And to never be anything but bone-lazy and predatory and parasitic, while blaming the stupidity of the humans who are increasingly carcasses, as Shoshana Zuboff put it, or mere waste products if not “usable” and herdable.

The implicit message I see is that it is entirely up to the person existing in this shit milieu to not be a nuisance to these data miners, AI “researchers” and any shrinks or other professionals who jumped to exploit trends and fads in data mining and AI without asking what their own responsibilities are.

That is completely self-serving. It is not the fault of people who simply use the net or by now exist.

And the better educated or better informed who find out or know have an obligation to the broader public. And it’s a constant ransomware threat.

Strict data privacy protections were and remain the job of people higher up in these food chains, not patients, students or employees. YOYO has gone on for too long.

People accepted it with resignation because where that would end up going was less clear.

Now with physical AI and streams and street traffic manipulation all with no discussion of these things with the broader public, it has become too ott to accept. All with a slew of garbage cottage industries spawning from the original nuisance.

I don’t really understand many of those concepts much myself, but I do understand that it is exploitation. And that it is time to push back.

I don’t have use for the type of litigation that Ellen Pao was attempting, but I do for what lawyers like Alec Karakatsanis or Lina Khan would back.
This is about civil rights and women’s rights and labor and environmental protections and more than a joke notion of being a citizen who can participate as they choose to in what affects their own home or street.

jfz9580m

(16,445 posts)
9. This article makes me appreciate my doctors here
Sat Dec 13, 2025, 04:27 AM Saturday

My mother has told me that whenever she steps into her nephrologist’s office, she feels like a schoolgirl waiting to be scolded. She fears annoying the doctor with her questions. She also suspects that the doctor values the number of patients and earnings from prescriptions over her wellbeing.


None of my doctors have ever been anything like this. It’s why I have a fair amount of faith in doctors (outside psychiatry - the few doctors I have had negative experiences with have been psychiatrists, and even there it is 50-50. I didn’t like the shrinks with my sleazy former employer in the US. But the psychiatrist outside the school was very cool, like the therapist. I disliked the two boorish shrinks I came across here in 2012 as well, but not the shrink with my main hospital. She was decent). And while I have cobbled together a DIY mental health strategy, it is AI-free, and I would still prefer my original shrink in the US, who prescribed Adderall which actually helped, over my AI-shilling sleazy former employer.

A documentary I found via the inimitably awesome Yasha Levine, made it easier to grasp what the deal with the Rosenhan Experiment actually was:

Another strand in the documentary is the work of R. D. Laing, whose work in psychiatry led him to model familial interactions using game theory. His conclusion was that humans are inherently selfish and shrewd and spontaneously generate stratagems during everyday interactions. Laing's theories became more developed when he concluded that some forms of mental illness were merely artificial labels, used by the state to suppress individual suffering. This belief became a staple tenet of counter-culture in the 1960s. Reference is made to an experiment run by one of Laing's students, David Rosenhan, in which bogus patients, self-presenting at a number of American psychiatric institutions, were falsely diagnosed as having mental disorders, while institutions, informed that they were to receive bogus patients, misidentified genuine patients as imposters. The results of the experiment were a disaster for American psychiatry, because they destroyed the idea that psychiatrists were a privileged elite that was genuinely able to diagnose, and therefore treat, mental illness.
Curtis credits the Rosenhan experiment with the inspiration to create a computer model of mental health. Input to the program consisted of answers to a questionnaire. Curtis describes a plan of the psychiatrists to test the computer model by issuing questionnaires to "hundreds of thousands" of randomly selected Americans. The diagnostic program identified over 50% of the ordinary people tested as suffering from some kind of mental disorder. According to Jerome Wakefield, who refers to the test as "these studies", the results it found were viewed as a general conclusion that "there is a hidden epidemic." Leaders in the psychiatric field never addressed whether the computer model was being tested or used without having been validated in any way, but rather used the model to justify vastly increasing the portion of the population they were treating.

Google’s Thomas Insel would totally be a shill for some awful new AI and data driven or VR shilling bs for mental health. A pox on all these creeps.

I am wary of hospital admins - they are usually not doctors anyway. They have MBAs. That’s where the profit driven crap comes in.

And doctors who are influencers..yeah that I would avoid. I love my onc. He is the opposite of that and such a good doctor. He is pretty overworked though.

This article repeats a lot of outdated narratives about aging societies and reads more like a pitch for AI than not. That’s an ad dressed up as a humblebrag. Doctors here are overworked because of overpopulation. You can’t train doctors at the same pace at which people have kids. So the idea is more growth of this noxious kind, with stolen medical and academic data and theft from our living and working spaces. More formal discussion of family planning in public health, without eugenics, would help. Unsustainable, this Ponzi-scheme worldview. Even though Steve Chu works at the execrable Stanford (which no decent human should attend going forward), he got that part right.

This is why I am filing complaints about this stuff from the last 14-15 years. I am drafting a post about it for DU Activist HQ to initiate pushback.