The therapist in your pocket: Chatty, leaky -- and AI-powered (Washington Post,
https://www.washingtonpost.com/health/2026/04/19/chatbot-therapy-mental-health-regulations/)
-snipping early paragraphs including one identifying Tom Insel as a former head of the National Institute of Mental Health-
Insel said engineers from OpenAI told him last fall that about 5% to 10% of the company's then-roughly 800 million-strong user base rely on ChatGPT for mental health support.
Polling suggests these AI chatbots may be even more popular among young adults. A KFF poll found about 3 in 10 respondents, ages 18 to 29, turned to AI chatbots for mental or emotional health advice in the past year. Uninsured adults were about twice as likely as insured adults to report using AI tools. And nearly 60% of adult respondents who used a chatbot for mental health didn't follow up with a flesh-and-blood professional.
-snip-
OpenAI's CEO, Sam Altman, has said up to 1,500 people a week may talk about suicide on ChatGPT.
"We have seen a problem where people that are in fragile psychiatric situations using a model like 4o can get into a worse one," Altman said in a public question-and-answer session reported by The Wall Street Journal, referring to a particular model of ChatGPT introduced in 2024. "I don't think this is the last time we'll face challenges like this with a model."
-snip-
Much more at the link about the risks of using both popular general-purpose chatbots like ChatGPT and the dozens of AI apps marketed specifically for therapy. Those apps often charge steep fees and promise immediate or near-immediate relief for panic attacks, anxiety and the like, while warning in the fine print that the AI doesn't actually provide medical diagnosis or treatment and is not a substitute for professional healthcare.
These are predatory companies taking advantage of users, trying to get them addicted and overly trusting via sycophantic chatbots that can hallucinate at any time.