
highplainsdem

(49,020 posts)
Thu Apr 18, 2024, 04:21 PM

AI-Powered World Health Chatbot Is Flubbing Some Answers

Source: Bloomberg

The World Health Organization is wading into the world of AI to provide basic health information through a human-like avatar. But while the bot responds sympathetically to users’ facial expressions, it doesn’t always know what it’s talking about.

SARAH, short for Smart AI Resource Assistant for Health, is a virtual health worker that’s available to talk 24/7 in eight different languages to explain topics like mental health, tobacco use and healthy eating. It’s part of the WHO’s campaign to find technology that can both educate people and fill staffing gaps with the world facing a health-care worker shortage.

WHO warns on its website that this early prototype, introduced on April 2, provides responses that “may not always be accurate.” Some of SARAH’s AI training is years behind the latest data. And the bot occasionally provides bizarre answers, known as hallucinations in AI models, that can spread misinformation about public health.

-snip-

Read more: https://www.bloomberg.com/news/articles/2024-04-18/who-s-new-ai-health-chatbot-sarah-gets-many-medical-questions-wrong



Found this thanks to this tweet from another Bloomberg reporter, Rachel Metz:




Rachel Metz
@rachelmetz

i asked SARAH, the World Health Organization's new AI chatbot, for medical help near me, and it provided an entirely fabricated list of clinics/hospitals in SF. fake addresses, fake phone numbers. check out @jessicanix_'s take on SARAH here:
https://bloomberg.com/news/articles/2024-04-18/who-s-new-ai-health-chatbot-sarah-gets-many-medical-questions-wrong?utm_source=website&utm_medium=share&utm_campaign=twitter via @business
9 replies

paleotn

(17,938 posts)
1. AI is overrated.
Thu Apr 18, 2024, 07:06 PM

Studies show that as AI algorithms feed on AI-generated information, they become no more creative than the average 5-year-old at best, and spit out gibberish at worst. Seems they devolve into Trump voters.

https://www.scientificamerican.com/article/ai-generated-data-can-poison-future-ai-models/

highplainsdem

(49,020 posts)
5. The good things GenAI can do are limited. The bad things it can do are almost unlimited, unfortunately, and
Fri Apr 19, 2024, 08:46 AM

are already causing huge problems.

GenAI chatbots are already in wide use and providing so much false information to customers that the federal government warned banks about using them.

ChatGPT, used as a free cheating tool, has made cheating by students much, much more common, wasting far more of teachers' time.

Deepfakes, especially porn deepfakes and political deepfakes, are much more common now.

There are many more websites set up just to pump out misinformation and get clicks, and these websites can add hundreds of articles a day apiece.

Platforms for releasing and selling books, art and music have been completely overwhelmed by GenAI submissions, often from people with no skill and no real interest in writing or art or music, who view throwing AI-generated fake creativity in all directions as a get-rich-quick scheme. The nonfiction books are typically filled with errors, some of them dangerous if their recommendations are followed. Amazon announced a while back that it would try to limit this tsunami of AI garbage by allowing these fake authors to upload "only" three books per day, but that's still potentially over 1,000 new AI-created garbage publications per year from each of these frauds. A lot of the books are published under pseudonyms similar or identical to famous authors' names, with titles similar or identical to titles those authors have used, in the hope of confusing potential buyers.

That's publishing. In music, by last year, one AI music company alone was boasting it had released 15 million tracks. The problem is much worse now.

In art and photography, AI images have flooded the platforms so much that a scientist and author I follow tweeted the other day that all the platforms he checked with told him they could no longer guarantee that their images were not AI. So he's considering hiring a professional if stock images can't be trusted to be genuine and created by humans.

I've seen AI-using fake artists online taunting real artists for needing more than seconds to create a work of art. I saw one tweet saying they were looking forward to artists writhing in agony as AI art crowds out real art on those online platforms. They're furious that they're not treated as real artistic geniuses for typing in a few words as a prompt and having the AI give them potentially thousands of different results they can choose from and claim they "created."

We Democrats are very aware of the culture war between left and right.

More and more, there's a culture war between those who value human work and talent and those who want to exploit GenAI, despite all of its flaws, to replace the talented and skilled humans whose work was stolen to train the GenAI models. GenAI companies are fighting lawsuits and lobbying to get copyright laws changed, at least to exempt AI being trained on stolen data, since those AI companies want their own intellectual property protected, as always.

I've been disgusted by how many individual users of GenAI, even after being told it works only because so much of its training data was stolen, just shrug off that vast theft as irrelevant if GenAI gives them even the slightest profit, use or amusement. Their attitude is a giant fuck-you to all the people whose carefully crafted work the AI companies stole.

jmowreader

(50,562 posts)
7. Adobe has gone all-in on generative AI
Tue Apr 23, 2024, 03:55 AM

I use Illustrator professionally...professional as in "someone pays me to do this" as well as "I make pretty pictures with this." A LOT. I've been using it for 35 years and I'm good at it.

One of the things I do with it is convert the Associated Press's full-page financial report graphic to a half-page graphic, which entails deleting quite a bit of it and moving around or resizing almost everything else on the page. In the most recent version of Illustrator, which I use, if you select a block of the graphic, for some reason a little tool pops up asking if you want to use AI on it...and YOU CANNOT TURN THIS OFF.

Now for the $1 million question: what is the last program in the world you'd ever expect to see AI in? If you said "Acrobat" - which doesn't even have the capacity to generate a file; you can manipulate one by changing the color space, adding or removing pages, converting fonts to outlines and so on, but you can't use Acrobat to make a file that has never existed before; that's done in another program in the suite - you would be quite correct...but the current version of Acrobat has full generative AI capability.

highplainsdem

(49,020 posts)
8. Too many companies have gone all-in on GenAI, in this bubble, and
Tue Apr 23, 2024, 06:56 AM

Adobe is losing credibility with artists, judging by what I'm seeing online:

https://www.democraticunderground.com/10143224711

Part of what's going on is obviously FOMO, but I think a lot of it is pressure from the AI companies, especially Microsoft and Google, to get GenAI used so widely in so many sectors of the economy and society that it will be their main defense against being required by courts and governments to get rid of those AI models and train new ones on completely legal datasets.

A news story came out yesterday about an Amazon exec who was fired in part for objecting to Amazon's copyright violations. She was told to ignore those concerns because "everyone" in AI was doing it.

https://www.theregister.com/2024/04/22/ghaderi_v_amazon/

jmowreader

(50,562 posts)
9. Trust me on this...
Tue Apr 23, 2024, 04:46 PM

If I could possibly get rid of InDesign, Illustrator and Photoshop, I would do it in a second.

The problem is that all the open-source apps on the market are RGB-only and don't have PANTONE licenses - two things I simply MUST have in my work. I've gotta have InDesign because the newspaper industry is standardized on it, but I only use it when I have to; I do all the work I possibly can in QuarkXPress instead. I'm probably the only person in the country who uses QuarkXPress as a database - it's not ideal, but it works.

Aussie105

(5,415 posts)
6. AI should not be trusted as yet.
Fri Apr 19, 2024, 09:10 AM

I asked AI some computer-related questions I already knew the answers to, and got info that was either out of date and misleading, or just wrong.

To be expected. The AI learning model needs to be constantly fed with updated information, and outdated information purged.

"i asked SARAH, the World Health Organization's new AI chatbot, for medical help near me, and it provided an entirely fabricated list of clinics/hospitals in SF. fake addresses, fake phone numbers."

In this case, the framework for answering the question was probably set up but not yet populated with locale-specific data.
The blank spaces were just filled in with seemingly appropriately shaped but irrelevant data.
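Roughly, the fix would be to ground those answers in a verified directory and admit when it's empty, rather than letting the model free-generate. Here's a minimal sketch of that idea in Python - the directory, its one entry and the wording are made-up placeholders, not anything SARAH actually uses:

# Minimal sketch: answer only from a verified directory instead of free generation.
# The directory and its single entry are hypothetical placeholders, not real data.
VERIFIED_CLINICS = {
    "geneva": ["Example Clinic, 1 Rue de l'Exemple, +41 00 000 00 00"],
    # "san francisco" is deliberately absent: that locale hasn't been loaded yet.
}

def find_clinics(city):
    entries = VERIFIED_CLINICS.get(city.strip().lower())
    if not entries:
        # A grounded system admits the gap instead of inventing
        # plausible-looking addresses and phone numbers.
        return "Sorry, I don't have verified clinic data for that area yet."
    return "\n".join(entries)

print(find_clinics("San Francisco"))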

Can AI trawl all forms of information out in the real world by itself and organise it so that the answers it gives are accurate and up to date?
Most likely, but we aren't there yet.

