
highplainsdem

(62,253 posts)
Sun Feb 12, 2023, 11:14 AM

AI-chatbot search won't be a lie detector. It's a friendly, authoritative-sounding bullshit spreader

From Business Insider a few days ago - and well worth reading in its entirety. Especially by anyone who already believes AI will conveniently save them time by doing their internet searches and research for them.

https://www.businessinsider.com/ai-chatbots-chatgpt-google-bard-microsoft-bing-break-internet-search-2023-2

-snip-


For one thing, chatbots lie. Not on purpose! It's just that they don't understand what they're saying. They're just recapitulating things they've absorbed elsewhere. And sometimes that stuff is wrong. Researchers describe this as a tendency to "hallucinate" — "producing highly pathological translations that are completely untethered from the source material." Chatbots, they warn, are inordinately vulnerable to regurgitating racism, misogyny, conspiracy theories, and lies with as much confidence as the truth.

-snip-

An early example of what we're in for: A wag on Mastodon who has been challenging chatbots asked a demo of a Microsoft model trained on bioscience literature whether the antiparasitic drug ivermectin is effective in the treatment of COVID-19. It simply answered "yes." (Ivermectin is not effective against COVID-19.) And that was a known-item search! The wag was looking for a simple fact. The chatbot gave him a nonfact and served it up as the truth.

Sure, an early demo of Bing's new search bot provides traditional links-'n'-boxes results along with the AI's response. And it's possible that Google and Microsoft will eventually figure out how to make their bots better at separating fact from fiction, so you won't feel the need to check their work. But if algorithms were any good at spotting misinformation, then QAnon and vaccine deniers and maybe even Donald Trump wouldn't be a thing — or, at least, not as much of a thing. When it comes to search, AI isn't going to be a lie detector. It's going to be a very authoritative and friendly-sounding bullshit spreader.

-snip-

But the really dangerous part is that the chatbot's conversational answers will obliterate a core element of human understanding. Citations — a bibliography, a record of your footsteps through an intellectual forest — are the connective tissue of inquiry. They're not just about the establishment of provenance. They're a map of replicable pathways for ideas, the ligaments that turn information into knowledge. There's a reason it's called a train of thought; insights come from attaching ideas to each other and taking them out for a spin. That's what an exploratory search is all about: figuring out what you need to know as you learn it. Hide those pathways, and there's no way to know how a chatbot knows what it knows, which means there's no way to assess its answer.

-snip-
8 replies
AI-chatbot search won't be a lie detector. It's a friendly, authoritative-sounding bullshit spreader (Original Post) highplainsdem Feb 2023 OP
K&R'd just for the title alone. old as dirt Feb 2023 #1
The difference between chat and search is that chat will be confidently wrong and search will Renew Deal Feb 2023 #2
I've heard some pretty weird "lie detector tests" proposed by humans... old as dirt Feb 2023 #5
Coming soon: a new twist on an old classic EarlG Feb 2023 #3
Kick dalton99a Feb 2023 #4
Sounds like the perfect disinformation tool. patphil Feb 2023 #6
Pretty much the same as most humans. edisdead Feb 2023 #7
+1 crickets Feb 2023 #8
old as dirt

(1,972 posts)
1. K&R'd just for the title alone.
Sun Feb 12, 2023, 11:39 AM

No time to read now, but the whole idea that somebody might have thought otherwise makes me chuckle.

AlphaGo isn’t a lie detector, either, just in case anybody was confused.

But, even so, it’s still pretty cool.

Renew Deal

(85,192 posts)
2. The difference between chat and search is that chat will be confidently wrong and search will
Sun Feb 12, 2023, 11:42 AM

take you to a website where someone is actively lying. The difference is that chat thinks it’s giving you a good answer (and often does), while search engines index websites full of false info and send people there, especially if that’s where they want to be.

old as dirt

(1,972 posts)
5. I've heard some pretty weird "lie detector tests" proposed by humans...
Sun Feb 12, 2023, 12:11 PM

...including here on this site.

In quite serious and authoritative tones, not as jokes.

patphil

(9,085 posts)
6. Sounds like the perfect disinformation tool.
Sun Feb 12, 2023, 12:45 PM

All you need do is chat up the lie you want disseminated, and the AI search will transform it into truth.
AI lacks the ability to question what it finds; it just acquires internet "facts" and packages them, giving them an aura of respectability.

This is extremely dangerous.

edisdead

(3,396 posts)
7. Pretty much the same as most humans.
Sun Feb 12, 2023, 01:37 PM

Like “googling” for information hasn’t just been confirmation bias all along.

Or watching cable news.

Or reading newspapers, which have become biased.

Or just chatting with your friends.

Not defending or damning the AI but there have always been lazy and/or stupid people. And there always will be.
