General Discussion
AI-chatbot search won't be a lie detector. It's a friendly, authoritative-sounding bullshit spreader
From Business Insider a few days ago - and well worth reading in its entirety. Especially by anyone who already believes AI will conveniently save them time by doing their internet searches and research for them.
https://www.businessinsider.com/ai-chatbots-chatgpt-google-bard-microsoft-bing-break-internet-search-2023-2
For one thing, chatbots lie. Not on purpose! It's just that they don't understand what they're saying. They're just recapitulating things they've absorbed elsewhere. And sometimes that stuff is wrong. Researchers describe this as a tendency to "hallucinate," "producing highly pathological translations that are completely untethered from the source material." Chatbots, they warn, are inordinately vulnerable to regurgitating racism, misogyny, conspiracy theories, and lies with as much confidence as the truth.
-snip-
An early example of what we're in for: A wag on Mastodon who has been challenging chatbots asked a demo of a Microsoft model trained on bioscience literature whether the antiparasitic drug ivermectin is effective in the treatment of COVID-19. It simply answered "yes." (Ivermectin is not effective against COVID-19.) And that was a known-item search! The wag was looking for a simple fact. The chatbot gave him a nonfact and served it up as the truth.
Sure, an early demo of Bing's new search bot provides traditional links-'n'-boxes results along with the AI's response. And it's possible that Google and Microsoft will eventually figure out how to make their bots better at separating fact from fiction, so you won't feel the need to check their work. But if algorithms were any good at spotting misinformation, then QAnon and vaccine deniers and maybe even Donald Trump wouldn't be a thing or, at least, not as much of a thing. When it comes to search, AI isn't going to be a lie detector. It's going to be a very authoritative and friendly-sounding bullshit spreader.
-snip-
But the really dangerous part is that the chatbot's conversational answers will obliterate a core element of human understanding. Citations (a bibliography, a record of your footsteps through an intellectual forest) are the connective tissue of inquiry. They're not just about the establishment of provenance. They're a map of replicable pathways for ideas, the ligaments that turn information into knowledge. There's a reason it's called a train of thought; insights come from attaching ideas to each other and taking them out for a spin. That's what an exploratory search is all about: figuring out what you need to know as you learn it. Hide those pathways, and there's no way to know how a chatbot knows what it knows, which means there's no way to assess its answer.
-snip-
old as dirt
(1,972 posts)No time to read now, but the whole idea that somebody might have thought otherwise makes me chuckle.
AlphaGo isn't a lie detector, either, just in case anybody was confused.
But, even so, it's still pretty cool.
Renew Deal
(85,192 posts)take you to a website where someone is actively lying. The difference is that chat thinks it's giving you a good answer (and often does), while search engines index websites full of false info and send people there, especially if that's where they want to be.
old as dirt
(1,972 posts)...including here on this site.
In quite serious and authoritative tones, not as jokes.
EarlG
(23,641 posts)
dalton99a
(94,267 posts)
patphil
(9,085 posts)All you need do is chat up the lie you want disseminated, and the AI search will transform it into truth.
AI lacks the ability to question what it finds; it just acquires internet "facts" and packages them, giving them an aura of respectability.
This is extremely dangerous.
edisdead
(3,396 posts)Like googling for information hasn't just been confirmation bias all along.
Or watching cable news.
Or reading newspapers that have become biased.
Or just chatting with your friends.
Not defending or damning the AI but there have always been lazy and/or stupid people. And there always will be.