General Discussion
NYT's technology columnist: Bing's AI chatbot isn't ready for human contact & we're not ready for it
I just found an additional column from Kevin Roose, whose two-hour chat with ChatGPT-assisted Bing was published in its entirety in the Times this morning. The transcript is at https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html and archived at https://archive.ph/zrFXK . I posted about it already in a reply in a thread where the OP was about a different conversation with Bing - see https://democraticunderground.com/100217652819 and my reply 5 there at https://democraticunderground.com/100217652819#post5 - and Nevilledog posted an OP about the transcript later at https://democraticunderground.com/100217653359 .
Roose's additional thoughts on the encounter and Bing AI deserved the extra column, and another OP.
The column - "A Conversation With Bing's Chatbot Left Me Deeply Unsettled" - is at https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html and https://archive.ph/fOqoG .
But a week later, I've changed my mind. I'm still fascinated and impressed by the new Bing, and the artificial intelligence technology (created by OpenAI, the maker of ChatGPT) that powers it. But I'm also deeply unsettled, even frightened, by this A.I.'s emergent abilities.
It's now clear to me that in its current form, the A.I. that has been built into Bing (which I'm now calling Sydney, for reasons I'll explain shortly) is not ready for human contact. Or maybe we humans are not ready for it.
-snip-
Still, I'm not exaggerating when I say my two-hour conversation with Sydney was the strangest experience I've ever had with a piece of technology. It unsettled me so deeply that I had trouble sleeping afterward. And I no longer believe that the biggest problem with these A.I. models is their propensity for factual errors. Instead, I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.
-snip-
BannonsLiver
(20,609 posts)

highplainsdem
(62,256 posts)

highplainsdem
(62,256 posts)
These are in reverse chronological order:
https://democraticunderground.com/100217651596
https://democraticunderground.com/100217650117
https://democraticunderground.com/100217649042
https://democraticunderground.com/100217648934
https://democraticunderground.com/100217647136
https://democraticunderground.com/100217640305
https://democraticunderground.com/100217635531
https://democraticunderground.com/100217631168
https://democraticunderground.com/100217630988
EYESORE 9001
(29,743 posts)Media seem to be in a big rush to push the notion that these programs have developed feelings and aspirations. Its just acting on its programming - not its emotions.
highplainsdem
(62,256 posts)
why their chatbot is giving these responses.
And Bing AI has experts who don't work for Microsoft worried.
These stories started with reports on Reddit of strange conversations with Bing, which is when real experts started checking it out.
And btw, the big rush from the media at first was to hype Bing AI, based on a demo where the chatbot made numerous mistakes THAT WEREN'T CAUGHT AT THE TIME - so Microsoft didn't see its stock lose value, as Google did after its Bard chatbot showed it could make mistakes.
I've wondered what was wrong with the chatbot since I saw a CBS Mornings news story where the chatbot made a mistake and covered it up with a lie, which the Microsoft rep and the CBS reporter didn't catch immediately. That's at the end of the list of links in reply 2.
EYESORE 9001
(29,743 posts)
Until they start self-replicating, we're golden.
highplainsdem
(62,256 posts)
is underwater.
EYESORE 9001
(29,743 posts)
friend of a friend
(367 posts)Humans are programmed from birth by their parents and later by everything they see and hear.
highplainsdem
(62,256 posts)
chief technology officer doesn't know what's going on with Bing AI (see reply 5) is just as worrisome as some of Bing's statements.
They obviously did not test it enough before releasing it.
They should pull the plug and end this public testing, where so far they're dismissing all the mistakes and weirdness as the AI just learning.
friend of a friend
(367 posts)
People act a certain way because of outside influence, starting with their parents. Have you ever heard the song "You've Got to Be Carefully Taught"?