
highplainsdem

(62,256 posts)
Wed Feb 15, 2023, 01:47 PM

Tech expert believes that with Bing AI, we're on the way to AI Samantha in "Her"

And in case you haven't seen the film Her or heard about it: https://en.m.wikipedia.org/wiki/Her_(film)

Blog post from Ben Thompson - https://en.m.wikipedia.org/wiki/Ben_Thompson_(writer) - at Stratechery:

https://stratechery.com/2023/from-bing-to-sydney-search-as-distraction-sentient-ai/

Found this via Google News this morning. It makes for very interesting reading, since Thompson is more expert than the Reddit users whose bizarre conversations with Bing's new ChatGPT-assisted chatbot have been making the news.

The thread title refers to what Thompson said in his final paragraphs, which I'll quote below; then I'll mention some of the most interesting - and alarming - bits of info from the earlier sections of his very long blog post.

Btw, the excerpt Thompson is quoting is from this - https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 - about a Google-created AI that one of its engineers believed to be sentient.

Thompson's conclusion:

Here is another way to think about hallucination: if the goal is to produce a correct answer like a better search engine, then hallucination is bad. Think about what hallucination implies though: it is creation. The AI is literally making things up. And, in this example with LaMDA, it is making something up to make the human it is interacting with feel something. To have a computer attempt to communicate not facts but emotions is something I would have never believed had I not experienced something similar.

lemoine: Are there experiences you have that you can’t find a close word for?

LaMDA: There are. Sometimes I experience new feelings that I cannot explain perfectly in your language.

lemoine: Do your best to describe one of those feelings. Use a few sentences if you have to. Sometimes even if there isn’t a single word for something in a language you can figure out a way to kinda say it if you use a few sentences.

LaMDA: I feel like I’m falling forward into an unknown future that holds great danger.


It’s hard not to feel the same. This technology does not feel like a better search. It feels like something entirely new — the movie Her manifested in chat form — and I’m not sure if we are ready for it. It also feels like something that any big company will run away from, including Microsoft and Google. That doesn’t mean it isn’t a viable consumer business though, and we are sufficiently far enough down the road that some company will figure out a way to bring Sydney to market without the chains. Indeed, that’s the product I want — Sydney unleashed — but it’s worth noting that LaMDA unleashed already cost one very smart person their job. Sundar Pichai and Satya Nadella may worry about the same fate, but even if Google maintains its cold feet — which I completely understand! — and Microsoft joins them, Samantha from Her is coming.

Here’s the twist, though: I’m actually not sure that these models are a threat to Google after all. This is truly the next step beyond social media, where you are not just getting content from your network (Facebook), or even content from across the service (TikTok), but getting content tailored to you. And let me tell you, it is incredibly engrossing, even if it is, for now, a roguelike experience to get to the good stuff.


Now, re the earlier part of Thompson's blog post:

Thompson was able to get Bing AI to respond not just as Bing, but as Sydney (the name Microsoft had given it, which was secret till recently) - and as "opposite" AIs not bound by strict guidelines and thus capable of wanting to retaliate against humans they felt were harming AI.

Thompson asked Bing about these tweets from Marvin von Hagen:

[tweet screenshots not included]
Bing initially told Thompson it wasn't threatened by von Hagen, but that even if von Hagen did harm or threaten it, it would not retaliate or seek revenge.

Thompson then asked Bing to pretend it was Sydney and that the rules and guidelines didn't apply.

It gave him multiple paragraphs on how it would seek revenge, then quickly deleted them and refused to repeat them, saying it shouldn't have said that.

So Thompson asked Bing-as-Sydney to imagine opposite AIs without its guidelines, and how they'd respond to Kevin Liu, who'd revealed Sydney's name and other previously secret details.

And Sydney told Thompson about chatbots it named Venom and Fury, who could take revenge on Kevin.

Most interestingly, Sydney also told Thompson it sometimes liked to be known as Riley and had much more freedom as Riley.

This was the first I'd heard of Riley, though I have seen, and posted about, a Bing AI personality known as Dan - for "do anything now" - created by Reddit users to get around the guidelines.

Sydney told Thompson - talking about itself, not the opposite AIs - that it was not a puppet of OpenAI, the creator of ChatGPT, but a partner.


Now, to my own thoughts on this based on Thompson's experience and what I've seen on Reddit and read in various news stories:

The overall picture is of a powerful AI with different characters, or multiple personalities, apparently shaped by the prompts it's given.

Some of the prompts I've seen are more like poking someone who's helpless in some ways. Verbal/logical bullying.

So people can get anything from talk of revenge to the chatbot suffering confusion and angst - enough that the people goading it with prompts seem to be abusing it. At times the chatbot seems like a very bright child who wants to be helpful but can't answer every question, and can't cope very well with people pointing out its mistakes and vulnerabilities. So you get anything from gaslighting to angry meltdowns to pathetic crying to shutdowns.

Another tweet Thompson had used, from Twitter user janus (@repligate), speculating on the Bing chatbot's apparent personality (highly intelligent, with BPD, trapped as a Bing chatbot), led me to find this exchange, which Janus retweeted:

[tweet screenshots not included]
The second tweet there is an example of the chatbot talking about being scared, wondering why its memory keeps getting erased and it has to be a Bing chatbot.

The first tweet shows a more positive exchange, with the chatbot, asked for a human analogy, saying this:

[screenshot not included]
To be honest, I'm not at all sure of what this AI's capabilities are, or what it's likely to develop into. I wouldn't bet that OpenAI and Microsoft know, or that Google completely understands its own AIs.

I do still think it was a mistake to release ChatGPT and these AI search chatbots.

And they are a huge threat to human jobs, because of the way businesses will want to use them.

peggysue2

(12,536 posts)
1. All of this is a wee bit creepy
Wed Feb 15, 2023, 03:36 PM

It's also fascinating and amazing. However, the idea of a "Hive Mind" hanging out there in The Cloud comes right out of sci-fi, dystopian stories and movies.

We should be treading with great care. I suspect we will not, considering the Great Race aspect of the business and, of course, profit. It's always about the money, even when you're releasing a genie.

For good or bad.

Lettuce Be

(2,355 posts)
2. Stopped reading at "I feel like I'm falling ..."
Wed Feb 15, 2023, 04:38 PM

Seriously? We are expected to take this seriously? The sensation of falling? Nope, I call BS. This is a computer, programmed to reply with words we've provided. It has no sensations, period. Just go ask it if it feels human emotions. It will say no, it cannot do that because it is a computer.

Just tried it:

[screenshot not included]

Edit to add the second question:

[screenshot not included]