
LudwigPastorius

(9,110 posts)
Thu Sep 15, 2022, 11:24 PM Sep 2022

Oxford researchers: Superintelligent AI is "likely" to cause an existential catastrophe for humanity

https://www.vice.com/en/article/93aqep/google-deepmind-researcher-co-authors-paper-saying-ai-will-eliminate-humanity

The most successful AI models today are known as GANs, or Generative Adversarial Networks. They have a two-part structure where one part of the program is trying to generate a picture (or sentence) from input data, and a second part is grading its performance. What the new paper proposes is that at some point in the future, an advanced AI overseeing some important function could be incentivized to come up with cheating strategies to get its reward in ways that harm humanity.
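The two-part structure described above can be sketched in a toy form. The example below is a hypothetical illustration (not code from the paper or from any production system): a one-parameter "generator" learns to shift random noise toward a simple 1-D Gaussian, while a logistic "discriminator" grades real samples against generated ones.

```python
import numpy as np

# Toy GAN sketch (illustrative only; all numbers are invented).
# Real data ~ N(4, 1). The generator g(z) = z + b learns a shift b;
# the discriminator D(x) = sigmoid(w*x + c) grades real vs. generated.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

b = 0.0           # generator parameter (shift applied to the noise)
w, c = 0.0, 0.0   # discriminator parameters
lr = 0.05

for _ in range(2000):
    real = 4.0 + rng.standard_normal(64)
    fake = rng.standard_normal(64) + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake) (the "non-saturating" loss)
    fake = rng.standard_normal(64) + b
    d_fake = sigmoid(w * fake + c)
    b += lr * np.mean((1 - d_fake) * w)

print(round(b, 2))  # the learned shift should drift toward 4
```

The adversarial part is the alternation: each player's update makes the other's job harder, and the generator only "wins" by producing samples the discriminator can no longer tell from real data.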

“Under the conditions we have identified, our conclusion is much stronger than that of any previous publication—an existential catastrophe is not just possible, but likely,” Cohen said on Twitter in a thread about the paper.

"In a world with infinite resources, I would be extremely uncertain about what would happen. In a world with finite resources, there's unavoidable competition for these resources," Cohen told Motherboard in an interview. "And if you're in a competition with something capable of outfoxing you at every turn, then you shouldn't expect to win. And the other key part is that it would have an insatiable appetite for more energy to keep driving the probability closer and closer."

The paper envisions life on Earth turning into a zero-sum game between humanity, with its needs to grow food and keep the lights on, and the super-advanced machine, which would try and harness all available resources to secure its reward and protect against our escalating attempts to stop it. “Losing this game would be fatal,” the paper says. These possibilities, however theoretical, mean we should be progressing slowly—if at all—toward the goal of more powerful AI.


For example, the Paperclip Maximizer thought experiment:

https://www.lesswrong.com/tag/paperclip-maximizer
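The thought experiment can be caricatured in a few lines of code. This toy loop (purely illustrative, with made-up quantities) shows the core failure mode: an agent that optimizes a single metric, with no term in its objective for anything else, consumes every resource it can reach.

```python
# Paperclip-maximizer caricature (illustrative; all quantities invented).
# The agent's objective counts only paperclips; nothing in it assigns
# value to the other uses those resources had.
resources = {"steel": 50, "farmland": 30, "power_grid": 20}  # arbitrary units

paperclips = 0
for name in list(resources):
    paperclips += resources[name]   # every unit converted to paperclips
    resources[name] = 0             # nothing reserved for human needs

print(paperclips, resources)
```

The point is not the arithmetic but the objective: because "leave resources for humans" never appears in the reward, the optimal policy never does it.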
16 replies
Oxford researchers: Superintelligent AI is "likely" to cause an existential catastrophe for humanity (Original Post) LudwigPastorius Sep 2022 OP
Don't these scientists go to the movies? PJMcK Sep 2022 #1
Google and others are going 'full speed ahead' with AI research, despite saying all the right things LudwigPastorius Sep 2022 #3
I just listened to a podcast by a futurist writer who believes our Hassin Bin Sober Sep 2022 #8
There will be amoral programs, but how do we begin to create the moral ones that would oversee them? LudwigPastorius Sep 2022 #10
K&R, link too. yonder Sep 2022 #2
Honestly, these are the issues we should be paying attention to intrepidity Sep 2022 #4
I've still got the greatest enthusiasm and confidence in the mission Dave. Frasier Balzov Sep 2022 #5
Good read. LudwigPastorius Sep 2022 #9
Sounds like a real life Squid Game. live love laugh Sep 2022 #6
K&R liberalla Sep 2022 #7
A writer's perspective on what's coming... LudwigPastorius Sep 2022 #11
Why would such a machine stay on earth when it's got the entire solar system? hunter Sep 2022 #12
Ultimately, sure. LudwigPastorius Sep 2022 #15
Open pod bay doors HAL. Hotler Sep 2022 #13
HAL nt XanaDUer2 Sep 2022 #14
The Cetacean Translation Initiative is using AI to try to learn how to speak to whales. LudwigPastorius Sep 2022 #16

PJMcK

(21,998 posts)
1. Don't these scientists go to the movies?
Thu Sep 15, 2022, 11:30 PM
Sep 2022

The Terminator films are a case in point, for cryin' out loud. I, Robot is another example.

Jeez, is this really that hard to comprehend? Or am I just becoming a Luddite?

LudwigPastorius

(9,110 posts)
3. Google and others are going 'full speed ahead' with AI research, despite saying all the right things
Thu Sep 15, 2022, 11:42 PM
Sep 2022

about controlling their creations.

Just one of the current problems is that some machine learning models are a 'black box': they are so complex that even their designers can't fully explain how they arrive at their outputs.

This is creepy. An AI art generator continues to produce a similar face over and over when it shouldn't.


https://techcrunch.com/2022/09/13/loab-ai-generated-horror/

Hassin Bin Sober

(26,315 posts)
8. I just listened to a podcast by a futurist writer who believes our
Fri Sep 16, 2022, 08:55 AM
Sep 2022

Downfall will come from Wall Street. That’s where the bulk of the money is being spent on AI technology right now - more than at any university.

Wall Street is creating ruthless, self-acting, self-serving, and secretive programs designed to take and take.

He says we can always hire more computers to keep the other computers in line. He equates it with hiring lawyers when you are being attacked by lawyers.

LudwigPastorius

(9,110 posts)
10. There will be amoral programs, but how do we begin to create the moral ones that would oversee them?
Fri Sep 16, 2022, 03:32 PM
Sep 2022

The root problem is coding "human values" into a program.

Some AI ethicists think that's what will prevent an artificial super intelligence from acting against us, but we can't even agree on what our values are.

The Trolley Problem is just one example of an ethical dilemma that cannot be definitively answered...much less rendered into an algorithm.

hunter

(38,304 posts)
12. Why would such a machine stay on earth when it's got the entire solar system?
Sat Sep 17, 2022, 10:57 PM
Sep 2022

We humans are held to this planet by our biology. Machines might be more comfortable on Pluto. Creatures whose biggest problem is dumping waste heat are not going to hang around here.

For all we know, the so-called "Dark Matter" in this universe could be machine intelligences that have left matter as we know it behind. They might already permeate everything.

I worry a lot more about amoral people than I do intelligent machines. Those human monsters are everywhere and some of them really do want to kill me.

LudwigPastorius

(9,110 posts)
15. Ultimately, sure.
Sun Sep 18, 2022, 02:30 PM
Sep 2022

But, until such a machine(s) effs off to a nice cozy spot in the Oort cloud, it will be here, competing for resources and doing what it decides it must to make sure that we don't turn it off.

LudwigPastorius

(9,110 posts)
16. The Cetacean Translation Initiative is using AI to try to learn how to speak to whales.
Mon Sep 19, 2022, 11:46 PM
Sep 2022

What if you could design a mission to record a data set of whale communications perfectly optimised for the latest machine-learning and language-processing tools to scan? What if you could capture not just whole conversations but hundreds of thousands of them, from scores of different whales totalling millions, perhaps billions, of vocalisation units? Would you then have a chance at speaking whale? This is the plan of the Cetacean Translation Initiative, or CETI.

-snip-

CETI will rig the seafloor with multiple listening stations. They will cover a 12.5‑mile radius and form the Core Whale Listening station, recording 24 hours a day. Alongside will be drones and ‘soft robotic fish’ equipped with audio and video recording equipment, able to move among the whales without disturbing them.

-snip-

All of these data will be available for the open-source community, so that everyone can get stuck in. Then the AIs will really be unleashed. They will analyse the coda click patterns that whales use to communicate, distinguishing between those of different clans and individuals. They will seek the building blocks of the communication system. By listening to baby whales learn to speak, the machines and the humans guiding them will themselves learn to speak whale.

All of the machine-learning tools will be part of an attempt to build a working model of the sperm whale communication system. To test this system, they will build sperm whale chatbots. To gauge if their language models are correct, researchers will test whether they can correctly predict what a whale might say next, based on their knowledge of who the whale is, its conversation history and its behaviours. Researchers will then test these with playback experiments to see whether the whales respond as the scientists expect when played whale-speak.
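The prediction test described above (guess what a whale "says" next, then check against reality) is the same setup used to evaluate any language model. Here is a minimal sketch of the idea, using an invented sequence of coda labels and simple bigram counts; CETI's actual data and models are, of course, far richer than this.

```python
from collections import Counter, defaultdict

# Hypothetical coda labels (invented for illustration); sperm whale
# codas are rhythmic click patterns, often named by click counts.
codas = ["1+3", "5R", "1+3", "5R", "1+3", "4R", "1+3", "5R"]

# Count which coda tends to follow which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(codas, codas[1:]):
    follows[prev][nxt] += 1

def predict_next(coda):
    """Most frequent successor of `coda` in the training sequence."""
    return follows[coda].most_common(1)[0][0]

print(predict_next("1+3"))  # prints "5R"
```

A model like this is "tested" exactly as the article describes: if its predictions match what whales actually produce next (and if whales respond sensibly to playback of its output), the researchers gain evidence that the model has captured real structure in the communication system.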


More here: https://www.theguardian.com/environment/2022/sep/18/talking-to-whales-with-artificial-enterprise-it-may-soon-be-possible?ref=thefuturist