
Omaha Steve

(99,748 posts)
Fri May 5, 2023, 07:23 AM May 2023

Biden, Harris meet with CEOs about AI risks

Source: AP

By MATT O'BRIEN and JOSH BOAK

WASHINGTON (AP) — Vice President Kamala Harris met on Thursday with the heads of Google, Microsoft and two other companies developing artificial intelligence as the Biden administration rolls out initiatives meant to ensure the rapidly evolving technology improves lives without putting people’s rights and safety at risk.

President Joe Biden briefly dropped by the meeting in the White House’s Roosevelt Room, saying he hoped the group could “educate us” on what is most needed to protect and advance society.

“What you’re doing has enormous potential and enormous danger,” Biden told the CEOs, according to a video posted to his Twitter account.

The popularity of AI chatbot ChatGPT — even Biden has given it a try, White House officials said Thursday — has sparked a surge of commercial investment in AI tools that can write convincingly human-like text and churn out new images, music and computer code.

Read more: https://apnews.com/article/ai-artificial-intelligence-white-house-harris-578d623e473b0eeb3fa3e4728d7e9868

8 replies
Biden, Harris meet with CEOs about AI risks (Original Post) Omaha Steve May 2023 OP
I may be paranoid but I believe AI is a threat. Dios Mio May 2023 #1
I do not fear AI The Mouth May 2023 #2
Will you if they launch all the world's nukes simultaneously? Polybius May 2023 #5
Not for very long. The Mouth May 2023 #6
It won't worry me until truthisfreedom May 2023 #3
AI is the equivalent of a thousand monkeys typing randr May 2023 #4
Not the Bard The Mouth May 2023 #7
The Technological Singularity ... jgo May 2023 #8

Dios Mio

(429 posts)
1. I may be paranoid but I believe AI is a threat.
Fri May 5, 2023, 08:35 AM May 2023

I’m afraid by the time we realise it, it will be too late.

randr

(12,417 posts)
4. AI is the equivalent of a thousand monkeys typing
Fri May 5, 2023, 01:23 PM May 2023

until they write Shakespeare. Once it is pervasive, no one will know what is real or what is not. It will be as if we are all hallucinating and reality becomes the unreal.
I see this as a threat.

jgo

(926 posts)
8. The Technological Singularity ...
Fri May 5, 2023, 01:44 PM May 2023

clip from Wikipedia - https://en.wikipedia.org/wiki/Technological_singularity

"
The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.[2][3] According to the most popular version of the singularity hypothesis, I.J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.[4]

The first person to use the concept of a "singularity" in the technological context was the 20th-century Hungarian-American mathematician John von Neumann.[5] Stanislaw Ulam reports a 1958 discussion with von Neumann "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue".[6] Subsequent authors have echoed this viewpoint.[3][7]

Some scientists, including Stephen Hawking, have expressed concern that artificial superintelligence (ASI) could result in human extinction.[9][10] The consequences of the singularity and its potential benefit or harm to the human race have been intensely debated.

Prominent technologists and academics dispute the plausibility of a technological singularity and the associated artificial intelligence explosion, including Paul Allen,[11] Jeff Hawkins,[12] John Holland, Jaron Lanier, Steven Pinker,[12] Theodore Modis,[13] and Gordon Moore.[12] One claim made was that the growth of artificial intelligence is likely to run into diminishing returns instead of accelerating ones, as was observed in previously developed human technologies.
"
