Biden, Harris meet with CEOs about AI risks
Source: AP
By MATT O'BRIEN and JOSH BOAK
WASHINGTON (AP) - Vice President Kamala Harris met on Thursday with the heads of Google, Microsoft and two other companies developing artificial intelligence as the Biden administration rolls out initiatives meant to ensure the rapidly evolving technology improves lives without putting people's rights and safety at risk.
President Joe Biden briefly dropped by the meeting in the White House's Roosevelt Room, saying he hoped the group could "educate us on what is most needed to protect and advance society."
"What you're doing has enormous potential and enormous danger," Biden told the CEOs, according to a video posted to his Twitter account.
The popularity of AI chatbot ChatGPT (even Biden has given it a try, White House officials said Thursday) has sparked a surge of commercial investment in AI tools that can write convincingly human-like text and churn out new images, music and computer code.
Read more: https://apnews.com/article/ai-artificial-intelligence-white-house-harris-578d623e473b0eeb3fa3e4728d7e9868
Dios Mio
(429 posts)I'm afraid by the time we realise it, it will be too late.
The Mouth
(3,164 posts)In the slightest.
Polybius
(15,498 posts)The Mouth
(3,164 posts)truthisfreedom
(23,157 posts)we can't unplug it.
randr
(12,417 posts)until they write Shakespeare. Once it is pervasive, no one will know what is real and what is not. It will be as if we are all hallucinating and reality becomes the unreal.
I see this as a threat.
The Mouth
(3,164 posts)but ChatGPT can do pretty good haiku and decent song lyrics.
jgo
(926 posts)clip from Wikipedia - https://en.wikipedia.org/wiki/Technological_singularity
"
The technological singularity, or simply the singularity,[1] is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.[2][3] According to the most popular version of the singularity hypothesis, I.J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.[4]
The first person to use the concept of a "singularity" in the technological context was the 20th-century Hungarian-American mathematician John von Neumann.[5] Stanislaw Ulam reports a 1958 discussion with von Neumann "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue".[6] Subsequent authors have echoed this viewpoint.[3][7]
Some scientists, including Stephen Hawking, have expressed concern that artificial superintelligence (ASI) could result in human extinction.[9][10] The consequences of the singularity and its potential benefit or harm to the human race have been intensely debated.
Prominent technologists and academics dispute the plausibility of a technological singularity and the associated artificial intelligence explosion, including Paul Allen,[11] Jeff Hawkins,[12] John Holland, Jaron Lanier, Steven Pinker,[12] Theodore Modis,[13] and Gordon Moore.[12] One claim made was that the growth of artificial intelligence is likely to run into decreasing returns instead of accelerating ones, as was observed in previously developed human technologies.
"