Welcome to DU!
The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards.
General Discussion
How dangerous is AI? Regulate it before it's too late
Cynthia Rudin is a professor of computer science; electrical and computer engineering; statistical science; and biostatistics and bioinformatics at Duke University, where she directs the Interpretable Machine Learning Lab.
https://thehill.com/opinion/technology/3849751-how-dangerous-is-ai-regulate-it-before-its-too-late/
As an artificial intelligence researcher, I've always felt the worst feature of AI is its role in the spread of lies. The AI amplification of lies in Myanmar reportedly contributed to the Rohingya massacre, the spread of COVID-19 and vaccine misinformation likely contributed to hundreds of thousands of preventable deaths, and election misinformation has weakened our democracy and played a part in the Jan. 6, 2021 insurrection. This was all possible because humans turned algorithms into weapons, manipulating them to spread noxious information on platforms that claimed to be neutral. These algorithms are all proprietary to companies, and they are unregulated. And so far, none of the companies have admitted any liability. Apparently, no one feels guilty.
If the federal government doesn't start regulating AI companies, it will get a lot worse. Billions of dollars are pouring into AI technology that generates realistic images and text, with essentially no good controls on who generates what. This will make it exponentially easier to generate fake news, fake violence, fake extremist articles, non-consensual fake nudity and even fake scientific articles that look real on the surface. Venture capital firms investing in this technology liken it to the early launch of the internet. And as we know, it's much easier to spread outrageous falsehoods than it is to spread the truth. Is this really like the beginning of the internet? Or is this like launching a nuclear bomb on the truth?
AI startups say that by making this technology public, they are democratizing AI. It's hard to believe that coming from companies that stand to potentially gain billions by getting people to believe it. If they were instead about to be the victim of a massacre stemming from AI-generated misinformation, or even a victim of AI-amplified bullying, perhaps they might feel differently. Misinformation is not innocent; it is a major cause of wars (think of WWII or Vietnam), although most people are unfamiliar with the connection.
There are things we can do right now to address these critical problems. We need regulations around the use and training of specific types of AI technology . . .
This is going to be a damn war with regulation-hating Republicans who by and large are too greedy & ignorant to give a damn what the accumulating effects of all this shit will have on democracy & humanity.
7 replies
How dangerous is AI? Regulate it before it's too late (Original Post)
CousinIT
Feb 2023
OP
Irish_Dem
(81,266 posts)1. The dark side of human nature will run the world.
jaxexpat
(7,794 posts)2. AI is recognizable by its misuse of "then and than".
As well as "too, to and two". Surefire tell!
jalan48
(14,914 posts)3. It's a great mechanism for controlling the masses. Orwell understood so much about the future.
FakeNoose
(41,634 posts)4. It will mean the end of libel laws; no one will be able to sue for defamation or libel.
Every publisher from now on will say, "Oh that wasn't us. A chatbot wrote that."
patphil
(9,067 posts)5. AI further de-humanizes the world...it's essentially loveless.
I know it has its place, but it will be misused to the point of being very destructive.
It's human greed and hubris that will drive this forward to that logical conclusion.
Polybius
(21,900 posts)6. I dunno
C-3PO was pretty cool.
Easterncedar
(6,267 posts)7. See John Varley's "Press Enter."
I regret my advanced age less and less.