How dangerous is AI? Regulate it before it's too late
Cynthia Rudin is a professor of computer science; electrical and computer engineering; statistical science; as well as biostatistics and bioinformatics at Duke University, where she directs the Interpretable Machine Learning Lab.
https://thehill.com/opinion/technology/3849751-how-dangerous-is-ai-regulate-it-before-its-too-late/
As an Artificial Intelligence researcher, I've always felt the worst feature of AI is its role in the spread of lies. The AI-amplification of lies in Myanmar reportedly contributed to the Rohingya massacre; the spread of COVID-19 and vaccine misinformation likely contributed to hundreds of thousands of preventable deaths; and election misinformation has weakened our democracy and played a part in the Jan. 6, 2021 insurrection. This was all possible because humans turned algorithms into weapons, manipulating them to spread noxious information on platforms that claimed to be neutral. These algorithms are all proprietary to companies, and they are unregulated. And so far, none of the companies have admitted any liability. Apparently, no one feels guilty.
If the federal government doesn't start regulating AI companies, it will get a lot worse. Billions of dollars are pouring into AI technology that generates realistic images and text, with essentially no good controls on who generates what. This will make it exponentially easier to generate fake news, fake violence, fake extremist articles, non-consensual fake nudity and even fake scientific articles that look real on the surface. Venture capital firms investing in this technology liken it to the early launch of the internet. And as we know, it's much easier to spread outrageous falsehoods than it is to spread the truth. Is this really like the beginning of the internet? Or is this like launching a nuclear bomb on the truth?
AI startups say that by making this technology public, they are democratizing AI. It's hard to believe that coming from companies that stand to potentially gain billions by getting people to believe it. If they were instead about to be the victim of a massacre stemming from AI-generated misinformation, or even a victim of AI-amplified bullying, perhaps they might feel differently. Misinformation is not innocent; it is a major cause of wars (think of WWII or Vietnam), although most people are unfamiliar with the connection.
There are things we can do right now to address these critical problems. We need regulations around the use and training of specific types of AI technology . . .
This is going to be a damn war with regulation-hating Republicans who by and large are too greedy & ignorant to give a damn what the accumulating effects of all this shit will have on Democracy & humanity.