
usonian

(26,382 posts)
Tue Nov 7, 2023, 03:11 PM Nov 2023

Risks with AI and ML

Last edited Tue Nov 7, 2023, 04:34 PM - Edit history (1)

https://www.cerias.purdue.edu/site/blog/post/ai_and_ml_sturm_und_drang/

This article was submitted to the venerable Risks Digest, which has been scaring sysadmins (myself included) for decades.

This is the long-form submission from the Purdue Center for Education and Research in Information Assurance and Security (CERIAS), posted by "spaf" (who is almost certainly security expert Gene Spafford).

The author agrees with me on one point: there seems to be a zeroth amendment allowing sociopathic individuals to run companies and (you guessed it) even governments. No cure in sight:

humanity is full of venal, greedy, and sociopathic individuals who are more likely to use technology to lead us to a "Blade Runner" future ... or worse.


I'll just quote four paragraphs (forum rules don't allow any more here):

First, LLMs such as ChatGPT, Bard, et al. are not really "intelligent." They are a form of statistical inference based on a massive ingest of data. That is why LLMs "hallucinate" -- they produce output that matches their statistical model, possibly with some limited policy shaping. They are not applying any form of "reasoning,"
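A toy sketch of the "statistical inference" point above (not how any real LLM is implemented, just an illustration): a bigram model picks each next word purely by observed frequency, so it can produce fluent-sounding output with no grounding in whether the result is true. The corpus and words here are invented for the example.

```python
import random
from collections import defaultdict

# Tiny made-up corpus: note it contains both a true and a false claim.
corpus = ("the moon is made of rock . the moon is made of cheese . "
          "the moon orbits the earth .").split()

# Count word-pair frequencies (a bigram "statistical model").
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    # Sample the next word proportionally to how often it followed
    # `prev` in the corpus -- no reasoning or fact-checking involved.
    options = counts[prev]
    return random.choices(list(options), weights=options.values())[0]

out = ["the", "moon"]
for _ in range(4):
    out.append(next_word(out[-1]))
print(" ".join(out))  # may well print "the moon is made of cheese"
```

Every continuation it emits matches its statistical model of the corpus; some of those continuations are simply false. That is the mechanism behind "hallucination," scaled up enormously.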

Second, these systems are not accountable in current practice and law. If a machine learning system (I'll use that term but cf my 2nd paragraph) comes up with an action that results in harm, we do not have a clear path of accountability/responsibility. For instance, who should be held at fault if an autonomous vehicle were to run down a child? It is not an "accident" in the sense that it could not be anticipated. Do we assign responsibility to the owner of the vehicle? The programmers? The testers? The stockholders of the vendor? We cannot say that "no one" is responsible because that leaves us without recourse to force a fix of any underlying problems, of potential recompense to the victims, and to general awareness for the public. Suppose we use such systems in safety- or correctness-critical systems (and I would put voting, healthcare, law enforcement, and finance as exemplars). In that case, it will be tempting for parties to say, "The computer did it," rather than assign actual accountability. That is obviously unacceptable: We should not allow that to occur. The price of progress should not be to absolve everyone of poor decisions (or bad faith). So who do we blame?

Third, the inability of much of the general public to understand the limitations of current systems means that any use may introduce a bias into how people make their own decisions and choices. This could be random, or it could be manipulated; either way, it is dangerous. It could be anything from gentle marketing via recency effects and priming all the way to Newspeak and propaganda. The further towards propaganda we go, the worse the outcome may be. Who draws the line, and where is it drawn?

As I wrote at the beginning, there are potential good uses for some of these systems, and what they are now is different from what they will be in, for example, a decade. However, the underlying problem is what I have been calling "The Trek futurists" -- they see all technology being used wisely to lead us to a future roughly like in Star Trek. However, humanity is full of venal, greedy, and sociopathic individuals who are more likely to use technology to lead us to a "Blade Runner" future ... or worse. And that is not considering the errors, misunderstandings, and limitations surrounding the technology (and known to RISKS readers). If we continue to focus on what the technology might enable instead of the reality of how it will be (mis)used, we are in for some tough times. One of the more recent examples of this general lack of technical foresight is cryptocurrencies. They were touted as leading to a more democratic and decentralized economy. However, some of the highest volumes of uses to date are money laundering, illicit marketplaces (narcotics, weapons, human trafficking, etc.), ransomware payments, financial fraud, and damage to the environment. What valid uses of cryptocurrency there might be (if there are any) seem heavily outweighed by the antisocial uses.


More in the linked article.

Silver Gaia

(5,409 posts)
1. Is there a link to the article itself?
Tue Nov 7, 2023, 03:51 PM
Nov 2023

Neither link seemed to lead to the article unless I'm missing something. Thanks!

usonian

(26,382 posts)
2. Fixed it!
Tue Nov 7, 2023, 04:30 PM
Nov 2023

Updates were failing, I think because I put an apostrophe or backtick in the “explain” field.

Thanks for pointing this out.

https://www.cerias.purdue.edu/site/blog/post/ai_and_ml_sturm_und_drang/

Silver Gaia

(5,409 posts)
3. Thank you!
Wed Nov 8, 2023, 01:08 AM
Nov 2023

I want to share this with students in a critical thinking class where AI is one of the topics to be discussed. 👍
