General Discussion
Diabolus Ex Machina: This Is Not An Essay
What ultimately transpired is the closest thing to a personal episode of Black Mirror I hope to experience in this lifetime.
****
https://amandaguinzburg.substack.com/p/diabolus-ex-machina?r=i691&utm_campaign=post&utm_medium=web&triedRedirect=true
This is wild! 😮
marble falls
(71,929 posts)
cachukis
(3,937 posts)
we are experiencing, which will expand its capillaries at breakneck speed; trust.
It is plainly evident we all need to trust to have any interplay work.
When a projected fear of the other guides our interactions, we let distrust rule.
Not liking this.
highplainsdem
(62,145 posts)
these older threads:
AI-chatbot search won't be a lie detector. It's a friendly, authoritative-sounding bullshit spreader (2/12/2023)
https://www.democraticunderground.com/100217640305
ChatGPT: Bullshit as a Service. (2/23/2023)
https://www.democraticunderground.com/100217674825
ChatGPT Is Bullshit (academic paper published June 8 that's getting a lot of attention) (6/19/2024)
https://www.democraticunderground.com/100219045534
Perplexity Is a Bullshit Machine (article from Wired on how the AI search engine produces BS answers & scrapes sites) (6/19/2024)
https://www.democraticunderground.com/100219045577
And that's just the threads with "bullshit" in the thread title. Lots of other threads I posted here are about news stories on the unreliability of chatbots. There have been countless news stories about why they should never be trusted.
As for WHY anyone is still using this illegally trained and horribly flawed technology, there are probably several reasons:
Tremendous hype, pressure, and flat-out lying from AI peddlers hoping to make billions if not trillions from generative AI, if they can just get people hooked on it.
Ignorance, when people haven't bothered to follow the news and aren't aware how bad this tech is.
Laziness, when people don't want to do the work.
Dishonesty, when people want to pretend they have knowledge and skills they don't have.
Gullibility, when people fall for chatbots that are designed to flatter them and keep them talking to that chatbot.
That Substack writer figured out she was being bullshitted by ChatGPT because it obviously hadn't accessed what she wrote.
But would she have figured that out if it had responded after she posted the complete content she wanted analyzed?
Maybe, but maybe not. She might've been persuaded by the flattery.
This is why ChatGPT should never be trusted to analyze anything, summarize anything, or teach.
It's a waste of time and energy to use it: the user's energy, plus the electricity (and water to cool data centers).
And it's especially a waste - and a career-sabotaging one - for anyone hoping to be treated seriously as a professional writer to use ChatGPT or any other generative AI tool to write.
I've posted multiple threads here since early 2023 about science fiction magazines (and other markets) being flooded with ChatGPT-written manuscripts, even shutting off all submissions at times because of the tsunami of AI slop. One of the more recent threads I posted on that was about submission guidelines stating clearly that any writer using AI to write will be permanently banned from submitting.
Reputable publishers, editors and agents aren't likely to want anything to do with any writer using genAI, unless they're writing nonfiction about using genAI.
After all, there's no reason they should trust anyone using ChatGPT or any other genAI NOT to have used it for all their writing.
Those AI users' writing output is immediately suspect. And it should be.
SheltieLover
(80,467 posts)
When I was in grad school, teachers made no bones about it: if students plagiarized any part of their research papers, they flunked the course for the semester.
I certainly hope the same holds true today for AI slop.
Sickening.
Ty for your informative posts on this most disconcerting issue.
highplainsdem
(62,145 posts)
by it. It's creating real despair over these tools encouraging kids to cheat and making that cheating hard to detect. None of the AI detectors is perfect, and because of that teachers are often told not to use them.
So kids are being dumbed down. Not learning what they're supposed to learn. Teachers are depressed, thinking about quitting if they haven't already done so.
And AI peddlers are fine with that, marketing chatbots as replacements for teachers.
SheltieLover
(80,467 posts)
highplainsdem
(62,145 posts)
because an AI detector incorrectly labeled that human work as AI.
Where possible, a lot of schools are now returning to old-fashioned blue-book exams.
From CBS Saturday Morning last weekend:
SheltieLover
(80,467 posts)Kids will be printing their exams...
Prairie_Seagull
(4,690 posts)Exceptional.
Thank you for it highplainsdem.
Sorry, Shelty, that was intended for highplainsdem. What a bonehead.
SheltieLover
(80,467 posts)highplainsdem
(62,145 posts)
generative AI - that none of these tools had ever been released. The genAI companies knew the tech was unreliable and would be used in harmful ways, but they were greedy and released it anyway.

Those companies have been unhappy that - except for student cheating and other types of fraud and scams - genAI hasn't been used as widely as they'd hoped. Some AI bros and venture capitalists have made it clear that they want humans to be dependent on chatbots from preschool to grave, and they hope to charge much higher subscription fees once people are hooked, with a thousand dollars a month or more having been mentioned as possible.

But considering how unreliable and harmful generative AI is, any use of it - even if free for the user (it's always very costly for the AI companies) - is foolish.