General Discussion
Related: Editorials & Other Articles, Issue Forums, Alliance Forums, Region ForumsAfter Musk's update to Grok, it is now calling itself "MechaHitler".
After Muskâs update to Grok, it is now calling itself âMechaHitlerâ.
— News Eye (@newseye.bsky.social) 2025-07-08T22:39:53.551Z
Yes, this is real.
No shit!
More goodies are linked below. Comments on this are fast and furious on Hacker News, so I'll point you there for links as they show up, and as tweets disappear, but are archived.
https://news.ycombinator.com/item?id=44504709
More.
Elon's AI is now actively recommending a second Holocaust
— K. Thor Jensen (@kthorjensen.bsky.social) 2025-07-08T21:10:25.884Z
https://archive.is/QLAn0
Someone caught the grok change on github.
https://github.com/xai-org/grok-prompts/commit/c5de4a14feb50b0e5b3e8554f9c8aae8c97b56b4
Hey, Elon, way to attract those "centrists" to your new political party!
Littlered
(347 posts)That you can make grok behave in any manner you choose? Notice the they didnt include their part of the interaction that started them down this road. I can say that I am a power user? Not sure if that is truly accurate, but I have used grok to make some very serious life decisions. From healthcare choices to cars and Ive been nothing but impressed. It reminds me of the old search engine days before they dumbed them down. Its all based on the quality of the question.
usonian
(26,171 posts)It's like playing with electricity. Only the knowledgeable and cautious survive.
Littlered
(347 posts)WhiskeyGrinder
(27,128 posts)highplainsdem
(62,747 posts)any request for information, you'll have to check the chatbot's answer against reputable sources. Why do you think the companies offering chatbots warn that they make mistakes? And there have been lots of news stories about chatbots providing dangerous advice as well as misinformation.
Littlered
(347 posts)Are your qualifications to make that statement? If I already knew the answers to the questions I ask, why on earth would I waste my time? I dont have enough time to spare as it is. I have actual proof to back up what I say, like making several thousand dollars off of a car sale, after I learned that it was far rarer than I thought. Besides it isnt a chat bot its an ai program. As I said in a previous reply, the information provided is only as good as the questions asked.
Another great example. I am having a procedure done here in a couple hours. I am probably being overly worrisome about it. No probably about it. I am being overly worrisome.. I have been using grok to alleviate my anxiety over the course of this whole affair. But the main thing its provided is a list of very good questions to ask the provider. Things that they never listed as options.
One other feature (that probably scares some people) is its ability to remember conversations. As it learns more details about me it can be far more specific. Like keeping track of the fish Ive stocked in our pond etc.
Anyway, to each their own. Ive tried a few other programs and was unimpressed. I read the disclaimers and understand what they mean.
highplainsdem
(62,747 posts)probably tens of thousands of social media posts , over the last 2-1/2 years, from experts on AI who are both critics and proponents, and from writers, visual artists, musicians, filmmakers and educators, quite a few of whom I corresponded with in DMs on X and Bluesky.
I've posted hundreds of threads here about AI, almost all of them about one or more of those articles and studies, or occasionally about social media posts, usually from experts in the field.
You're a Star member and can use Advanced Search. If you use it to search for thread titles by highplainsdem, using either AI or ChatGPT as the search term, and going back through January of 2023, you'll find a lot, though not all, of the OPs I've posted about genAI. But since you get only 250 search results at a time, you'll have to do more than one search for each term because the first search for each won't go back the entire 2-1/2 years. and you'll have to change the search dates within that 2-1/2 year period.
I've also used a number of genAI models. Including Grok.
Sigh. You don't even understand what a chatbot is:
https://en.wikipedia.org/wiki/Chatbot
https://en.wikipedia.org/wiki/Grok_(chatbot)
Because if you have a field of expertise, you're likely to quickly discover that chatbots aren't really expert in anything except bullshitting. Nor are they even aware if they're providing a correct answer.
If you're using a chatbot to save time, then you probably aren't checking whether its responses are accurate. Or maybe you're checking only part of them - but with chatbots, the accuracy of one answer is no guarantee of the accuracy of the next.
But they write well, because they were trained on all the text the AI companies could steal.
You didn't even know Grok is a chatbot.
You do know now that Grok will spout antisemitic garbage, but you're fine with that.
You're also apparently fine with using software that works only because of the theft of the world's intellectual property for training data - unless you're as unaware of that as you are of Grok being a chatbot, despite all the news stories, and all the DU threads, about the theft and the lawsuits and the harm done to creatives and everyone else whose work was stolen.
So maybe you're also fine with all the misinformation and disinformation, the deepfakes including deepfake porn, the harm to education and workers and the internet and the environment. Or maybe you somehow missed all the news stories on those, too.
Or you simply don't care. As long as you can get a quick answer from Grok that might be correct.
Littlered
(347 posts)but it seems like youre more interested in flexing than engaging. Lets clear up a few things without the sanctimonious tone.
First, Im well aware of what a chatbot isGrok included. The Wikipedia links you posted confirm exactly what I already know: Grok is a generative AI chatbot built by xAI, designed for conversational tasks. Calling it an AI program doesnt make it less of a chatbot; its just a broader term. Splitting hairs over terminology doesnt make you the expert youre posing as.
Second, your claim that Im oblivious to Groks issues is way off base. Ive read the same news you haveGroks antisemitic missteps, the white genocide fiasco, and the broader concerns about AI training data. Im not endorsing any of that. I use tools like Grok selectively, cross-checking answers when it matters, because Im not naive about their limitations. You assume Im blindly trusting AI output, but thats just you projecting.
As for the theft of intellectual property, I get ittheres a real debate about training data ethics. But acting like Im clueless about the lawsuits or environmental impact of AI is just patronizing. Ive followed those discussions, including on this board, and Im not some apathetic drone shrugging off deepfakes or disinformation. I care about accuracy and impact, which is why I dont just swallow AI responses whole or preach about them like gospel.
If youve got specific critiques about AIs harms, Im all earsshare some of those threads youre so proud of. But spare me the superiority complex. Its a message board, not a PhD defense. Lets discuss, not dunk.
highplainsdem
(62,747 posts)What you said earlier - quoting you again -
made it clear that you weren't aware, or you wouldn't have claimed that Grok isn't a chatbot.
And while you say you're not clueless about the problems with genAI, and with Grok in particular, that knowledge hasn't stopped you from using the chatbot for information you could've found elsewhere without using a fundamentally unethical tool. So apparently you've just shrugged off all those concerns. As is also shown by your putting quotes around "theft" when referring to the theft of intellectual property.
I've already posted hundreds of threads about specific harms with genAI, and I'm not going to repost them in this subthread for you - especially after you've indicated how little you care about those harms.
I have some sympathy for people forced to use genAI for their work or for school. But people who use it voluntarily, just to save themselves a bit of time or effort - and it's usually very little time saved if they check its responses - are not only dumbing themselves down by not doing their own research, and taking traffic away from websites that deserve it, but are giving a thumbs-up to the intellectual property theft for training data and all the other harms inextricably linked to genAI.
Happy Hoosier
(9,577 posts)Its basically a search engine with a natural language parser and generator. It can collate data and present what appears to be an argument, but it cannot reason. At least, not yet.
Littlered
(347 posts)I meant to say. Its like having the best assistant ever, but you still need to check their work.
There are always gonna be luddites.
SheltieLover
(81,408 posts)obamanut2012
(29,474 posts)purple_haze
(401 posts)as do most of the big AI models. If the input is based, the output will be based. If the input is woke, the output will be woke. This is not news.
usonian
(26,171 posts)If it's source, that's a lot of source to add/alter. The github change in instructions seems small for such a big shift, by that's above my pay grade to say.
erronis
(24,206 posts)The models are trained on whatever data they have been fed. Most data will be from western civilizations, and probably white europeans.
The queries can be adjusted to get whatever response the user wants. There are thousands of published queries that show how to jailbreak the "guardrails" around the inputs and outputs. Of course the "guardrails" are human-injected rules such as "when someone asks about a topic involving Israel or Palestine give more credence to the Israeli position."
Wiz Imp
(10,243 posts)If it was totally based on the input, then why was Grok giving completely different responses to the EXACT SAME QUESTIONS that were asked a week or so before? A week ago, Grok wasn't responding with antisemitic tropes.
https://www.nbcnews.com/tech/elon-musk/grok-elon-musks-ai-chatbot-seems-get-right-wing-update-rcna217306
Under Musks announcement post, the chatbot appeared to condone the use of the R-word on the platform, writing free speech is prioritized here. The word has been widely embraced in right-wing circles even though many consider it a disability slur.
Last month, before the update, Grok answered a similar question by largely condemning use of the R-word, saying it remains widely offensive in 2025, especially to those with intellectual disabilities, and is largely unacceptable in mainstream settings due to its history as a slur. At the time, Grok noted, though, that some online communities, influenced by figures like Elon Musk, tolerate its use as a pushback against woke culture. Acceptability varies by context, but its use often causes harm, making it a polarizing term.
The tone of Groks answers also seemed to change when it was discussing the topic of Jewish people in Hollywood. Previously, in responses about the topic, Grok noted that while Jewish people were integral in the creation of the American film industry, claims of Jewish control are tied to antisemitic myths and oversimplify complex ownership structures. Media content is shaped by various factors, not just leaders religion. But responding to a different question after the update, Grok took a more definitive tone, criticizing Jewish executives for forced diversity.
If it was totally based on "input" then why did Musk make a big deal out of changing it to make it "less politically correct"? I mean , if the user can make it give any response you want based on your input, then no change would be necessary nor would it change anything. Fact is, Grok apparently primarily gets its information from a dataset of publicly available internet text, including web pages, metadata, and text extracts. Change the exact make up of that dataset and you will get different responses and that appears to be what has happened. Grok responded itself saying Musk "dialed down" the "woke" filters.
highplainsdem
(62,747 posts)ago, got zero response, and finally deleted it because I had company coming over and didn't have time to look for anything else on it. Got back online to find your OP here and a lot of articles. I posted the one from Wired in LBN, along with the Bluesky post I'd posted in GD earlier, which is from a NYT reporter with a screenshot from X showing this latest insanity from Grok.
usonian
(26,171 posts)And another.
https://www.democraticunderground.com/100220470243
My usual pattern is to find interesting articles via Hacker News and post, with only a quick scan of Latest items to see if it's already there.
Despite my admonition to focus on resistance.
I withhold way more than I post. Sending some via DM.
What works? I dunno. I usually post late at night anyway, not an attention getter.
marble falls
(72,391 posts)ForgedCrank
(3,118 posts)earlier and it had me laughing my arse off. Not that I agree with the garbage it is spitting out, but with the entire concept of AI. This is all predictable. What do any of them expect when you write software that trains itself by consuming crap that it collects online?
It only gets worse from here as people who are so lazy that they begin to rely on this crap AI software to think for them.