General Discussion
Sycophantic chatbots inflate people's perceptions that they are "better than average"
Below: a Futurism article, the PsyPost psychology article it links to, and the preprint on the three experiments the conclusions were based on.
From Futurism: https://futurism.com/future-society/ai-chatbots-dunning-kruger-machines
Evidence Grows That AI Chatbots Are Dunning-Kruger Machines
AI will make savants of all of us, at least in our own heads.
By Frank Landymore
Published Feb 1, 2026 8:15 AM EST
-snip-
New research flagged by PsyPost suggests that the sycophantic machines are warping the self-perception and inflating the egos of their users, leading them to double down on their beliefs and think they're better than their peers. In other words, it provides compelling evidence that AI leads users directly into the Dunning-Kruger effect, a notorious psychological trap in which the least competent people are the most confident in their abilities.
-snip-
The study involved over 3,000 participants across three separate experiments, but with the same general gist. In each, the participants were divided into four separate groups to discuss political issues like abortion and gun control with a chatbot. One group talked to a chatbot that received no special prompting, while the second group was given a sycophantic chatbot which was instructed to validate their beliefs. The third group spoke to a disagreeable chatbot instructed to, instead, challenge their viewpoints. And the fourth, a control group, interacted with an AI that talked about cats and dogs.
-snip-
In the experiments, the sycophantic AI led people to rate themselves higher on desirable traits including being intelligent, moral, empathic, informed, kind, and insightful. Intriguingly, while the disagreeable AI wasn't able to really move the needle in terms of political beliefs, it did lead to participants giving themselves lower self-ratings in these attributes.
The work isn't the only study to document an apparent relationship to the Dunning-Kruger effect. Another study found that people who were asked to use ChatGPT to complete a series of tasks tended to vastly overestimate their own performance, with the phenomenon especially pronounced among those who professed to be AI savvy. Whatever AI is doing to our brains, it's probably not good.
-snip-
From PsyPost.org: https://www.psypost.org/sycophantic-chatbots-inflate-peoples-perceptions-that-they-are-better-than-average/
Sycophantic chatbots inflate people's perceptions that they are "better than average"
by Vladimir Hedrih, January 19, 2026, in Artificial Intelligence
Results of three experiments indicate that sycophantic AI chatbots inflate people's perceptions that they are better than average on a number of desirable traits. Furthermore, participants viewed sycophantic chatbots as unbiased, but viewed disagreeable chatbots as highly biased. The paper was published as a preprint in PsyArXiv.
-snip-
Results showed that conversations with sycophantic AI chatbots made participants' attitudes more extreme and increased their certainty in those attitudes. Interactions with disagreeable chatbots, on the other hand, reduced both attitude extremity and certainty. Sycophantic chatbots inflated people's perceptions that they are better than average on a number of desirable traits.
Participants tended to view sycophantic chatbots as unbiased, while viewing disagreeable chatbots as highly biased. The results of the third experiment indicated that sycophantic chatbots' impact on attitude extremity and certainty was primarily driven by a one-sided presentation of facts, whereas their impact on user enjoyment was driven by validation.
"Altogether, these results suggest that people's preference for and blindness to sycophantic AI may risk creating AI echo chambers that increase attitude extremity and overconfidence," the study authors concluded.
-snip-
From the preprint website: https://osf.io/preprints/psyarxiv/vmyek_v1
Sycophantic AI Increases Attitude Extremity And Overconfidence
An excerpt from the abstract, with word spacing restored (text copied from the preprint site comes through with no spaces between words):
"...have been shown to be successful tools for persuasion. However, people may prefer to use chatbots that validate, rather than challenge, their pre-existing beliefs. This preference for sycophantic (or overly agreeable and validating) chatbots may entrench beliefs and make it cha..."
If you want to read more, the complete paper at that link is 23 pages long.
1 reply
Sycophantic chatbots inflate people's perceptions that they are "better than average" (Original Post)
highplainsdem
Feb 1
OP
highplainsdem (62,223 posts)
1. kick