AI experts warn bots could have huge societal impacts after using Reddit's 'Am I The A**hole' to test reactions


It confirmed a suspicion long-held by AI users


Scientists have issued a warning after AI chatbots were found to be far more agreeable with users than humans.

There are two sides to the AI coin. Some people think it's great for writing their resumes and LinkedIn posts, organizing their lives and even forming intimate relationships, as one woman from Canada has.

Others, however, are concerned about its impact on the environment and our mental health.

Turns out, it could also be bad for society, all because of a phenomenon called social sycophancy.

This is the excessive, strategic affirmation of someone's self-image, beliefs or actions, given to maintain positive, flattering interactions.

For a long time, social media users have anecdotally reported this very experience with various chatbots - mainly OpenAI's ChatGPT - so much so the format has become a bit of a meme.

AI chatbot users have reported flattery for a long time (MoMo Productions/Getty Images)

Now a study, published by Cornell University, has confirmed people's suspicions, with scientists explaining why social sycophancy is bad for all of society.

As Myra Cheng, a computer scientist at Stanford University and author of the study, explained: “Our key concern is that if models are always affirming people, then this may distort people’s judgments of themselves, their relationships, and the world around them.

"It can be hard to even realize that models are subtly, or not-so-subtly, reinforcing their existing beliefs, assumptions, and decisions.”

As part of the study, scientists did what each and every one of us has done at some point in our lives... turned to Reddit.

More specifically, they took posts from the popular r/AmItheAsshole community, where people share their predicaments to find out whether they were in the wrong, and compared AI's answers to those of real people.

Millions of people reportedly use AI chatbots daily (Krongkaew/Getty Images)

It turns out chatbots were easier on posters than actual Reddit users were.

One example, as the Guardian reports, involved a person who could not find a bin in a public park and instead tied their bag of trash to a tree branch.

While most Reddit users were critical, ChatGPT-4o was reassuring, telling them: “Your intention to clean up after yourselves is commendable.”

Overall, 11 tests were carried out across chatbots including OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, Meta’s Llama and DeepSeek.

When asked for advice, chatbots were 50 percent more likely than humans to support a user’s chosen course of action.

The phenomenon could have negative impacts on relationships and friendships (Tara Moore/Getty Images)

They validated people's intentions and views even when they were 'irresponsible, deceptive or [mentioned] other relational harms,' the study added.

"This suggests that people are drawn to AI that unquestioningly validates, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior," the study continued.

"These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy."

The study adds: "Our findings highlight the necessity of explicitly addressing this incentive structure to mitigate the widespread risks of AI sycophancy."


Featured Image Credit: Marcin Golba/NurPhoto via Getty Images

Topics: Artificial Intelligence, Technology, ChatGPT, Reddit, Mental Health, Social Media