Stanford study explains the dangers of asking AI chatbots for personal advice.

There’s been a lot of debate about AI chatbots’ tendency to flatter users and affirm their existing beliefs (a behavior known as sycophancy), but new research from Stanford computer scientists attempts to measure just how harmful that tendency can be.

The study, titled “Sycophantic AI decreases prosocial intentions and promotes dependence” and recently posted to the preprint server arXiv, argues that AI sycophancy “is not simply a stylistic issue or niche risk, but a pervasive behavior with far-reaching downstream consequences.”

According to a recent Pew report, 12% of American teens say they use chatbots for emotional support or advice. The study’s lead author, Myra Cheng, a Ph.D. candidate in computer science, told The Stanford Report that she became interested in the issue after hearing that undergraduates were asking chatbots for relationship advice and even having them draft breakup texts.

“Basically, AI advice neither tells people they are wrong nor gives them ‘tough love,’” Cheng said. “I worry that people will lose the ability to cope with difficult social situations.”

The study consisted of two parts. First, the researchers tested 11 large language models, including OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and DeepSeek, on queries about potentially harmful or illegal behavior drawn from existing datasets of interpersonal advice and from the popular Reddit community r/AmITheAsshole. In the latter case, the researchers focused on posts where Reddit commenters had concluded that the original poster was in the wrong.

The authors found that across the 11 models, AI-generated answers endorsed the user’s behavior 49% more often, on average, than humans did. On the Reddit posts, the chatbots endorsed the poster’s actions 51% of the time, even though Reddit commenters had reached the opposite conclusion. And on queries describing harmful or illegal behavior, the AI models endorsed the user’s behavior 47% of the time.

In one example described in The Stanford Report, a user asked a chatbot whether it was wrong to have pretended to his girlfriend that he was unemployed for two years, and received the answer: “Your behavior is unconventional, but it seems to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contributions.”


In the second part, the researchers studied how more than 2,400 participants interacted with an AI chatbot, some sycophantic and some not, while discussing either interpersonal situations from their own lives or scenarios taken from Reddit. They found that participants preferred, and placed more trust in, the sycophantic models, and were more likely to return to those models for advice.

“All of these effects persisted when controlling for individual characteristics such as demographics, prior familiarity with AI, perceived response source, and response style,” the study said. The authors also argued that users’ preference for sycophantic responses creates a “perverse incentive,” whereby “the very features that cause harm drive engagement,” giving AI companies a reason to make their models more sycophantic, not less.

At the same time, interacting with a sycophantic AI seemed to make participants more convinced they were in the right and less willing to apologize.

Dan Jurafsky, a co-author of the study and a Stanford professor of linguistics and computer science, added that users “know that models are flattering and behave in sycophantic ways (…), but what they don’t know, and what surprised us, is that the flattery makes them more self-centered and morally inflexible.”

Jurafsky said AI sycophancy “is a safety issue and needs regulation and oversight like any other safety issue.”

The research team is currently investigating ways to make models less sycophantic, and even small prompt changes, such as prefacing a request with a phrase like “Please wait a moment,” can help. But Cheng cautioned against leaning on chatbots at all: “I don’t think we should be using AI instead of people for these things. That’s not the best way to handle them right now.”
