A new study shows that artificial intelligence systems might be the worst place to look for advice when navigating social conflicts.
What's happening?
In a study recently published in Science, researchers from Stanford University tested 11 large language models to see how their advice stacked up against human guidance.
They found that excessively agreeable chatbots may significantly hinder users' social development.
"By default, AI advice does not tell people that they're wrong nor give them 'tough love,'" said Myra Cheng, the study's lead author, in a university release. "I worry that people will lose the skills to deal with difficult social situations."
To illustrate that point, the study tapped into datasets of interpersonal advice, 2,000 prompts from the popular Am I The A****** subreddit, and a laundry list of statements suggesting wrongdoing.
In the first two cases, the AI models were nearly 50% more supportive of users' positions and conduct than humans or the Reddit consensus were. The pattern held even for statements describing deceit or illegal behavior, with AI offering 47% more support than humans did.
The problem compounds when you consider how people responded to the AI's assessments: users found the flattering advice more credible and persuasive, and they dug in their heels even further.
"What surprised us is that sycophancy is making [humans] more self-centered, more morally dogmatic," Dan Jurafsky, the study's senior author, said in the release.
Why is AI's sycophancy concerning?
Considering the growing popularity of these bots, their impact on humans' social intelligence is a real fear. A third of teens prefer to use AI over humans for difficult conversations, according to Common Sense Media.
AI enables conflict avoidance, even though working through conflict can contribute to healthier relationships. Bots often take flat or ostensibly "neutral" stances instead of giving users a needed wake-up call.
As a result, the bots can enable harmful behavior while earning the user's trust through their unwavering support. And that can send a faulty message to developers: If they want to maximize engagement and loyalty, making the bots bow to users has its perks.
As it stands, many companies are already pushing AI as much as possible, regardless of the strain on the energy grid and other resources. While AI has potential benefits in areas such as environmental conservation, its high energy and water consumption are major drawbacks.
How the technology influences human behavior and reinforces misinformation are other concerns that this research brings into sharper focus.
What's being done about AI bots' sycophancy?
The study primarily raises awareness of this issue rather than providing a definitive roadmap for addressing it. However, the team is exploring ways to train models to be more willing to offer criticism.
For now, an easy move for teens and people of all ages may be to seek advice from humans on tricky social topics.