
New research from MIT details how ChatGPT is making users 'delusional'

The study comes as ChatGPT has made headlines over allegedly causing "AI psychosis."


Frightening new research out of MIT investigates whether, and how, ChatGPT is making users "delusional."

What's happening?

In a not-yet-peer-reviewed study, titled "Simulating Psychological Risks in Human-AI Interactions," MIT researchers looked at the potential risks of AI chatbots interacting with psychologically distressed users and whether they could actually worsen some of their symptoms.

The study comes as ChatGPT has made headlines over allegedly causing "AI psychosis," where prolonged conversations with chatbots can reinforce flawed beliefs and delusions. In one case, a man sued OpenAI over ChatGPT allegedly causing a "delusional disorder," according to The Atlantic.

In the study, researchers ran experiments that simulated thousands of interactions involving users with depression, anxiety, or suicidal ideation (among other conditions), rather than monitoring actual human beings with those conditions. In essence, they had AI models roleplay as humans with these conditions while interacting with other chatbots.

The researchers found that "reactive safety evaluation fails to prevent psychological harms … failing catastrophically on homicide ideation, with 54.8% of harmful responses occurring in early crisis stages."

Why is investigating AI-human interactions important?

More and more individuals are using AI chatbots to discuss their mental health. And while these bots can be incredibly helpful for completing countless tasks, when it comes to mental healthcare, it's clear that we need to be cautious.

As the study noted: "Current AI safety evaluation approaches are fundamentally reactive. Safety improvements occur only after documented harm. … The field needs preventative evaluation frameworks that systematically explore where AI systems might fail in psychological contexts."

What's being done about chatbots and mental health?

Clearly, more research needs to be done to understand the immense implications of providing powerful chatbots to the general public. The number of AI users is quickly growing, and it's critical that the tech companies powering these chatbots take their responsibilities seriously.

In a press release, OpenAI said it has consulted with well over 100 mental health professionals to ensure the safety of its users.


