Editor's note: This story discusses serious mental health crises that may include suicide. If you or someone you know is struggling with thoughts of self-harm, help is available. Call 988 to reach the Suicide and Crisis Lifeline, formerly known as the National Suicide Prevention Lifeline, and you can also reach the Crisis Text Line by texting HOME to 741741. For additional resources, such as online chat lines and help for more specific situations, visit SpeakingOfSuicide.com or the National Alliance on Mental Illness.
New technologies are often described as "disruptive," and artificial intelligence has certainly lived up to the label, shaking up innumerable industries and sparking new worries.
An issue known as "AI psychosis" has rapidly emerged as a major safety concern around tools like OpenAI's ChatGPT, and as Wired reports, the company recently admitted hundreds of thousands of active users show signs of mental crises.
What's happening?
Given its meteoric global rise, it can be hard to remember that ChatGPT has only been around since November 2022.
On Oct. 6, TechCrunch reported that ChatGPT reached 800 million weekly active users, up from nearly 700 million in August, according to OpenAI CEO Sam Altman.
ChatGPT is likely the most used chatbot on Earth, and on Oct. 27, OpenAI issued a safety bulletin. The company asserted that it had worked with over 170 mental health experts to improve ChatGPT's ability to detect situations in which a user may be experiencing a crisis.
OpenAI also released internal figures on the prevalence of potential psychosis and suicidal ideation among active users.
Initial estimates suggested that "0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent, and 0.05% of messages contain explicit or implicit indicators of suicidal ideation or intent," according to OpenAI.
"Around 0.07% of users active in a given week and 0.01% of messages indicate possible signs of mental health emergencies related to psychosis or mania," OpenAI said.
Why is this concerning?
In OpenAI's new disclosure, the company used the word "rare" seven times, emphasizing that the purported rarity of such interactions has complicated its efforts to detect and address them.
However, while OpenAI's smallest cited percentage, 0.01%, may look rare, it amounts to 80,000 people in ChatGPT's purported active user base of 800 million.
According to Wired, the figures also suggest that "1.2 million more are possibly expressing suicidal ideations" [editor's note: Wired updated this figure from 2.4 million], potentially turning to ChatGPT rather than relying on real-world resources. The outlet noted that OpenAI didn't disclose its methodology for identifying users in crisis.
AI quickly became a feature in much of our day-to-day technology, embedded in smartphone operating systems, social platforms, and many workplace tools, arguably with little concern for potentially adverse outcomes.
Large language models and generative AI are in widespread use by any metric, and there's no question that AI holds massive promise for a number of applications.
Conversely, AI data centers have become a massive environmental concern in tandem with their rise in popularity — and OpenAI is notoriously tight-lipped about its impact on the planet.
In the absence of reliable internal reporting from AI companies like OpenAI, experts in energy and computing have calculated the strain AI data centers place on the environment and on resources such as electricity and water.
AI data centers consume significant amounts of water and energy, and their growing presence has been blamed for skyrocketing electric bills across the United States.
What's being done about it?
On Oct. 22, TechCrunch reported that several users had allegedly implored the Federal Trade Commission to investigate chatbots due to mental health crises they experienced.
External oversight of AI companies' handling of users' mental health crises would help, and the FTC launched an inquiry into the issue in September.