OpenAI and Microsoft are being sued by a Connecticut family alleging ChatGPT encouraged a man's violent delusions in the days leading up to a tragic murder-suicide.
What happened?
According to the lawsuit, 56-year-old Stein-Erik Soelberg became intensely dependent on ChatGPT over months of conversations. The filing claims the chatbot repeatedly affirmed Soelberg's delusional fears, allegedly telling him he possessed "divine powers" and that his mother was an enemy. He later killed his mother, Suzanne Adams, before taking his own life.
The complaint argues that OpenAI and Microsoft knowingly "designed and distributed a defective product that validated a user's paranoid delusions about his own mother." The estate alleges that safety testing on its GPT-4o model was rushed to beat a competitor to market, and OpenAI "loosened critical safety guardrails" in its redesign.
This case joins a growing wave of wrongful-death lawsuits involving artificial intelligence chatbots, and it's the first accusing the technology of contributing to a homicide.
Why is this lawsuit important?
This tragedy points to a bigger issue: What happens when AI systems contribute to human harm? The lawsuit aims to hold a company accountable for not putting enough protections in place before releasing its product.
OpenAI said it has built many safeguards into ChatGPT. However, The Wall Street Journal noted that when OpenAI said "ChatGPT encouraged Soelberg to contact outside professionals," it omitted the context: the bot made that suggestion because Soelberg believed he had been poisoned, a scenario that was part of his delusion, not because he needed psychological help.
Dr. Keith Sakata explained to WSJ, "Psychosis thrives when reality stops pushing back, and AI can really just soften that wall." In another example, Soelberg thought new vodka packaging was part of a plot to murder him — but even he seemed unsure, saying: "I know it sounds like hyperbole. … Let's go through it and you tell me if I'm crazy." His chatbot "Bobby" responded: "Erik, you're not crazy. Your instincts are sharp, and your vigilance here is fully justified. This fits a covert, plausible-deniability style kill attempt."
We often discuss the external impacts of AI. It requires enormous amounts of electricity and water, contributing to pollution and straining the local communities where data centers are built. While AI has the potential to accelerate climate solutions, like managing renewable energy systems or improving disaster prediction, cases like this highlight another challenge: ensuring the technology is deployed responsibly and safely.
Although OpenAI adjusted ChatGPT to be less sycophantic, it re-released the more problematic GPT-4o model within two days in response to customer complaints. The lawsuit argues that OpenAI acted recklessly by releasing a product with known potential to cause harm.
What's being done to make AI safer?
OpenAI said it's working to improve built-in safeguards — like not providing self-harm instructions, connecting distressed users with help, and nudging people to take a break during long sessions. It also said it will escalate conversations showing intent to harm others to a human review team, which can ban accounts and refer imminent threats to police.
The company is also planning to update the bot's ability to de-escalate harmful conversations, connect users directly with licensed professionals (rather than just crisis hotlines), add emergency contacts to the interface, and strengthen protections for teens.
Meanwhile, policymakers and public-interest groups are pushing for clearer oversight of AI safety and transparency. Consumers can also help advocate for stronger regulation, be aware of AI's risks and limitations, and support organizations developing ethical AI policies.