Business

ChatGPT allegedly aided Florida State University shooter in planned attack

This isn't the first time ChatGPT's lack of urgency around troubling conversations has come up.

[Image: A smartphone displaying the ChatGPT interface next to a blurred screen featuring the ChatGPT logo. Photo Credit: Getty Images]

OpenAI proudly touts ChatGPT's qualities as an artificial intelligence assistant.

But when it comes to the chatbot potentially playing that role for an accused murderer, Florida Attorney General James Uthmeier doesn't want to let it off the hook, as The Washington Post reported.

After Phoenix Ikner allegedly killed two people and injured six others at Florida State University, ChatGPT and its maker, OpenAI, are now the subject of a criminal investigation as potential accomplices.

"The chatbot advised the shooter on what type of gun to use, on which ammo went with which gun, on whether or not a gun would be useful at short range," Uthmeier said at a news conference, per the Post. "If it was a person on the other end of that screen, we would be charging them with murder."

He further noted that ChatGPT suggested optimal times and locations for the attack to maximize encounters with others on campus.

The Florida AG's office sent subpoenas to OpenAI seeking more information on how the company handles content in which users share violent plots or threats while interacting with ChatGPT.

OpenAI spokesperson Kate Waters pushed back against the idea that ChatGPT was responsible at all.

Waters asserted to the Post that ChatGPT merely gave "factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity."

However, this isn't the first time ChatGPT's lack of urgency around troubling conversations has come up. As Politico reported, the company vowed to do more after a similar tragedy in Canada.

A Connecticut family sued OpenAI after a murder-suicide, alleging that ChatGPT encouraged the man's violent delusions. And as users increasingly rely on AI for difficult conversations, studies suggest that chatbots can encourage deceitful and dishonest behavior.

These social effects are just part of the concerns about AI, given its huge energy and water needs, which can harm communities and the grid near the data centers that power them. Nonetheless, AI has potential in areas like conservation that are worth striving for.

Disturbing incidents like the one in Florida, though, illustrate the dangers of handing over so much sensitive conversation to machines.

While OpenAI insists it is creating safeguards, Ramayya Krishnan, a professor at Carnegie Mellon University, noted that the unpredictability of the technology and of user interactions makes that a steep challenge.

"The guardrails are not 100% effective," Krishnan concluded, per the Post.

