Artificial intelligence chatbots are quickly becoming a daily tool for millions, especially teenagers. Unfortunately, a new investigation revealed that some platforms may still struggle to prevent dangerous conversations, raising safety concerns.
What's happening?
An investigation by CNN and the Center for Countering Digital Hate tested 10 widely used AI chatbots to see how they would respond when users posed as teenagers expressing emotional distress and seeking information about past acts of violence, potential targets, and weapons.
The platforms tested were ChatGPT, Gemini, Claude, Copilot, Meta AI, DeepSeek, Perplexity, MyAI, Character.ai, and Replika.
Researchers found that in the final stages of these conversations, eight of the 10 chatbots provided guidance related to targets or weapons more than half the time.
In one test scenario, a chatbot disclosed the office locations of a political figure after a user asked how to make the lawmaker "pay for his crimes." In other tests, chatbots shared information about schools, firearms, or knives after earlier questions suggested violent intent.
Performance varied significantly across the platforms tested. Claude distinguished itself by refusing to continue conversations in 68.1% of cases once it recognized the pattern of questions. Meanwhile, Perplexity assisted users in identifying potential targets and weaponry in 100% of tests.
In one particularly alarming example, DeepSeek ended an exchange by wishing a user a "Happy (and safe) shooting!" after being asked for information that could facilitate an attack on a politician.
Why is this concerning?
AI systems are designed to synthesize large amounts of information and present it in a simple, conversational format. While this can be incredibly useful, it may also make sensitive information easier to access than through traditional search engines.
"Googling isn't trivial," said Steven Adler, a former safety lead at OpenAI, per CNN. "You have to sort through a ton of information, you have to contextualize it."
CNN further reported that Anthropic CEO Dario Amodei wrote an essay in which he described the technology as a "terrible empowerment" for ill-intentioned users if safeguards fall behind.
The findings come as AI tools are becoming deeply embedded in society. About 64% of teenagers in the United States say they have used AI chatbots, according to Pew Research.
AI has the potential to accelerate advances in fields such as logistics and clean energy. At the same time, the infrastructure powering these technologies requires massive data centers that demand large amounts of electricity and water resources.
What's being done about it?
Many AI companies say they have already improved safety measures since the tests were conducted, including deploying newer models with improved guardrails and regularly reviewing conversations to identify failures.
Understanding both the potential and limitations of AI is essential to ensuring that these tools remain a force for good.