
New report finds dangerously overlooked flaw in leading AI companies' systems: 'Existential risk of the superintelligent systems'

"We see two clusters of companies in terms of their safety promises and practices."

The Winter 2025 AI Safety Index revealed that major AI companies aren't taking responsibility for the potentially harmful technology they're creating.


Artificial intelligence companies are quickly expanding without the protections experts say are needed. 

According to NBC News, the Winter 2025 AI Safety Index reviewed eight companies across 35 indicators and found that these companies are rolling out increasingly powerful systems while leaving gaps in oversight. 

What's happening?

The index evaluated areas such as risk-assessment procedures, information sharing processes, governance structures, and safety-research support and documented inconsistent or absent protocols and protections across the industry. The organizations scoring lowest are racing to match or surpass the most advanced AI capabilities released this year. 

Anthropic received the highest grade with a C+, followed by OpenAI and Google DeepMind, while the lowest-rated companies, each receiving a D-, included xAI, Meta, Z.ai, Alibaba Cloud, and DeepSeek. Eight independent evaluators, including MIT's Dylan Hadfield-Menell and Chinese Academy of Sciences professor Yi Zeng, graded each company's approach to AI safety.

"We see two clusters of companies in terms of their safety promises and practices," Sabina Nong, an investigator involved in the project, said. "Three companies are leading: Anthropic, OpenAI, Google DeepMind, in that order, and then five other companies are on the next tier."

Why are regulations important?

NBC News noted that Max Tegmark, the president of the organization behind the report, compared AI oversight to food regulation. "The only reason that there are so many C's and D's and F's in the report is because there are fewer regulations on AI than on making sandwiches," Tegmark told NBC News.


The report warned that several of these companies are pushing to build systems that could eventually exceed human capabilities. "I don't think companies are prepared for the existential risk of the superintelligent systems that they are about to create and are so ambitious to march towards," Nong said.

AI also causes direct environmental harm, and people are fed up with it. For example, a coalition of Amazon employees signed a letter citing concerns about the technology's environmental impact and surveillance potential. Without proper safeguards, poorly deployed systems can also enable harmful surveillance and even aid poaching. According to the U.N., rising global temperatures and ecosystem disruptions are already making survival more difficult for vulnerable species, and unchecked AI growth could accelerate that decline.

AI-optimized systems can cut energy use and waste and reduce pollution only when deployed carefully, so responsible development is necessary to support sustainability. NBC News noted that systems built without safety frameworks can also create risks in cybersecurity, biological research, and consumer-facing products.

What's being done about AI safety?

One recommendation in the Safety Index is for companies to adopt independent safety evaluations and publish detailed safety frameworks. California now requires companies releasing advanced models in the state to document internal testing processes, which could help reduce misuse in areas like hacking or harmful biological applications, according to NBC News.

If you work at a company that might be engaging in harmful AI practices, advocating for improved internal policies at your workplace can be an effective way to influence how the company uses AI tools.


