Artificial intelligence leader Anthropic's rapid ascent in the "AI wars" screeched to a halt after the entire source code for its flagship chatbot Claude was leaked, the Wall Street Journal reported.
What's happening?
OpenAI's flagship chatbot is known as ChatGPT; Anthropic's Claude is its increasingly lauded rival.
ChatGPT might have broader name recognition, but Claude has repeatedly bested it on several metrics. It "emerged as the definitive victor" in Tom's Guide's "2026 AI Madness," a technology news take on March Madness.
On March 31, Anthropic deployed what was supposed to be a routine update — and inadvertently exposed Claude's source code in its entirety.
The incident, which occurred overnight, left Anthropic "racing to contain the fallout," according to the WSJ.
As the Guardian noted, the leak, which was attributed to "human error," was Anthropic's second in a span of days. On March 26, Fortune reported that internal data not intended for public release was accidentally exposed through Anthropic's content management system, or CMS.
Anthropic ultimately detected the source code leak and acted on it, but the data remained exposed for hours, long enough for users to download and mirror Claude Code across the web.
A user on the developer blogging platform DEV Community analyzed the leak, describing its contents as Claude's "agentic harness," "strategically valuable" code that functioned like a blueprint for the product.
Why is this concerning?
While the DEV Community analysis speculated that Anthropic's leak was a possible PR stunt, it also acknowledged the incident was "genuinely damaging" to the brand.
Anthropic was founded by former OpenAI executives concerned about responsible development and AI safety, and a pair of embarrassing leaks could undermine the brand's core identity.
In 2025 and 2026, the AI industry came under sharper public scrutiny for several reasons, including the technology's ability to make costly or even dangerous errors at scale.
A grandmother was jailed for six months after AI facial recognition software misidentified her as a suspect in a fraud case. In a less severe incident, an AI tool erroneously claimed that a police officer had transformed into a frog due to interference from background information.
Experts have warned that AI's greatest risks might lie in small failures at scale. On the user side, a general sense that the technology isn't living up to the hype has begun to set in.
At the same time, AI's impact on public resources has become a controversy in its own right, particularly regarding the once-localized effects of data centers.
As facilities began springing up around the country, their intense energy demand caused electric bills nationwide to double or triple, shifting the costs of AI growth to ratepayers.
What's being done about it?
On April 1, Anthropic Claude Code creator Boris Cherney (@bcherny) addressed the leak on X.
"Mistakes happen," he began. "In this case, there was a manual deploy step that should have been better automated. Our team has made a few improvements to the automation for next time, a couple more on the way."
As for data centers, communities across the country have come together to successfully object to their construction.