Following a weeks-long standoff over whether Anthropic would allow the United States military to use its artificial-intelligence chatbot Claude for "all lawful use," Anthropic's CEO said that his company was still having conversations with Pentagon officials, CBS News reported.
Speaking at a conference in San Francisco, Dario Amodei, the Anthropic CEO, said that his company and the U.S. military "have much more in common than we have differences," per CBS News.
What's happening?
While the U.S. military has been using a variety of AI tools, Anthropic's Claude was the only chatbot approved to work on classified materials. However, Anthropic and top military brass recently reached an impasse over whether Claude could be used in certain applications, including autonomous weapon systems and surveillance of Americans, Axios reported.
According to Amodei, Anthropic tried to draw red lines over how the Pentagon could use its technology.
"We believe that crossing those lines is contrary to American values, and we wanted to stand up for American values," he said, per CBS News.
In response, the current administration moved to cancel all government contracts with Anthropic.
Anthropic's competitors sought to seize the opportunity, with xAI reaching a deal for the Pentagon to use its Grok chatbot for "all lawful use." The New York Times reported that OpenAI and Google were also engaged in talks with the military.
Amodei's comments suggested that Anthropic still had hopes of reaching common ground with the Pentagon. Experts have said that completely unwinding Claude's use within sensitive systems would be a complicated and costly endeavor.
Why is it important?
The situation has cast a spotlight on the military's use of AI technology, with many observers raising concerns about the potential dangers of involving AI in what are often split-second, life-or-death decisions.
"While military AI is intended to increase precision, efficiency, and reduce risk to personnel and civilians alike, it introduces uncharted risks into military operations," warned the non-profit organization Diplo.
Experts have cautioned that the black-box nature of AI decision-making and the bias inherent in all AI systems could lead to unintended consequences.
Aside from military applications, AI comes with other costs as well. For example, the energy-hungry data centers that power AI models have placed a growing strain on America's aging electrical grid. The result has been skyrocketing electricity prices for everyday consumers.
What's being done about it?
As artificial intelligence spreads into nearly every aspect of modern life, it is important to have open conversations about its potential strengths and weaknesses. This is particularly important in military applications, where human lives are often at stake.
By standing its ground and seeking to establish guardrails around Claude's potential military uses, Anthropic has set a precedent for other companies to follow when placing limits on the manner in which governments use their technology.
Consumers seem to have appreciated Anthropic's stance: Claude claimed the top spot among the most downloaded iPhone apps on Apple's App Store in the midst of the standoff, per Scripps News.