Pentagon pressures Anthropic in escalating AI showdown
February 27, 2026
The US government says it will pull AI startup Anthropic from Pentagon supply chains and tear up its agreements with the company as the row between the two escalates sharply.
US Under Secretary of Defense Emil Michael strongly rebuked Anthropic CEO Dario Amodei on Thursday, calling him a "liar" with a "God-complex." He said Amodei "wants nothing more than to try to personally control the US military and is ok putting our nation's safety at risk."
The row follows a meeting earlier this week between Amodei and US Defense Secretary Pete Hegseth, where — according to sources familiar with the meeting — Hegseth told Anthropic it had until Friday (February 27) to give the US military full access to its Claude model.
If access is not given, Hegseth threatened to cut the company from government supply chains, or possibly compel it to prioritize government orders, according to the sources.
Anthropic has so far refused to give Washington complete access to its models for classified military use, including for potentially lethal missions carried out without human control and for domestic mass surveillance.
In a statement on Thursday, Amodei said, "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner. However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values."
While Anthropic's Claude model is currently used by the Pentagon, Amodei says the company maintains red lines around mass surveillance and fully autonomous weapons, arguing that such capabilities "need to be deployed with proper guardrails, which don't exist today."
It's the latest example of Washington's strong-arm tactics in the corporate sector, and it shows how control over AI models is becoming a new battleground for the Trump administration. The dispute is also likely to trigger a legal battle between the government and Anthropic.
What exactly has Hegseth threatened and why?
Hegseth called Anthropic CEO Dario Amodei to Washington for a meeting on Tuesday. An Anthropic spokesperson confirmed the meeting took place and told DW:
"We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do."
However, sources familiar with the talks said Hegseth made two direct threats to Amodei if Anthropic did not comply.
The first was to cut the company out of the Pentagon's supply chain. The second was to invoke the Defense Production Act, a Cold War-era law that gives the US president the power to direct domestic industry in the interest of national defense.
Hegseth wants the Pentagon to have unrestricted access to Anthropic's generative AI chatbot Claude, but Anthropic, which has long billed itself as a safety-oriented AI company, has been consistently opposed.
The company is firmly against its Claude technology being used in operations where final military targeting decisions are taken without human intervention, or for mass surveillance within the United States.
"To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date," Amodei said in his February 26 statement.
"Anthropic views these things as being not in humanity's long-term best interest, at least at the current level of technology and safety guardrails that exist, whereas the Pentagon is pushing to have any lawful use that it wants," Geoffrey Gertz, senior fellow at the Center for a New American Security, told DW.
He says talk of invoking the Defense Production Act would be an attempt to exert control over an AI company in an "unprecedented" way, and he is concerned that it could thwart Anthropic's development.
"There's a big worry that the government ends up taking actions that hurt Anthropic's ability to continue to be at the forefront of responsible AI," he said. "Actions that are trying to curtail Anthropic's potential markets, I think, could be very harmful and really backfire on what the administration is trying to do on AI policy."
What has been the relationship between Anthropic and the US military?
Since November 2024, Anthropic has been providing the Claude model to US intelligence and defense agencies.
According to the Wall Street Journal, the US military used Claude during the 2026 raid on Venezuela which resulted in the capture of Nicolas Maduro. Neither Anthropic nor the US defense department commented on the claims, and it is not clear precisely how the AI system was used in the raid.
The threat by Hegseth to remove Anthropic from Pentagon supply chains would have a financial impact on the company.
In July 2025, the US Department of Defense awarded Anthropic a $200 million contract to "prototype frontier AI capabilities that advance US national security."
Anthropic hailed the arrangement, with Thiyagu Ramasamy, the company's head of public sector, saying it opened "a new chapter in Anthropic's commitment to supporting US national security."
However, at the time, it also emphasized its commitment to "responsible AI deployment."
"At the heart of this work lies our conviction that the most powerful technologies carry the greatest responsibility," it said in a statement. "We're building AI systems to be reliable, interpretable, and steerable precisely because we recognize that in government contexts, where decisions affect millions and stakes couldn't be higher, these qualities are essential."
Is Anthropic as safety-oriented as it says?
Anthropic was founded in 2021 by seven former employees of OpenAI. According to CEO Dario Amodei, it was built "on a simple principle: AI should be a force for human progress, not peril."
However, despite the row with the Pentagon, there are signs that Anthropic is reconsidering that commitment in pursuit of commercial ambitions.
On Tuesday (February 24), the same day as the Hegseth meeting, the company announced it was softening its core safety policy to remain competitive with other leading AI models.
"The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level," Anthropic said in a blog post announcing the changes.
Anthropic faces intense competition from AI rivals such as OpenAI and Google, and is making the pivot in response to what it sees as a regulatory vacuum: the Trump administration has resisted AI regulation at both the state and federal levels.
The spokesperson for Anthropic told DW that the policy shift was unrelated to the Pentagon negotiations.
What ethical questions are at stake?
If Anthropic submits to Hegseth's demands, or if the defense department were to take control of Anthropic by invoking the Defense Production Act, it would inevitably lead to accusations that the company's AI was no longer being used with a safety-first mindset.
The issue also shines a light on the Trump administration's strong willingness to directly intervene in corporate decision-making and in sectors it deems of critical importance.
In August 2025, the Trump administration announced it had made an $8.9 billion investment in Intel, part of a series of moves to directly intervene in US chipmaking.
It has also intervened directly in the rare-earth sector, making major investments in firms such as Vulcan Elements, MP Materials and USA Rare Earth.
Gertz says that corporations and CEOs are "navigating a new landscape" when it comes to the Trump administration's more interventionist policy.
"This is an outlier," he said. "This is a big shift from a traditional US approach of more hands-off, let the private sector develop."
Edited by: Ashutosh Pandey
Editor's note: The article, originally published on February 25, has been updated to reflect the escalation of the dispute between Anthropic and the Trump administration, with a statement from Anthropic CEO Dario Amodei and a comment by US Under Secretary of Defense Emil Michael.