"Retaliatory and punitive": Anthropic CEO Dario Amodei on Pentagon's decision to label company supply chain risk

Feb 28, 2026

Washington DC [US], February 28: Anthropic AI CEO Dario Amodei on Saturday pushed back against the Pentagon's decision to label the company a supply chain risk, calling it "retaliatory and punitive" in an interview with CBS.
He told CBS, "This is unprecedented and never happened before with an American company and it was made very clear in some of their statements and language that this was retaliatory and punitive. I don't know what else to call it."
This move comes after Anthropic refused to grant the military unrestricted access to its AI model, citing concerns over mass domestic surveillance and fully autonomous weapons.
The Pentagon's designation restricts military contractors from working with Anthropic, effectively blocking the company's $200 million contract with the government. Amodei argues that the decision is contradictory, as the Pentagon claims Anthropic's AI is essential to national defence while also labelling it a security risk.
Anthropic has vowed to challenge the designation in court, with support from industry leaders like OpenAI's Sam Altman. The standoff highlights the growing tension between tech companies and the government over AI regulation and usage.
On Friday, the face-off between Dario Amodei-led Anthropic AI and the US administration intensified. First, Secretary of War Pete Hegseth launched into a diatribe against Anthropic, accusing the company of duplicity; then it was Anthropic's turn. Amodei's company, in its statement, called the US administration's decision to label it a supply chain risk legally unsound.
"Earlier today, Secretary of War Pete Hegseth shared on X that he is directing the Department of War to designate Anthropic a supply chain risk. This action follows months of negotiations that reached an impasse over two exceptions we requested to the lawful use of our AI model, Claude: the mass domestic surveillance of Americans and fully autonomous weapons. We have tried in good faith to reach an agreement with the Department of War, making clear that we support all lawful uses of AI for national security aside from the two narrow exceptions above. To the best of our knowledge, these exceptions have not affected a single government mission to date," Anthropic said in its statement.
"We believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights. Designating Anthropic as a supply chain risk would be an unprecedented action--one historically reserved for US adversaries, never before publicly applied to an American company...We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government. No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons."
The company further alleged that Hegseth does not have the authority to designate Anthropic a supply chain risk.
"Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts--it cannot affect how contractors use Claude to serve other customers," it said.
Earlier, after President Trump ordered all federal agencies to stop using Anthropic AI, Hegseth directed the Department of War to designate Anthropic a supply chain risk.
"This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon. Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every Lawful purpose in defence of the Republic. Instead, Anthropic AI and its CEO Dario Amodei have chosen duplicity. Cloaked in the sanctimonious rhetoric of "effective altruism," they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signalling that places Silicon Valley ideology above American lives," he posted on X.
"In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War with its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service. America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final," he added.
Earlier, Amodei, in a statement on Thursday, said that the firm would not support certain uses of AI, including mass domestic surveillance and fully autonomous weapons, citing concerns about democratic values and the current reliability of frontier AI systems.
Amodei stated that despite pressure from the Department of War to agree to "any lawful use" of its technology and remove specific safeguards, the company would not change its position.