Washington — President Trump announced Friday that he is ordering all federal agencies to “immediately” stop using Anthropic’s artificial intelligence technology, as the company neared a Pentagon deadline to drop its push for guardrails over the military’s use of its AI.
“I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology,” Mr. Trump wrote on Truth Social. “We don’t need it, we don’t want it, and will not do business with them again!”
The president said he will give certain agencies that use Anthropic's technology, such as the Department of Defense, six months to phase out its products, and he threatened to take additional action against the company if it does not assist during that period.
“Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow,” he wrote.
Mr. Trump attacked the company as a “Radical Left AI company run by people who have no idea what the real World is all about.”
About an hour and a half after Mr. Trump’s Truth Social post, Defense Secretary Pete Hegseth followed through on his promise to designate Anthropic a supply chain risk.
“I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” Hegseth wrote. “Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service. America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.”
In a statement Friday in response to Hegseth’s designation, Anthropic said it would “challenge any supply chain risk designation in court.”
“Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments,” the company said.
It argued such a designation “would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.” The company wrote that Hegseth doesn’t have the legal authority to ban military contractors from doing business with Anthropic, since a risk designation would only apply to contractors’ work with the Pentagon.
The Defense Department and Anthropic have been at odds over the military’s use of its AI model, Claude, and the safeguards that the company, led by CEO Dario Amodei, wants in place over the use of the technology. The Pentagon has insisted Anthropic agree to give the military unrestricted access to its AI model, and Hegseth had set a deadline of 5:01 p.m. Friday for it to agree to drop its guardrails.
Chief Pentagon spokesman Sean Parnell said Thursday that the department is seeking to use Anthropic’s AI model “for all lawful purposes.”
The Pentagon had also threatened to invoke the Defense Production Act to force the removal of the company’s standards, Amodei said. Two sources told CBS News the Pentagon argues that a contractor that believes it has a say in government policy decisions cannot be relied upon to work with other U.S. partners and contractors.
Anthropic was awarded a $200 million contract from the Pentagon last July to develop AI capabilities that would advance national security. It is currently the only AI company with its model deployed on the Pentagon’s classified networks through a partnership with data analytics company Palantir. But a senior Pentagon official told CBS News that Grok, the model owned by Elon Musk’s xAI, could be used in a classified setting.
Anthropic had asked the Defense Department to agree to certain limits on the use of its model, including a restriction against using Claude to conduct mass surveillance of Americans, sources told CBS News.
The company also sought to ensure Claude isn't used by the Pentagon to make final targeting decisions in military operations without any human involvement, one source familiar with the matter said. Claude is not immune to hallucinations and, absent human judgment, is not reliable enough to avoid potentially lethal mistakes such as unintended escalation or mission failure, the source said.
In an interview with CBS News on Thursday, the Pentagon’s chief technology officer, Emil Michael, said the military “made some very good concessions” in order to reach a deal with Anthropic. The Pentagon offered to “put it in writing that we’re specifically acknowledging” federal laws that restrict the military from surveilling Americans, he said, and offered language “specifically acknowledging these policies that have been in place for years at the Pentagon regarding autonomous weapons.”
“At some level, you have to trust your military to do the right thing,” Michael said.
But an Anthropic spokesperson said Thursday that the new contract language it received from the Pentagon “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”
“New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will,” the company said.
In his own statement Thursday, Amodei said the threats from the Defense Department would do nothing to change the company's position on the need for guardrails around the use of its AI systems.
“Our strong preference is to continue to serve the Department and our warfighters — with our two requested safeguards in place,” he said. “Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.”
Mr. Trump’s announcement targeting Anthropic was met with pushback from Democratic Sen. Mark Warner of Virginia, the vice chair of the Senate Intelligence Committee. Warner accused the president and Hegseth of “bullying” the company to deploy “AI-driven weapons without safeguards,” and said it should “scare the hell out of all of us.”
“The president’s directive to halt the use of a leading American AI company across the federal government, combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations,” he said in a statement.
