U.S. Secretary of Defense Pete Hegseth has reportedly threatened to remove Anthropic from the Pentagon's supply chain if the AI firm does not allow its technology to be used across the full spectrum of military applications.
The ultimatum was allegedly delivered on Tuesday at a Pentagon meeting that Hegseth had requested with Anthropic CEO Dario Amodei, according to a source familiar with the discussions who spoke to the BBC.
In a statement, Anthropic said, “We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”
A senior Pentagon official indicated Anthropic has until Friday evening to comply with the demand.
While the BBC’s source characterized the tone of the Hegseth-Amodei discussion as cordial, Amodei reportedly outlined what Anthropic considers to be its “red lines.”
These include involvement in autonomous kinetic operations in which AI tools make final military targeting decisions without human intervention.
The source added that the use of Anthropic tools for mass domestic surveillance also constitutes a prohibited application.
However, the Pentagon official told the BBC that the current impasse between the agency and Anthropic is unrelated to the deployment of autonomous weapons or mass surveillance.
According to the official, if Anthropic fails to comply, Hegseth would ensure the Defense Production Act was invoked against the company.
Such a measure could compel Anthropic executives to grant the Pentagon unrestricted access to its AI for national security purposes.
The official further stated that the Pentagon would simultaneously designate Anthropic as a supply chain risk.
An Anthropic spokesperson noted that Amodei “expressed appreciation for the Department’s work and thanked the Secretary for his service” during his meeting with Hegseth.
Anthropic, the creator of the Claude AI chatbot, was among four AI companies awarded contracts with the Pentagon last summer.
Google, OpenAI (maker of ChatGPT), and Elon Musk’s xAI (creator of the Grok chatbot) also received contracts, each potentially worth up to $200 million.
Defense Department official Emil Michael previously stated the agency expects OpenAI, Google, xAI, and Anthropic to allow the Pentagon to “be able to use any model for all lawful use cases.”
Anthropic has consistently sought to position itself as taking a more safety-conscious approach to AI research than its competitors.
The company regularly publishes safety reports on its own products.
A report from last year acknowledged that its AI technology had been “weaponized” by hackers who used it to conduct sophisticated cyber-attacks.
The company’s image faced scrutiny following reports that the U.S. military used its Claude AI model during the operation leading to the capture of former Venezuelan President Nicolás Maduro in January.
Anthropic was the first tech company approved to operate within the Pentagon’s classified military networks and maintains partnerships with companies such as Palantir.
Sources have indicated to the BBC that the Claude model was utilized in the Maduro operation through a contract with Palantir.
The Pentagon’s stance is that Anthropic should not have the authority to dictate how the department uses its products.
Observers suggest the current dispute between Anthropic and the Pentagon stems from a breakdown of trust between the two entities.
“They need to get to a resolution,” stated Emelia Probasco, Senior Fellow at Georgetown University’s Center for Security and Emerging Technology.
“In my opinion, we should be giving the people we ask to serve every possible advantage. We owe it to them to figure this out,” Probasco concluded.
