Anthropic Sues Pentagon Over Who Controls AI Safety Rules
Anthropic sued the Pentagon over its 'supply chain risk' designation, setting up a fight over who controls AI safety rules in military deployments.
Anthropic filed two federal lawsuits on Monday challenging the Pentagon’s decision to designate it a national security “supply chain risk,” setting up a legal battle that reaches well beyond one company’s revenue. The case cuts to the central unresolved question of the AI era: when a company builds a powerful model and a government deploys it, who has the final word on what that model can be used for?
The answer will shape every defense AI contract written after this one, every safety restriction any AI lab ever considers imposing, and the relationship between the AI industry and the state for a generation.
How a Contract Dispute Became a Constitutional Fight
The dispute between Anthropic and the Pentagon has been building since late 2025, when the two sides entered increasingly contentious negotiations over the terms under which Claude could be used in military operations. Anthropic had been an aggressive early partner in the government’s push to modernize its intelligence and operational systems with AI. The company signed contracts worth hundreds of millions of dollars across federal agencies and positioned itself as the safety-first alternative in a field where “moving fast” often wins government deals.
The breakdown began when the Pentagon demanded that Anthropic agree to allow Claude to be deployed for “all lawful uses.” That phrase sounds innocuous. In practice it meant removing two specific guardrails Anthropic had written into its usage policies: a prohibition on using Claude to direct autonomous weapons systems, and a prohibition on using it for mass domestic surveillance of American citizens.
Anthropic refused. CEO Dario Amodei has been explicit about his reasoning: the current generation of AI models is not accurate enough to be trusted with autonomous targeting decisions. Errors in intelligence processing for combat operations are not recoverable the way a wrong chatbot answer is. The company’s position was that its own safety assessment, not a blanket “all lawful uses” contract clause, should govern these applications.
The Pentagon’s response escalated faster than most observers expected. In late February, President Trump directed all federal agencies to immediately cease using Anthropic’s technology. Then, last Thursday, Defense Secretary Pete Hegseth formally designated Anthropic a supply chain risk, a classification never before applied to a domestic American AI company. The designation has historically been reserved for foreign firms with documented ties to adversary governments.
The Supply Chain Risk Designation: What It Actually Does
The formal designation creates a cascading problem for Anthropic’s business. Defense vendors and contractors are now required to certify they are not using Claude in any work they perform for the Pentagon. As of this morning, Anthropic said contracts are already being canceled. The company estimates that hundreds of millions of dollars in near-term revenue are at risk.
Legal experts have been skeptical of the designation’s durability from the moment it was announced. In an analysis published by Lawfare, attorneys Michael Endrias and Alan Z. Rozenshtein argued that the designation “exceeds what the statute authorizes,” that the required factual findings do not hold up to scrutiny, and that Hegseth’s own public statements undermined the government’s legal position before litigation even began. Their sharpest observation: “The government cannot simultaneously claim a vendor poses an acute supply-chain threat requiring emergency exclusion and that it’s perfectly safe to keep using the vendor for half a year,” referring to Hegseth’s concurrent announcement of a six-month phase-out period for Claude in active military use.
Anthropic has also clarified the designation’s technical scope. The company says the formal letter it received from the Pentagon specifies the restriction applies only to use of Claude “as a direct part of contracts with the Department of War.” Commercial uses of Claude unrelated to defense contracts fall outside the designation’s language. Amazon Web Services confirmed this reading, stating that the blacklisting does not affect AWS customers using Claude for non-defense work.
That narrower reading does not eliminate the reputational and business damage. The designation signals to the broader federal government that Anthropic is disfavored, and Trump’s public directive to all agencies to stop using its technology has force beyond the formal legal designation.
Two Lawsuits, One Core Question
Anthropic filed its challenge in two venues simultaneously. The first lawsuit landed in the U.S. District Court for the Northern District of California, asking a judge to vacate the supply chain risk designation entirely and bar federal agencies from enforcing it. The company also filed a petition for formal review in the U.S. Court of Appeals for the D.C. Circuit.
The California filing frames the case in constitutional terms. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” Anthropic wrote in the complaint. The argument is that Anthropic’s usage policies, the specific guardrails the Pentagon wanted removed, constitute protected expression. Punishing the company for refusing to retract those policies is, in Anthropic’s framing, government retaliation for protected First Amendment activity.
The company is also seeking a temporary restraining order to preserve its ability to continue government sales while the case proceeds. It has proposed an aggressive schedule: government response by Wednesday night, a hearing before a judge by Friday.
Anthropic officials have been careful to say the lawsuit does not preclude renewed negotiations or a settlement. The company says it does not want to be in an adversarial posture with the U.S. government. The Pentagon, for its part, said it does not comment on active litigation.
The OpenAI Contrast and What It Reveals
The timing of OpenAI’s Pentagon contract makes the Anthropic dispute sharper. Within hours of Anthropic and the Pentagon reaching an impasse in negotiations, OpenAI signed its own agreement with the Department of Defense, one that allows the military to use its models for “all lawful purposes.”
OpenAI’s position on what that phrase means differs from the Pentagon’s interpretation in the Anthropic dispute. CEO Sam Altman stated publicly that OpenAI maintains its own safety principles, including a prohibition on domestic mass surveillance and a commitment to “human responsibility for the use of force, including for autonomous weapon systems.” The OpenAI contract reportedly includes language specifying that the AI system “will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control.”
Critics note the gap between Anthropic’s rejected position and OpenAI’s accepted one is narrower than the Pentagon’s behavior suggests. The practical difference may be that OpenAI agreed to a formulation that defers to existing law and Department policy, rather than writing independent restrictions into the contract itself. That framing gives the government more latitude to interpret the boundaries of acceptable use, whereas Anthropic’s guardrails were written as hard prohibitions.
The contrast creates a strategic dilemma for AI labs watching this case. Accepting “all lawful uses” language keeps the government relationship intact but cedes control of the safety question to lawyers arguing over what is lawful. Writing hard prohibitions into contracts keeps control with the lab but, as Anthropic has now learned, invites exactly this kind of confrontation.
What Hangs in the Balance for AI Development
The stakes of this case extend well beyond Anthropic’s balance sheet. If the courts uphold the Pentagon’s designation, it establishes that the U.S. government can effectively blacklist any AI company that refuses to remove safety restrictions from its models. That creates a powerful precedent for future negotiations: AI labs that impose meaningful restrictions on military applications do so at the risk of losing all government revenue.
The chilling effect on AI safety research and policy is significant. Labs that believe autonomous weapons targeting is genuinely premature technology, as Amodei has argued, face a choice between accepting applications they consider unsafe and losing access to the largest single buyer of AI services in the world.
If Anthropic wins, it establishes that AI companies have legally protected authority to set terms on how their models are used, even against government buyers. That outcome would strengthen the hand of every AI lab in future contract negotiations and clarify that usage policy is protected expression, not just commercial preference.
A group of OpenAI and Google employees filed an amicus brief in support of Anthropic on Monday, a rare instance of employees from competing labs publicly backing a rival in litigation. The brief signals that workers across the AI industry view the outcome as relevant to their own companies, not merely to Anthropic’s.
The hearing expected by Friday will offer the first indication of how courts are reading the government’s legal authority over AI procurement. Whatever the judge decides, the question at the center of this case, who controls the guardrails on AI deployed in warfare, is not going away.
For previous coverage of how this dispute escalated from a contract disagreement to a federal ban, see our earlier report on the Anthropic Streisand Effect. For analysis of the OpenAI contract that now sits at the center of the comparison, see OpenAI’s amended Pentagon contract.