Anthropic Sues Pentagon and Trump Administration Over “Supply Chain Risk” Designation — Calls Actions “Unprecedented and Unlawful”

Anthropic filed two separate lawsuits on March 9, 2026 — one in federal court in California and another in the federal appeals court in Washington, D.C. — challenging the Pentagon’s extraordinary decision to designate the artificial intelligence company a “supply chain risk” to national security. In a 48-page complaint filed in the U.S. District Court for the Northern District of California, Anthropic called the government’s actions “unprecedented and unlawful,” arguing that no federal statute authorizes the Pentagon or President Trump to blacklist a company over a policy disagreement about how its technology may be used. 

The dispute centers on Anthropic’s refusal to allow its Claude AI model to be deployed for mass domestic surveillance or fully autonomous weapons — two uses the company says its technology cannot safely and reliably support. This is not a consumer class action. It is a constitutional and administrative law challenge brought by Anthropic as a private company seeking to reverse a government designation it says violates the First Amendment and exceeds executive authority.

Quick Facts

  • Plaintiff: Anthropic PBC (maker of Claude AI)
  • Defendants: U.S. Department of Defense; Secretary of Defense Pete Hegseth; U.S. Department of the Treasury; U.S. Department of State; General Services Administration; and more than a dozen other federal agencies and officials
  • Case 1: U.S. District Court for the Northern District of California
  • Case 2: U.S. Court of Appeals for the D.C. Circuit
  • Date Filed: March 9, 2026
  • Legal Basis: First Amendment retaliation; Administrative Procedure Act; statutory authority limits on supply chain risk designations
  • What Triggered the Lawsuit: Pentagon formally designated Anthropic a “supply chain risk” after contract negotiations broke down over Anthropic’s refusal to allow “all lawful uses” of Claude — including mass surveillance and autonomous weapons
  • Relief Sought: Vacate the supply chain risk designation; block its enforcement; require federal agencies to withdraw directives to stop using Anthropic’s technology; declaratory judgment that the actions are unlawful
  • Revenue at Stake: Hundreds of millions of dollars, according to the complaint; Anthropic projects $14 billion in total revenue for 2026
  • Pentagon Response: Declined to comment, citing pending litigation
  • White House Response: Said the president “will never allow a radical left, woke company to jeopardize our national security”

Timeline: How a Contract Dispute Became a Federal Lawsuit in Two Weeks

Understanding this case requires understanding the extraordinary sequence of events that preceded it:

  • Before February 2026: Anthropic served as a key AI partner across multiple U.S. government agencies. Claude was the only AI model authorized for use on classified networks. The military was using Claude in its operations, including processing intelligence.
  • February 24, 2026: Anthropic CEO Dario Amodei met with Defense Secretary Pete Hegseth, but the two failed to reach an agreement. The dispute: Anthropic wanted written contractual assurances that Claude would not be used for mass domestic surveillance of U.S. citizens or for fully autonomous weapons. The Pentagon wanted Claude available for “all lawful purposes” with no vendor-imposed restrictions.
  • February 27, 2026: President Trump ordered all federal agencies to “immediately cease” all use of Anthropic’s technology. Trump posted on Truth Social: “WE will decide the fate of our Country—NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about.”
  • Late February – Early March 2026: Just hours after the Trump administration’s order, OpenAI struck a deal with the Pentagon, seemingly agreeing to provide its models without the contractual limitations Anthropic had insisted upon. The move drew sharp criticism, with many questioning whether OpenAI’s contract language offered meaningfully different protections. OpenAI later acknowledged the announcement looked “sloppy and opportunistic” and said it was renegotiating some terms.
  • March 5, 2026: Anthropic was officially designated a supply chain risk — an extraordinary move that has historically been reserved for foreign adversaries — requiring defense vendors and contractors to certify they do not use Anthropic’s models in their work with the Pentagon.
  • March 9, 2026: Anthropic filed two federal lawsuits. “Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation,” the lawsuit states.

What the Lawsuits Argue

Lawsuit 1 — Northern District of California: First Amendment Retaliation

The first lawsuit claims the designation punishes Anthropic for being outspoken about its views on AI policy, including its advocacy for safeguards against its technology being used for mass domestic surveillance or autonomous weapons. The Pentagon has a right to disagree and choose not to work with Anthropic, the company argues, but it cannot stigmatize the company as a security risk over protected speech.

“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here,” the filing states.

The complaint further argues that the challenged actions “inflict immediate and irreparable harm on Anthropic; on others whose speech will be chilled; on those benefiting from the economic value the company can continue to create; and on a global public that deserves robust dialogue and debate on what AI means for warfare and surveillance.”


Lawsuit 2 — D.C. Circuit Court of Appeals: Statutory Authority Challenge

A second, shorter lawsuit was filed in the D.C. Circuit Court of Appeals because another statute the government invoked can only be challenged there. Similar arguments are being made — that procurement laws passed by Congress do not give the Pentagon or President Trump the power to blacklist a company.

The “Scope Dispute” — How Broad Is the Blacklist?

A critical legal and practical question sits at the center of the case. Hegseth had said that the supply chain risk designation would mean that defense contractors would have to sever all commercial ties to Anthropic — something most legal experts have said is not supported by the statute on supply chain risks.

Anthropic has sought to convince businesses and other government agencies that the Trump administration’s penalty is a narrow one that only affects military contractors when they are using Claude in work for the Department of Defense — noting that most of its projected $14 billion in revenue this year comes from businesses and government agencies using Claude for computer coding and other tasks unrelated to defense work.

Companies including Microsoft and Google have said they will be able to continue non-defense related work with Anthropic.

The Two Red Lines at the Heart of the Dispute

The entire legal battle traces back to two specific restrictions Anthropic refused to remove from its Pentagon contract:

Red Line 1 — Mass Domestic Surveillance: Anthropic demanded contractual assurance that Claude would not be used to conduct mass surveillance of U.S. citizens.

Red Line 2 — Autonomous Weapons: Anthropic demanded that Claude not be used for fully autonomous weapons systems — where lethal decisions are made by AI without meaningful human control.

The Pentagon argued it could not allow a private company to dictate how it uses its tools in a national security emergency, though it previously claimed it has no interest in using AI for mass surveillance of U.S. citizens or autonomous weapons. The core disagreement was not about whether these uses were desirable — it was about whether Anthropic could legally enforce those restrictions in a government contract.

In a blog post explaining the company’s decision, Amodei said AI cannot currently be used reliably and safely for mass surveillance or autonomous weapons.

The Pentagon’s Position

The government’s counterargument is straightforward. Pentagon officials say this has always been about the military’s ability to use technology legally, without a vendor inserting itself into the chain of command and putting warfighters at risk. Allowing a private technology company to unilaterally restrict how the military uses a tool it has contracted for — especially during active operations — the Pentagon argues, is operationally and legally untenable.

The military has reportedly been using Claude in its current operations involving Iran to help process intelligence and targeting data. The Pentagon indicated it would phase out Claude over six months to avoid disrupting critical operations.

Legal Expert Analysis

The legal community has weighed in quickly — and largely skeptically of the government’s position.

In an article in the nonprofit publication Lawfare, lawyers Michael Endrias and Alan Z. Rozenshtein argued the designation “exceeds what the statute authorizes,” that “the required findings don’t hold up,” and that Hegseth’s own public statements “may have doomed the government’s litigation posture before it even begins.”

One of the law firms representing Anthropic, WilmerHale, is the same firm Trump tried and failed to suppress through an executive order that was ruled unconstitutional in May 2025. The firm’s involvement adds a striking dimension to an already historically significant case.

What This Means for the AI Industry

The implications of this case extend far beyond Anthropic and Claude.

Anthropic previously said the supply chain risk designation “sets a dangerous precedent for any American company that negotiates with the government.” If the government prevails, it would establish that the executive branch can effectively destroy a technology company’s commercial relationships not for any national security violation, but for holding a policy position the administration disagrees with.

The company says its two lawsuits are not meant to force the government to work with Anthropic, but to prevent officials from blacklisting companies over policy disagreements.

The case carries direct parallels to the American Academy of Pediatrics lawsuit — where a federal court issued a preliminary injunction ordering HHS to restore $12 million in terminated grants after finding the terminations amounted to First Amendment retaliation against an organization that publicly criticized administration policy. That case, still in active litigation, established that courts are willing to intervene when the government appears to use its contracting power to punish protected speech.

The Anthropic case also intersects with the broader legal debate about AI’s role in warfare and civil liberties — a debate that the Character AI lawsuit and similar cases are already forcing courts to grapple with, as judges begin making foundational determinations about when and how AI companies can be held accountable — or protected — under the Constitution.

What Happens Next

  • Immediate: Courts will consider Anthropic’s request for a stay — a temporary court order blocking the supply chain risk designation from being enforced while the lawsuit proceeds. If granted, defense contractors could continue working with Anthropic.
  • Short term: The government must respond to the complaints. Expect motions to dismiss on jurisdictional and standing grounds.
  • Medium term: If the stay is denied and the designation remains in force, Anthropic’s contracts with the federal government — already being canceled — will continue to unwind.
  • Ongoing: Even amid the supply chain risk designation and federal lawsuits, the two sides are still reportedly in talks and seeking a path forward for Claude in the military. A negotiated resolution remains possible.

FAQs

Why is Anthropic suing the Pentagon?

Anthropic is suing to reverse the Pentagon’s decision designating the AI company a “supply chain risk” over its refusal to allow unrestricted military use of its technology. The company argues the designation is an unconstitutional act of retaliation for protected speech and exceeds the government’s statutory authority.

What is a “supply chain risk” designation?

The supply chain risk designation is an extraordinary measure historically reserved for foreign adversaries. It requires defense vendors and contractors to certify they do not use the designated company’s models in their work with the Pentagon. Anthropic is the first American company to receive this designation.

What were Anthropic’s two red lines?

Anthropic refused to allow Claude to be used for mass domestic surveillance of U.S. citizens or for fully autonomous weapons systems. The Pentagon wanted these restrictions removed and insisted Claude be available for “all lawful purposes.”

Did OpenAI take Anthropic’s place?

OpenAI struck a deal with the Pentagon just hours after the Trump administration’s order against Anthropic. However, the announcement quickly drew criticism, with many questioning whether OpenAI’s contract language offered meaningfully different protections. OpenAI later acknowledged the announcement looked “sloppy and opportunistic” and said it was renegotiating some terms.

How much revenue could Anthropic lose?

Most of Anthropic’s projected $14 billion in revenue for 2026 comes from businesses and government agencies using Claude for coding and other tasks. The complaint says the actions could jeopardize “hundreds of millions of dollars” in revenue.

Can Anthropic still work with non-defense companies?

Companies including Microsoft and Google have said they will be able to continue non-defense related work with Anthropic. Anthropic argues the designation applies narrowly to work done directly for the Department of Defense — not all commercial relationships.

Where are the lawsuits filed?

Anthropic filed one lawsuit in federal court in California and a second in the federal appeals court in Washington, D.C., each challenging different aspects of the Pentagon’s actions.

Is a settlement still possible?

Even amid the supply chain risk designation and federal lawsuits, the two sides are still reportedly in talks and seeking a path forward for Claude in the military. A negotiated resolution that satisfies both parties’ core concerns — if one can be reached — would likely moot the litigation.

What is the White House’s position?

The White House stated it “will never allow a radical left, woke company to jeopardize our national security by dictating how the greatest and most powerful military in the world operates.” The Pentagon has declined to comment on the pending litigation.

Last Updated: March 9, 2026

Disclaimer: This article is for informational purposes only and does not constitute legal advice. Legal claims and outcomes depend on specific facts and applicable law. For advice regarding a particular situation, consult a qualified attorney.

About the Author

Sarah Klein, JD

Sarah Klein, JD, is a licensed attorney and legal content strategist with over 12 years of experience across civil, criminal, family, and regulatory law. At All About Lawyer, she covers a wide range of legal topics — from high-profile lawsuits and courtroom stories to state traffic laws and everyday legal questions — all with a focus on accuracy, clarity, and public understanding.
Her writing blends real legal insight with plain-English explanations, helping readers stay informed and legally aware.