Federal Court Grants Preliminary Injunction Blocking Government-Wide Ban on AI Developer Anthropic, Finding Likely First Amendment Retaliation, Due Process Violations, and Arbitrary Agency Action

Introduction

Anthropic PBC, the developer of the AI model Claude, secured a preliminary injunction blocking the federal government from enforcing a sweeping ban on the company's technology across all federal agencies and the defense industrial base.

The dispute originated in contract negotiations between Anthropic and the Department of War (DoW). DoW sought unrestricted access to Claude for "all lawful uses," arguing that existing military policies and laws already governed which applications were permissible. Anthropic agreed to remove most usage restrictions but insisted on retaining two contractual guardrails: that Claude not be used for mass surveillance of Americans or for lethal autonomous warfare. Anthropic stated these restrictions reflected the current limitations of the technology — that Claude had not been developed or tested for safe use in those applications — and that removing them would "undercut Anthropic's core identity" as an AI safety-focused company.

Anthropic's CEO, Dario Amodei, told DoW leadership that if Anthropic's position meant it was not the right vendor, the company would respect that decision and facilitate an orderly offboarding. The negotiations remained, by all accounts, "cordial and amicable."

The Government's Response

After Anthropic publicly stated its position on the guardrails, three things happened in rapid succession.

The Presidential Directive (February 27, 2026). President Trump posted on Truth Social directing "EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology," calling the company a "radical left, woke company" run by "leftwing nut jobs" who had made a "DISASTROUS MISTAKE trying to STRONG-ARM the Department of War." The directive was permanent, with a six-month phase-out for agencies currently using Claude.

The Hegseth Directive (February 27, 2026). Secretary of War Pete Hegseth posted on X that Anthropic had delivered "a master class in arrogance and betrayal," accused the company of "sanctimonious rhetoric" and "corporate virtue-signaling," and ordered that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." He further directed DoW to designate Anthropic a supply chain risk. At oral argument, the government conceded that Secretary Hegseth had no statutory authority for this blanket prohibition and that it had "absolutely no legal effect at all." When asked why Hegseth made a public statement that had no legal effect and did not reflect DoW's immediate intent, government counsel stated, "I don't know."

The Supply Chain Risk Designation (March 3–4, 2026). Secretary Hegseth formally designated Anthropic a "supply chain risk" under 10 U.S.C. § 3252 — an authority Congress created in 2011 to address the risk that foreign intelligence agencies, terrorists, or hostile actors might sabotage national security information technology systems. The designation had never previously been applied to a domestic company.

Immediate Consequences

The combined effect was devastating. Within days:

  • GSA removed Anthropic from USAi.gov, its AI platform for federal agencies.

  • The Treasury Department, Federal Housing Finance Agency, Department of State, HHS, and the Department of Energy's Lawrence Livermore National Laboratory either terminated or announced plans to terminate their use of Claude.

  • Defense contractors began assessing and in many cases terminating their use of Claude-integrated APIs.

  • Anthropic received inquiries from over 100 enterprise customers "expressing deep fear, confusion, and doubt."

  • Major law firms issued client alerts advising government contractors to "audit their Anthropic exposure now" and "prepare to deploy alternatives."

  • Deals worth hundreds of millions of dollars were delayed from closing, prospective clients pulled out of negotiations, and some customers terminated contracts.

  • Anthropic's CFO projected revenue losses between hundreds of millions and multiple billions of dollars for 2026.

Small developers expressed uncertainty about whether they could continue using Claude Code or open source libraries containing code written by Claude. Industry trade associations questioned whether Claude-generated code already embedded in shipping products would be rejected by DoW.

The Court's Analysis

The court found Anthropic likely to succeed on three independent grounds.

First Amendment Retaliation

The court applied the three-part test for First Amendment retaliation claims: (1) constitutionally protected activity, (2) actions that would chill a person of ordinary firmness, and (3) a nexus between the protected activity and the adverse action.

On the first element, the court found that Anthropic's public advocacy on AI safety — including CEO Amodei's essays and public statements about the contracting dispute — constituted speech "at the heart of the First Amendment's protection" on "matters of public concern." On the second element, the record showed the challenged actions threatened to cripple the company, which easily satisfied the chilling standard. Amicus briefs from 37 AI professionals described a chilling effect on professional debate about AI risks.

On the critical third element — causation — the court found that the government's own records showed the punitive measures were motivated by Anthropic's public statements rather than merely its contracting position. The court noted:

  • Anthropic had imposed the same usage restrictions since DoW first began using Claude Gov in March 2025, without objection.

  • Anthropic had passed extensive national security vetting, including a Top Secret facility clearance and FedRAMP High authorization.

  • DoW had consistently praised Anthropic and expanded the relationship, including a $200 million agreement.

  • It was only when Anthropic publicly discussed its disagreement that Defendants criticized its "rhetoric" and "ideology" and adopted punitive measures.

The Michael Memo — the internal DoW document supporting the supply chain designation — stated that Anthropic's "risk level escalated" principally because it was "leveraging DoW's ongoing good faith negotiations for Anthropic's own public relations" and engaging in an "increasingly hostile manner through the press." The court called this "classic illegal First Amendment retaliation."

The government argued that Anthropic's contracting position (conduct), not its speech, drove the challenged actions. The court rejected this, noting the "but for" question was whether the government would have taken the same actions absent the protected speech — not whether it would have taken them if Anthropic had agreed to its terms. The court observed that if this were merely a contracting impasse, DoW would presumably have just stopped using Claude, rather than pursuing a permanent government-wide ban and private-sector blacklist.

Procedural Due Process

The court found that Anthropic received no meaningful notice or opportunity to respond before being effectively debarred government-wide. Under the Mathews v. Eldridge balancing test, the court found:

  • Protectible interest: Debarring a company from government contracts implicates both liberty interests (the right to pursue a profession) and reputational interests (being labeled a national security adversary). The court applied the D.C. Circuit's "reputation plus" test, finding that the reputational harm of being branded an adversary and potential saboteur, combined with the tangible loss of contracting eligibility, satisfied the standard.

  • Risk of erroneous deprivation: This risk was high. Anthropic demonstrated that DoW's stated factual basis for the designation contained "core misunderstandings about how Anthropic's technology works" — including the false premise that Anthropic could unilaterally access, modify, or shut down Claude after deployment on government systems. Government counsel at oral argument admitted he was unaware of DoW having any knowledge that Anthropic had such capabilities.

  • Government interest in bypassing process: The government failed to show any exigency. The usage restrictions had been in place for over a year, and the phase-out of Anthropic's technology was scheduled over six months. The court noted a telling contradiction: three days before designating Anthropic a supply chain threat, Secretary Hegseth had alternatively threatened to invoke the Defense Production Act to compel Anthropic to provide services — meaning the company was simultaneously an "unacceptable national security threat" and essential to national security.

Administrative Procedure Act

The court found the supply chain designation likely violated the APA on multiple grounds.

Contrary to law — statutory mismatch. Section 3252 defines "supply chain risk" as the risk that "an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert" a national security system. The legislative history confirms the statute targets covert sabotage — counterfeit code, malicious components — by hostile foreign actors. The court found that publicly announcing contractual usage restrictions during negotiations bears no relation to the conduct Congress intended to address. The court stated it was "deeply troubling" that the government took the position that any vendor who "pushes back" on or "questions" DoW could be designated its "adversary."

Procedural failures. Section 3252 requires the Secretary to (1) determine in writing that "less intrusive measures are not reasonably available" and (2) report to Congress with "a discussion of less intrusive measures that were considered and why they were not reasonably available." The court found that while Secretary Hegseth's documents repeated the conclusory statement that less intrusive measures were unavailable, nothing in the administrative record discussed what measures were considered or why they were insufficient. The six identical congressional notices contained no such discussion either, as the government conceded at oral argument.

Additionally, DoW regulations require a risk assessment by the Under Secretary of Defense for Intelligence. The actual risk assessment came from Under Secretary Emil Michael — who led the contract negotiations with Anthropic — not from the intelligence official the regulations specify.

Arbitrary and capricious — pretextual justification. The court found the government's proffered reasons appeared pretextual. The timing was damning: all supporting documents were dated March 2–3, 2026, days after Secretary Hegseth publicly ordered the designation. Moreover, on March 3 — the same day the designation was finalized — Under Secretary Michael exchanged emails with Amodei reviewing draft usage terms, writing: "After reviewing with our attorneys and seeing your last draft (thanks for being fast), I think we are very close here." The court found it "exceedingly difficult" to square this correspondence with Michael's contemporaneous characterization of Anthropic as a "hostile" company presenting an "unacceptable national security threat."

The Injunction and What Comes Next

The court enjoined the government from enforcing the Presidential Directive, the Hegseth Directive, and the Supply Chain Designation against Anthropic across 17 named federal agencies and DoW. The injunction does not require the government to use Claude — only to refrain from implementing the ban while the case proceeds.

The court granted the government a seven-day administrative stay to seek an emergency appeal, which the government indicated it intends to pursue. The government was ordered to file a compliance report by April 6 detailing how it has implemented the injunction.

The court set a nominal bond of $100, finding no evidence of harm to the government from the injunction since the relief does not compel the government to purchase Anthropic's products.

Why This Case Matters

This opinion draws a clear line between choosing vendors and punishing them.

Government procurement is discretionary — but it is not weaponizable. DoW was free to stop using Claude. It could have chosen a competing AI vendor. What it could not do was use procurement authorities designed for foreign adversaries to blacklist a domestic company government-wide as retaliation for speech. The court noted that one amicus brief described the challenged actions as "attempted corporate murder."

Public advocacy by government contractors is protected. The opinion affirms that a company does not forfeit its First Amendment rights by entering the government contracting space. Contractors may publicly disagree with government policy — even during active contract negotiations — without risking punitive retaliation.

Section 3252 has limits. The court's interpretation of 10 U.S.C. § 3252 establishes that the supply chain risk authority is limited to its statutory purpose: addressing covert sabotage by hostile actors. It cannot be repurposed as a tool for punishing vendors who take unwelcome public positions.

Due process applies to domestic companies. While Section 3252 may often be applied without pre-deprivation process (because its typical targets are foreign actors), the court held that applying it to a domestic company with significant government business requires meaningful notice and an opportunity to respond.

Chilling effects on AI safety discourse. Multiple amicus briefs — from AI researchers, military leaders, small developers, and investors — warned that the challenged actions threatened to silence the professionals best positioned to identify risks in AI systems. The opinion acknowledges this broader harm to the public interest.

Read the full opinion (PDF)

This post is not legal advice and is offered for informational purposes only. Consult an attorney for advice appropriate to your specific circumstances.

Need Legal Assistance in Puerto Rico?

Riefkohl Law provides experienced legal counsel across a wide range of practice areas.

Call (787) 236-1657 or schedule a consultation to discuss your legal needs.
