Ten days ago, Anthropic was the Pentagon's AI partner of choice. Claude was the only large language model cleared for use on classified military networks. The company held a $200 million defense contract signed last July and had forward-deployed engineers embedded across multiple military programs.
On Monday, Anthropic sued the government that built it up.
The company filed two federal lawsuits against the Trump administration, one in U.S. District Court for the Northern District of California and another in the D.C. Circuit Court of Appeals, alleging that the Pentagon's decision to label it a "supply chain risk" was unconstitutional retaliation for Anthropic's refusal to let the military use Claude without ethical guardrails. The 48-page California complaint calls the government's actions "unprecedented and unlawful" and warns they are "harming Anthropic irreparably."
For context: this is the third article in our series on the Anthropic-Pentagon standoff. It picks up where our coverage of the February 27 deadline left off; our first piece covered the initial rupture in mid-February.
## From Deadline to Blacklist
When the Pentagon's 5:01 PM deadline expired on February 27, the question was whether Defense Secretary Pete Hegseth would actually follow through on his threats. He did.
Within hours of the deadline passing, President Trump posted on Truth Social: "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!" He gave agencies like the Pentagon, which rely heavily on Claude for classified operations, six months to complete the phase-out.
The same day, Hegseth announced Anthropic would be designated a supply chain risk, a classification normally reserved for foreign adversaries like Huawei that pose potential sabotage threats to U.S. military systems. No American company had ever received the label.
Then came OpenAI. Hours after the administration cut ties with Anthropic, OpenAI announced its own $200 million Pentagon deal to replace Claude on classified networks. The timing was not subtle. And neither was Dario Amodei's reaction.
## The Leaked Memo That Made Everything Worse
On the same Friday the deadline expired, Amodei sent a 1,600-word memo to Anthropic employees. The Information obtained the full document.
In it, Amodei called OpenAI's Pentagon deal "maybe 20% real and 80% safety theater." He referred to OpenAI's public messaging as "straight up lies" and accused Sam Altman of "presenting himself as a peacemaker and dealmaker." He called OpenAI staff "gullible" and their public supporters "Twitter morons."
He also pointed to what he saw as the real reason the administration targeted Anthropic: "The reasons the Trump administration didn't like Anthropic was that we haven't donated to Trump (while OpenAI/Greg have donated a lot)." OpenAI president Greg Brockman and his wife gave $25 million to the MAGA Inc super PAC. Altman personally donated $1 million to Trump's inauguration.
The memo leaked within days.
On March 6, Amodei published an apology on Anthropic's website: "It was a difficult day for the company, and I apologize for the tone of the post. It does not reflect my careful or considered views." He called the memo an "out-of-date assessment" written in the heat of a chaotic afternoon.
But the damage was done. The memo gave the administration more ammunition, and it gave critics a reason to question whether Anthropic's principled stand was really about ethics or about politics.
The "Whoa Moment"
While Amodei was writing his memo, the Pentagon was having its own reckoning.
Emil Michael, the Under Secretary of Defense for Research and Engineering, told Fortune about the moment defense leadership realized how dependent they had become on Anthropic's technology. It started when Anthropic asked whether Claude had been used during the January raid on Venezuela that captured Nicolas Maduro.
Michael's reaction: "I'm like, holy shit, what if this software went down...and we left our people at risk?"
He described it as "a whoa moment for the whole leadership at the Pentagon." The concern was not hypothetical. Claude was embedded in real-time military operations. Other AI providers had not deployed engineers to classified environments at the same scale. Losing Anthropic meant losing capability that could not be immediately replaced.
Michael told Bloomberg the Pentagon was "moving on" and that the dispute would not be resolved in court. He initiated parallel deals with OpenAI and Elon Musk's xAI to build redundancy, telling reporters: "I need redundancy."
But moving on proved harder than expected. The supply chain risk designation required defense contractors to certify they were not using Claude. Lockheed Martin announced it would "follow the President's and the Department of War's direction" and look to other LLM providers. Boeing said it had no active Anthropic contracts. Ten defense-focused portfolio companies at one venture capital firm began actively replacing Claude within days of the announcement.
## The Formal Designation
On March 4, Anthropic received the official letter. The Pentagon confirmed the supply chain risk designation was effective immediately.
The scope was narrower than the initial rhetoric suggested. It applied only to Claude's use "as a direct part of" Department of Defense contracts, not to all uses by companies that happen to do business with the Pentagon. Amazon, Google, and Microsoft announced they would continue offering Claude to commercial clients, carving out defense-related work.
But for Anthropic, even the narrower scope carried enormous financial consequences.
CFO Krishna Rao, in a declaration filed with the lawsuit, warned that "across Anthropic's entire business, the government's actions could reduce Anthropic's 2026 revenue by multiple billions of dollars." The company projected $150 million in immediate annual recurring revenue losses from existing and expected Pentagon contracts. Rao added that Anthropic could lose 50% to 100% of its revenue from defense contractors and companies dependent on the Defense Department. If the designation stands, he said, the damage would be "almost impossible to reverse."
For a company that recently raised $30 billion at a $380 billion valuation and is projecting $18 billion in 2026 revenue, the direct Pentagon losses are survivable. The reputational contamination is the real threat. The words "supply chain risk" carry a specific connotation in the defense world: foreign adversary. Being placed in the same category as Huawei tells every government contractor, ally nation, and enterprise buyer something the word "blacklist" does not fully convey.
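The arithmetic behind those warnings can be sketched. The back-of-envelope model below uses only the figures cited above ($18 billion projected 2026 revenue, $150 million in immediate Pentagon ARR losses, and Rao's 50% to 100% at-risk range); the share of Anthropic's revenue tied to defense-dependent customers is not disclosed in the filing, so the 15% used here is a purely hypothetical placeholder.

```python
# Illustrative model of the exposure figures in CFO Krishna Rao's declaration.
# The revenue and ARR inputs come from the article; the defense-dependent
# revenue share is a hypothetical assumption, not a disclosed figure.

projected_2026_revenue = 18_000_000_000   # company projection, per the article
immediate_arr_loss = 150_000_000          # lost Pentagon ARR, per the declaration

# Rao says 50% to 100% of revenue from defense contractors and companies
# dependent on the Defense Department is at risk. What fraction of total
# revenue that category represents is NOT disclosed; 15% is illustrative only.
assumed_defense_dependent_share = 0.15    # hypothetical placeholder

exposed_revenue = projected_2026_revenue * assumed_defense_dependent_share
low_estimate = immediate_arr_loss + 0.5 * exposed_revenue
high_estimate = immediate_arr_loss + 1.0 * exposed_revenue

print(f"Low-end exposure:  ${low_estimate / 1e9:.2f}B")
print(f"High-end exposure: ${high_estimate / 1e9:.2f}B")
```

Under that assumed 15% share, the range runs from roughly $1.5 billion to $2.85 billion, which is consistent with the declaration's "multiple billions of dollars" language at the high end.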
## The Five-Count Lawsuit
Anthropic's California complaint raises five distinct claims.
| Count | Legal Basis | Core Argument |
|---|---|---|
| 1 | Administrative Procedure Act | The Pentagon failed to follow congressionally mandated procedures when making the designation |
| 2 | First Amendment retaliation | The designation punishes Anthropic for expressing its views on AI safety policy |
| 3 | First Amendment retaliation | The government cannot "employ the power of the State to punish or suppress disfavored expression" |
| 4 | Fifth Amendment due process | Anthropic received no notice, no opportunity to respond, and no procedural safeguards |
| 5 | APA (unauthorized sanctions) | Federal agencies exceeded statutory authority in subsequent contract cancellations |
The company is asking the court to undo the supply chain risk designation, block its enforcement, and require federal agencies to withdraw directives ordering contractors to drop Anthropic.
The legal theory rests on a specific claim: the supply chain risk statute (10 USC 3252) exists to protect the government from foreign threats, not to punish American companies for disagreeing with the executive branch. Anthropic argues the Pentagon failed to obtain the required joint recommendations, failed to make written national security determinations, and failed to provide proper notice to congressional committees. In short, the Pentagon skipped the process.
The First Amendment argument is equally pointed. Anthropic's complaint states: "The federal government retaliated against a leading frontier AI developer for adhering to its protected viewpoint on a subject of great public significance." The company cites the Supreme Court's 2024 ruling in Moody v. NetChoice, which established that algorithmic content moderation can constitute protected speech.
Simultaneously, Anthropic filed a separate petition in the D.C. Circuit Court of Appeals requesting direct review of the Pentagon's determination under FASCSA, the Federal Acquisition Supply Chain Security Act, which provides a judicial review pathway that the narrower military statute does not.
## The Industry Lined Up Again
The amicus brief arrived on the court's docket within hours of the filing.
Thirty-seven engineers, researchers, and scientists from Google and OpenAI signed a legal brief supporting Anthropic's challenge. Among the signatories: Jeff Dean, Google's chief scientist and head of its AI research division. All signed in their personal capacities, not as representatives of their employers.
The brief's core argument: "The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry." If the Pentagon was "no longer satisfied with the agreed-upon terms of its contract with Anthropic," the brief continued, it could have "simply canceled the contract and purchased the services of another leading AI company." Using the supply chain risk label, they argued, was a punishment, not a procurement decision.
The support extended beyond the brief. OpenAI's head of robotics, Caitlin Kalinowski, resigned over her own company's Pentagon deal. In her public resignation statement, Kalinowski wrote that "surveillance of Americans without judicial oversight and lethal autonomy without human authorization" were lines that "deserved more deliberation than they got."
A coalition of 35 former military officials, industry advocates, and private sector leaders sent a letter to Congress on March 5 urging lawmakers to investigate. They asked Congress to use its oversight authority and consider legislation ensuring the supply chain risk designation is reserved for "protecting the United States from foreign threats, not disciplining American companies for disagreeing with the executive branch."
Senator Ron Wyden of Oregon pledged to "pull out all the stops" to fight the designation. Four senators from the Armed Services Committee and the top defense appropriators from both parties co-signed an appeal urging both sides to return to negotiations.
## The Other Side
The Pentagon's position has been consistent: this is about operational control, not speech.
Defense officials say a private company cannot insert itself into the military's chain of command by dictating which lawful applications are permitted and which are not. In a national security emergency, the military needs technology that works without a vendor's permission slip.
Emil Michael put it bluntly in an interview with Pirate Wires: the Pentagon is "not the FBI" and "not DHS." The concern about mass surveillance, he argued, is misplaced because the Department of Defense does not conduct domestic surveillance. That is a law enforcement function.
There is also the political dimension Amodei himself acknowledged. Anthropic has not donated to Trump. OpenAI's leadership has. Whether the administration's response would have been different if the money had flowed differently is a question nobody in Washington will answer on the record, but everyone in Silicon Valley is asking privately.
Critics of Anthropic's position note the company's own history with ethical gray areas. In September 2025, Anthropic paid $1.5 billion to settle a class-action lawsuit after a court found it had downloaded over 7 million pirated books to train Claude. The "principled stand" narrative, these critics argue, is selective.
And the market has its own view. While Anthropic is fighting the Pentagon in court, Claude hit number one on the App Store. Daily sign-ups have broken all-time records every day since the crisis began. Free active users are up 60% since January. Paid subscribers have more than doubled this year. The #CancelChatGPT movement, fueled by users angry at OpenAI's Pentagon deal, has driven a consumer surge that no marketing campaign could have produced.
Principle or positioning? Depending on whom you ask, it is both.
## The Bottom Line
An American AI company built the most capable model on the Pentagon's classified networks. It drew two ethical lines: no autonomous weapons without human oversight, and no mass surveillance of Americans. For those two conditions, the Trump administration branded it a national security threat, banned it from the federal government, and placed it in the same regulatory category as Huawei.
Anthropic's lawsuit will test whether the First Amendment protects a company's right to set ethical limits on its own technology, even when the customer is the United States military. The five-count complaint claims the Pentagon skipped required procedures, retaliated against protected speech, and denied due process. If the courts agree, it would establish that the supply chain risk statute cannot be weaponized against domestic companies for policy disagreements. If they do not, every AI company in America will know the price of saying no.
The case also exposes a deeper irony. The Pentagon designated Anthropic a supply chain risk because it refused to give the military unrestricted access to Claude. But as Emil Michael admitted, Claude was so embedded in classified operations that losing it created the exact supply chain vulnerability the designation is supposed to prevent. The Pentagon's attempt to punish dependence on a single AI vendor revealed just how dependent it had become.
Meanwhile, Claude is number one on the App Store. Thirty-seven researchers from Anthropic's competitors filed a legal brief in its defense. A senior OpenAI executive resigned on principle over her own company's Pentagon deal. And defense contractors are scrambling to replace a model the Pentagon itself chose because nothing else was as good.
As Caitlin Kalinowski wrote in her resignation letter: the lines around surveillance and lethal autonomy "deserved more deliberation than they got." Whether they get that deliberation now depends on a federal judge in San Francisco.
## Sources
- NPR: Anthropic Sues the Trump Administration Over 'Supply Chain Risk' Label (Mar 9, 2026)
- PBS News: Anthropic Sues in Federal Court to Reverse Trump Administration's Supply Chain Risk Designation (Mar 9, 2026)
- CNBC: Anthropic Sues Trump Administration Over Pentagon Blacklist (Mar 9, 2026)
- Lawfare: Anthropic Sues Defense Department Over Supply Chain Risk Designation (Mar 9, 2026)
- Fortune: Pentagon Official Recalls 'Whoa Moment' When Defense Leaders Realized How Indispensable Anthropic Is (Mar 7, 2026)
- TechCrunch: Anthropic CEO Dario Amodei Calls OpenAI's Messaging Around Military Deal 'Straight Up Lies' (Mar 4, 2026)
- Fortune: Anthropic CEO Apologizes for Leaked Memo, Confirms Pentagon Supply Chain Risk Designation (Mar 6, 2026)
- TechCrunch: OpenAI and Google Employees Rush to Anthropic's Defense in DOD Lawsuit (Mar 9, 2026)
- TechCrunch: OpenAI Robotics Lead Caitlin Kalinowski Quits in Response to Pentagon Deal (Mar 7, 2026)
- CNBC: Defense Tech Companies Are Dropping Claude After Pentagon's Anthropic Blacklist (Mar 4, 2026)
- Goodwin Law: Is Claude a Supply Chain Risk? What Federal Contractors Need to Know (Mar 2026)
- Reuters/Investing.com: Anthropic Executives Say Pentagon Blacklisting Could Hit Billions in Sales (Mar 9, 2026)
- Axios: Anthropic Got Blacklisted by the Pentagon. Then Claude Hit No. 1 in the App Store. (Mar 1, 2026)
- Fortune: Trump Orders U.S. Government to Stop Using Anthropic (Feb 27, 2026)
- Nextgov: Private Sector, Former Military Leaders Urge Congress to Intervene in Pentagon-Anthropic Dispute (Mar 5, 2026)
- LDS: The Pentagon Gave Anthropic a Friday Deadline. Anthropic Said No. (Feb 27, 2026)
- LDS: The Pentagon Threatened to Blacklist Anthropic and the AI Industry Is Watching (Feb 19, 2026)