The Pentagon Threatened to Blacklist Anthropic and the AI Industry Is Watching


Claude was used in the Venezuela raid. Anthropic pushed back. Now the Defense Department is reviewing the entire relationship.

By LDS Team

February 19, 2026

On January 3, 2026, the United States military launched a strike on Venezuela. Delta Force operators stormed Nicolas Maduro's compound in Caracas. Air defenses were suppressed across northern Venezuela. Maduro and his wife were captured and transported to New York City to face narcoterrorism charges.

Buried in the operation's classified infrastructure was an AI model: Anthropic's Claude.

Six weeks later, the Wall Street Journal reported that Claude had been used during the raid -- accessed through Anthropic's partnership with Palantir Technologies on Amazon's Top Secret Cloud. What happened next turned a classified military operation into the most consequential confrontation between an AI company and the US government in history.

How Claude Ended Up in a Military Raid

Claude did not arrive on the battlefield by accident. Anthropic spent the better part of two years building its way into the Pentagon's classified systems.

| Date | Milestone |
| --- | --- |
| Nov 2024 | Anthropic partners with Palantir and AWS to provide Claude to US intelligence and defense agencies |
| Jun 2025 | Anthropic launches Claude Gov for classified environments on AWS |
| Jul 2025 | Pentagon awards Anthropic a 200 million USD contract for military AI, alongside Google, OpenAI, and xAI |
| Jan 2026 | Claude used during Operation Absolute Resolve (Venezuela raid) via Palantir's platform |

Anthropic was not dragged into defense work. It volunteered. The company was the first frontier AI lab to put models on classified networks and the first to provide customized models for national security customers. CEO Dario Amodei has repeatedly argued that democracies must maintain AI leadership to prevent authoritarian regimes from gaining a technological edge.

But Anthropic drew two lines. Two things Claude would not do, no matter who was asking.

Anthropic's Two Red Lines

Anthropic's position is not "no military use." It is "military use with guardrails." Specifically, the company has insisted on two hard limits:

1. No fully autonomous weapons. Claude cannot be used in weapons systems that fire without a human in the loop. A human must make the final decision to use lethal force.

2. No mass surveillance of Americans. Claude cannot be used for bulk domestic surveillance -- tracking citizens' locations, communications, or emotional states without consent.

These are not new positions. They are embedded in Anthropic's acceptable use policy and in Dario Amodei's public writing. In his January 26, 2026 essay "The Adolescence of Technology," Amodei wrote: "We should use AI for national defense in all ways except those which would make us more like our autocratic adversaries."

The Pentagon sees it differently.

The Rupture

The timeline of how a routine defense partnership turned into a public feud:

Jan 3, 2026
Operation Absolute Resolve
US forces capture Maduro in Venezuela. Claude is used during the operation via Palantir's AI platform on Amazon's Top Secret Cloud.
Early Jan 2026
The Palantir Check-In
During a routine meeting, an Anthropic official discusses the Venezuela operation with a Palantir executive. According to Semafor, the executive gathered that Anthropic disapproved and reported it to the Pentagon. Anthropic calls this account "false."
Jan 12, 2026
Hegseth Takes a Shot
Defense Secretary Pete Hegseth launches genai.mil with Google Gemini and xAI's Grok, stating: "We will not employ AI models that won't allow you to fight wars." Semafor confirms he was referring to Anthropic.
Feb 12, 2026
Anthropic Raises 30 Billion USD
Anthropic closes a Series G at 380 billion USD valuation -- the largest AI fundraise in history. Days before the crisis goes public.
Feb 13, 2026
WSJ Breaks the Story
The Wall Street Journal reports that Claude was used during the Venezuela raid. The story spreads across every major outlet within hours.
Feb 16, 2026
The Threat
Axios reports Hegseth is "close" to designating Anthropic a "supply chain risk." Pentagon confirms the relationship is "under review." A senior official warns: "We are going to make sure they pay a price."
Feb 18, 2026
Pentagon Goes Public
Pentagon CTO Emil Michael publicly rejects Anthropic's restrictions: "What we're not going to do is let any one company dictate a new set of policies above and beyond what Congress has passed."
Feb 19, 2026
Amodei Holds the Line
Speaking at the AI Impact Summit in New Delhi, Dario Amodei reaffirms: "These red lines are fairly important for us and for democracy."

The Nuclear Option: Supply Chain Risk Designation

The Pentagon's most aggressive threat is designating Anthropic a "supply chain risk." This label is normally reserved for foreign adversaries and hostile actors. Using it against a domestic American company would be unprecedented.

Here is what it would mean in practice:

| If Anthropic is designated a supply chain risk | Impact |
| --- | --- |
| Every Pentagon contractor must certify they do not use Claude | Thousands of companies affected |
| Companies doing business with the DoD would need to audit and remove Claude | Costly, time-consuming compliance |
| Private sector customers may preemptively drop Claude to protect government contracts | Broader commercial fallout |
| Anthropic's planned IPO could face significant headwinds | Valuation risk |

Anthropic claims 8 of the 10 largest US companies use Claude. A supply chain risk designation would not just affect the 200 million USD Pentagon contract -- it could ripple across the entire enterprise software market.

Worth noting: A "supply chain risk" designation has never been used against a US company. The legal and political precedent of applying it to one of America's most valuable AI startups -- valued at 380 billion USD -- would likely trigger immediate legal challenges and Congressional scrutiny.

The Pentagon's Argument

The Pentagon's position is straightforward: AI companies should allow their models to be used for "all lawful purposes." If Congress has not banned an activity, the Pentagon argues, a private company should not restrict it.

Pentagon CTO Emil Michael framed it as a question of democratic authority: "Congress writes bills, the president signs them, agencies write regulations, and people comply. What we're not going to do is let any one company dictate a new set of policies above and beyond what Congress has passed. That is not democratic."

He compared the situation to Google's 2018 withdrawal from Project Maven, the Pentagon's military drone AI program, after employee protests. Google eventually re-engaged with defense contracts. He expressed hope Anthropic would do the same: "We want all our American champion AI companies to succeed. I want Anthropic, xAI, OpenAI, Google to succeed."

The Pentagon's practical concern is dependency. Claude is currently the only frontier AI model integrated into classified military systems. If Claude becomes unavailable during an urgent operation because of a usage policy dispute, the consequences could be severe. As Michael put it: "You can't have an AI company sell AI to the Department of War, and don't let it do Department of War things."

Anthropic's Argument

Anthropic's counterargument is equally direct: some uses of AI are dangerous enough to warrant restrictions regardless of their current legal status, because AI capabilities may outpace existing laws.

The company is not opposing military use of Claude. It is drawing a line around two specific scenarios -- autonomous weapons and mass surveillance -- that it believes pose existential risks to democratic governance. Anthropic's acceptable use policy explicitly prohibits using Claude to "track a person's physical location, emotional state, or communication without their consent."

An Anthropic spokesperson told The Hill: "Anthropic is committed to using frontier AI in support of US national security. That's why we were the first frontier AI company to put our models on classified networks." The company added that it is "having productive conversations, in good faith" with the Pentagon on the matter.

How the Rest of the AI Industry Responded

The Pentagon awarded identical 200 million USD contracts to four companies: Anthropic, Google, OpenAI, and xAI. Only Anthropic is pushing back on the terms.

| Company | Pentagon Contract | Stance on "All Lawful Uses" |
| --- | --- | --- |
| Anthropic | 200 million USD (Jul 2025) | Insists on guardrails for autonomous weapons and mass surveillance |
| OpenAI | 200 million USD (Jul 2025) | Removed explicit "military and warfare" ban from usage policy in January 2024. Partnered with defense firm Anduril in December 2024. |
| Google | 200 million USD (Jul 2025) | Withdrew from Project Maven in 2018 after employee protests. Returned to Pentagon contracts by 2022. |
| xAI | 200 million USD (Jul 2025) | Grok available on genai.mil. No public restrictions reported. |

According to CNBC, one unnamed company has agreed to "all lawful uses" across all systems including classified networks. Two others have shown "some flexibility." Only Anthropic is holding firm on its two red lines.

Palantir -- the intermediary that triggered the dispute -- declined to comment. The defense contractor's general position is that it does not try to control how the US government uses its technology.

The Project Maven Parallel

Pentagon officials have explicitly compared this moment to Google's 2018 Project Maven crisis. The parallels are real, but the differences matter more.

In 2018, thousands of Google employees signed a petition demanding the company withdraw from a Pentagon AI program that analyzed drone footage. Google complied and walked away. By 2022, Google was back -- sharing a 9 billion USD Pentagon cloud contract with Amazon, Microsoft, and Oracle.

Anthropic's situation is fundamentally different. There is no employee revolt. Anthropic is not walking away from defense work. It is actively seeking to serve the military -- on its own terms. The dispute is not about whether AI should be used in warfare. It is about the specific conditions under which it should be used.

That distinction is exactly what makes this confrontation more significant than Project Maven. Google's withdrawal was a binary choice: in or out. Anthropic is trying to establish a third option: in, but with guardrails. Whether the Pentagon accepts that middle ground will set a precedent for the entire AI industry.

What Is Actually at Stake

This is not just a contract dispute. It is a test case for who controls the guardrails on military AI.

The Pentagon's position implies that existing law is sufficient to govern military AI use. If it is legal, it should be allowed. Companies should be vendors, not policymakers.

Anthropic's position implies that AI capabilities are advancing faster than law. Some uses -- autonomous weapons that kill without human approval, mass surveillance of citizens -- are dangerous enough that private companies have a responsibility to impose limits even when the law does not require them.

The stakes for each side:

If the Pentagon wins, every AI company gets the message: accept "all lawful uses" or risk losing government contracts and facing a supply chain risk designation that could hurt your commercial business. Safety guardrails become a competitive liability.

If Anthropic wins, AI companies establish a precedent for imposing ethical limits on government customers -- even the most powerful one. But it also means private companies, not elected officials, are setting the boundaries of military AI use.

Neither outcome is clean. The AI industry is watching because every company will eventually face some version of this question.

The Bottom Line

Anthropic built Claude into the Pentagon's classified systems. It took the 200 million USD contract. It put AI in the hands of the world's most powerful military. And then it said: there are two things we will not do.

No autonomous weapons without a human in the loop. No mass surveillance of Americans.

The Pentagon's response has been swift and aggressive -- a public review, threats of blacklisting, and the unprecedented possibility of designating an American AI company a supply chain risk. Anthropic has not backed down. Neither has the Pentagon.

As of today, the situation is unresolved. Anthropic says the conversations are "productive" and "in good faith." The Pentagon says the relationship is "under review." Behind the scenes, both sides are still negotiating.

What happens next will not just determine Anthropic's future in defense. It will define whether AI companies can set boundaries on how governments use their technology -- or whether, in the end, the customer always gets what they want.

Sources