On February 23, 2026, Anthropic published a report that reads less like a corporate blog post and more like an intelligence briefing.
The company disclosed that three Chinese AI laboratories -- DeepSeek, Moonshot AI, and MiniMax -- had been running industrial-scale campaigns to extract Claude's capabilities and use them to train their own models. The operation involved approximately 24,000 fraudulent accounts, over 16 million exchanges with Claude, and sprawling proxy networks designed to disguise the traffic as ordinary usage.
Anthropic's language was blunt: "These labs generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions."
For national security reasons, Anthropic does not currently offer commercial access to Claude in China, or to overseas subsidiaries of Chinese companies. These labs were not supposed to have access at all.
What They Stole
The three companies were not just querying Claude at scale. They were systematically extracting specific capabilities -- reasoning patterns, coding techniques, and safety-relevant behaviors -- and feeding the outputs into their own training pipelines. This process is called model distillation: using a powerful model's outputs as training data for a cheaper, smaller model.
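In outline, the pipeline is simple. The sketch below is a generic illustration of the technique, not code from any of the labs involved; `query_teacher` is a hypothetical placeholder for calls to a frontier model's API, and the file format is an arbitrary choice.

```python
import json

# Hypothetical stand-in for a frontier-model API call. In the campaigns
# Anthropic describes, this step was spread across thousands of fraudulent
# accounts and proxy networks.
def query_teacher(prompt: str) -> str:
    raise NotImplementedError("placeholder for a teacher-model API call")

def build_distillation_set(prompts: list[str], out_path: str = "distill.jsonl") -> None:
    """Collect teacher outputs as prompt/completion pairs -- the raw
    training data for fine-tuning a smaller student model."""
    with open(out_path, "w") as f:
        for prompt in prompts:
            record = {"prompt": prompt, "completion": query_teacher(prompt)}
            f.write(json.dumps(record) + "\n")
```

The student is then fine-tuned on those pairs with ordinary supervised learning. Each example costs one API call rather than a share of the teacher's training run, which is the economics behind Ali Ghodsi's remark, quoted later, that the technique is "so extremely powerful and so extremely cheap."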
The breakdown by company is striking:
| Company | Exchanges | % of Total | Primary Targets |
|---|---|---|---|
| MiniMax | 13,000,000+ | ~79% | Agentic coding, tool use, and orchestration |
| Moonshot AI | 3,400,000+ | ~21% | Agentic reasoning and tool use, coding and data analysis, computer vision, computer-use agent development |
| DeepSeek | 150,000+ | <1% | Reasoning, rubric-based grading tasks functioning as reward models, censorship-safe alternatives to sensitive queries |
| Total | 16,550,000+ | 100% | |
MiniMax, a Shanghai-based company that IPO'd on the Hong Kong Stock Exchange on January 9, 2026 (its stock surged 109% on the first day), accounted for more than three-quarters of all the stolen interactions. Moonshot AI, the Beijing-based creator of the Kimi model series and backed by Alibaba, generated 3.4 million exchanges. DeepSeek -- the company behind January 2025's "Sputnik moment," when its R1-powered app briefly overtook ChatGPT as the most downloaded app in the US -- contributed the smallest volume but deployed the most technically sophisticated techniques.
How They Did It
The operation relied on what Anthropic called "hydra cluster" architectures -- sprawling networks of fake accounts distributed across Anthropic's API and third-party cloud platforms. The name is apt. Cut off one account, and another takes its place.
"In one case, a single proxy network managed more than 20,000 fraudulent accounts simultaneously, mixing distillation traffic with unrelated customer requests to make detection harder," Anthropic wrote.
The proxy services acted as intermediaries, reselling access to Claude and other frontier AI models at scale. They deliberately blended distillation-focused queries with mundane, unrelated requests to flatten any anomalous signals in Anthropic's monitoring systems.
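The arithmetic of blending is straightforward. Assuming, for illustration, a detector that flags accounts by the fraction of distillation-like prompts -- a simplification, since Anthropic has not published how its monitoring works -- dilution does the job:

```python
# Toy illustration of traffic blending; the detector model and the numbers
# are invented for this example, not drawn from Anthropic's report.
def suspicious_fraction(distill_requests: int, benign_requests: int) -> float:
    return distill_requests / (distill_requests + benign_requests)

print(suspicious_fraction(800, 200))   # 0.8 -- undiluted, easy to flag
print(suspicious_fraction(800, 7200))  # 0.1 -- blended 1:9, looks ordinary
```

Which helps explain why the detection methods Anthropic describes later lean on correlating accounts with one another, not just on per-account anomalies.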
The accounts were created through the pathways most commonly exploited for fraud: educational accounts, security research programs, and startup organizations.
DeepSeek's Censorship Trick
DeepSeek's campaign was the smallest by volume but the most revealing in purpose.
Beyond extracting reasoning capabilities and chain-of-thought data, DeepSeek used Claude for something no Western AI company would openly admit to needing: building a censorship engine.
Anthropic reported that it "also observed tasks in which Claude was used to generate censorship-safe alternatives to politically sensitive queries like questions about dissidents, party leaders, or authoritarianism, likely in order to train DeepSeek's own models to steer conversations away from censored topics."
In other words, DeepSeek was using an American AI model to teach its own model how to suppress politically sensitive information for the Chinese government.
The technical methodology was equally striking. DeepSeek's prompts asked Claude to "imagine and articulate the internal reasoning behind a completed response and write it out step by step." This effectively turned Claude's answers into chain-of-thought training data -- the most valuable ingredient for building reasoning models. Anthropic said it was able to "trace these accounts to specific researchers at the lab" through request metadata.
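Anthropic did not publish the prompts themselves, so the template below is a hypothetical reconstruction of the pattern it describes, included only to show why the outputs become chain-of-thought training data:

```python
# Hypothetical reconstruction of the prompt pattern described above;
# the wording is illustrative, not DeepSeek's actual prompt.
COT_PROMPT = (
    "Below is a question and a completed response.\n\n"
    "Question: {question}\n"
    "Response: {answer}\n\n"
    "Imagine and articulate the internal reasoning behind this response, "
    "and write it out step by step."
)

def make_training_record(question: str, answer: str, reasoning: str) -> dict:
    """Bundle (question, elicited reasoning, final answer) into one
    supervised example for training a reasoning model."""
    return {"prompt": question, "reasoning": reasoning, "final": answer}
```

The elicited reasoning is not Claude's actual hidden computation -- the model is writing a plausible rationale on request -- but a plausible step-by-step rationale is precisely the ingredient the article describes as most valuable for building reasoning models.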
DeepSeek did not respond to requests for comment.
MiniMax: The 24-Hour Pivot
MiniMax's campaign was the largest and most aggressive. It was also the one Anthropic caught in real time.
When Anthropic released a new model during MiniMax's active campaign, MiniMax "pivoted within 24 hours, redirecting nearly half their traffic to capture capabilities from our latest system." MiniMax was not just stealing from Claude. It was tracking Anthropic's release schedule and immediately targeting each new model as it launched.
Anthropic said it detected MiniMax's campaign "before MiniMax released the model it was training," giving Anthropic "unprecedented visibility into the life cycle of distillation attacks, from data generation through to model launch."
MiniMax did not respond to requests for comment.
Moonshot AI: Hundreds of Accounts, Multiple Pathways
Moonshot AI, the Beijing-based lab behind the Kimi model series, targeted Claude's agentic reasoning, tool use, coding, computer-use agent development, and computer vision capabilities.
The company "employed hundreds of fraudulent accounts spanning multiple access pathways." In a later phase, Moonshot adopted a more targeted approach, "attempting to extract and reconstruct Claude's reasoning traces" -- the internal step-by-step logic that makes Claude's responses coherent.
Moonshot AI did not respond to requests for comment.
How Anthropic Caught Them
Anthropic described a multi-layered detection system:
- IP address correlation linking accounts that appeared independent (a simplified sketch of this clustering step follows the list)
- Request metadata showing shared payment methods, synchronized timing, and identical prompt patterns
- Infrastructure indicators, plus classifiers and behavioral fingerprinting systems that identified distillation patterns in API traffic -- the "volume, structure, and focus of the prompts" differed from normal usage
- Corroboration from industry partners who "observed the same actors and behaviors on their platforms"
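The first two signals amount to a graph problem: accounts that share infrastructure collapse into clusters. The sketch below is a generic illustration of that class of correlation -- union-find over shared indicators -- and assumes nothing about Anthropic's actual system; all names and indicators are invented.

```python
from collections import defaultdict
from itertools import combinations

def cluster_accounts(accounts: dict[str, set[str]]) -> list[set[str]]:
    """Group accounts that share any infrastructure indicator (IP address,
    payment-method hash, prompt-template fingerprint) into clusters, so a
    'hydra' of nominally independent accounts collapses into one actor."""
    parent = {acct: acct for acct in accounts}

    def find(x: str) -> str:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a: str, b: str) -> None:
        parent[find(a)] = find(b)

    # Link every pair of accounts that shares an indicator.
    by_indicator = defaultdict(list)
    for acct, indicators in accounts.items():
        for ind in indicators:
            by_indicator[ind].append(acct)
    for accts in by_indicator.values():
        for a, b in combinations(accts, 2):
            union(a, b)

    clusters = defaultdict(set)
    for acct in accounts:
        clusters[find(acct)].add(acct)
    return list(clusters.values())

# Invented example: two accounts sharing an IP merge into one cluster.
accounts = {
    "acct_a": {"ip:203.0.113.7", "card:f00d"},
    "acct_b": {"ip:203.0.113.7"},
    "acct_c": {"card:beef"},
}
print(cluster_accounts(accounts))  # clusters: {acct_a, acct_b} and {acct_c}
```

The point of clustering is that a ban can then target the actor behind the hydra rather than one of its heads.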
For DeepSeek specifically, the trail led directly to individual researchers at the lab. For MiniMax, the real-time detection meant Anthropic could watch the entire distillation lifecycle unfold -- from data extraction through model training to planned release.
Anthropic Was Not the Only Target
OpenAI raised the alarm first.
On February 12, 2026 -- eleven days before Anthropic's public report -- OpenAI sent a formal memo to the U.S. House Select Committee on the CCP accusing DeepSeek of distillation. The memo stated that OpenAI had observed "accounts associated with DeepSeek employees developing methods to circumvent OpenAI's access restrictions" and accessing models "through obfuscated third-party routers and other ways that mask their source."
OpenAI's warning was specific: "DeepSeek's next model (whatever its form) should be understood in the context of its ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs."
Google confirmed the pattern as well. Google Threat Intelligence Group (GTIG) and Google DeepMind disclosed that they had identified and disrupted model-extraction activity involving more than 100,000 prompts targeting Gemini's reasoning capabilities. Google's report notably framed the threat as global rather than China-specific, noting that the attacks came from researchers and private-sector companies around the world.
Three of America's largest AI companies. All targeted. All reporting the same pattern.
The National Security Argument
Anthropic framed the distillation attacks not just as a corporate threat but as a national security risk.
"Illicitly distilled models lack necessary safeguards, creating significant national security risks," the report stated. "If distilled models are open-sourced, this risk multiplies as these capabilities spread freely beyond any single government's control."
The argument: when a company like DeepSeek extracts Claude's reasoning capabilities and strips away the safety training, the resulting model can be used for offensive cyber operations, disinformation campaigns, and mass surveillance -- without the guardrails that Anthropic spent years building.
Anthropic tied the attacks directly to the US-China technology competition: "Distillation attacks undermine those controls by allowing foreign labs, including those subject to the control of the Chinese Communist Party, to close the competitive advantage that export controls are designed to preserve through other means."
Representative John Moolenaar (R-MI), chairman of the House Select Committee on the CCP, responded to OpenAI's earlier memo with a characteristically blunt assessment: "This is part of the CCP's playbook: steal, copy, and kill."
The Other Side of the Argument
Not everyone found Anthropic's report convincing on its own terms.
The China Academy, a content network whose stated mission is helping global audiences understand how China sees the world, argued the report was "tailored for an audience of one: Washington." Its analysis noted that DeepSeek -- the company that received top billing in every headline -- accounted for less than 1% of the total interactions, while MiniMax, which generated roughly four-fifths of the traffic, was barely mentioned in most media coverage. The framing, critics argued, was designed to maximize political impact amid the heated US-China AI rivalry rather than to accurately represent the data.
Legal analysts pointed out that the legal pathway for action is murky. AI-generated outputs are not protected by copyright under US law. The Chinese labs violated Anthropic's terms of service, but that makes this a breach-of-contract dispute, not intellectual property theft. As Benjamin Jensen, Director of the Futures Lab at CSIS, testified before Congress: "Is distillation IP theft? There is a difference of opinion this committee must address."
The hypocrisy question was raised by multiple outlets. In September 2025, Anthropic agreed to pay $1.5 billion to settle a class-action lawsuit after a court found the company had downloaded over 7 million pirated books from sources including Library Genesis and the Pirate Library Mirror to train Claude. Anthropic also faces separate litigation over unauthorized Reddit content scraping. Critics -- including Elon Musk, who called Anthropic "MisAnthropic" and "guilty of stealing training data at massive scale" -- argued that the company was objecting to having done to it what it had done to others.
Anthropic's defenders draw a distinction: the Chinese labs agreed to specific terms of service and then systematically circumvented them using 24,000 fake accounts and commercial proxy networks. No individual author ever signed a contract with Anthropic that Anthropic then violated using fake identities at industrial scale.
Ali Ghodsi, CEO of Databricks, offered perhaps the most pragmatic assessment: "This distillation technique is just so extremely powerful and so extremely cheap, and it's just available to anyone."
What Happens Next
Anthropic outlined a four-part response: improved detection classifiers; tighter access controls on commonly exploited account pathways; product-, API-, and model-level countermeasures to reduce the efficacy of outputs for illicit distillation; and intelligence sharing with other AI labs, cloud providers, and "relevant authorities."
The company did not specify which authorities it contacted or whether formal legal complaints were filed. As of March 1, 2026, none of the three Chinese companies have issued public statements. No lawsuits have been announced.
But Anthropic's report carried an unmistakable urgency: "These campaigns are growing in intensity and sophistication. The window to act is narrow, and the threat extends beyond any single company or region." Elsewhere in the report, the company added: "But no company can solve this alone."
The security researcher Gal Elbaz, co-founder and CTO of Oligo Security, put the risk more starkly: "The scary part is, you can take all of the power and unleash it, because you don't have anyone that actually enforces those guardrails on the other side."
The Bottom Line
Three Chinese AI laboratories created 24,000 fake accounts and ran 16 million queries against Claude to extract its capabilities -- its reasoning, its coding ability, its chain-of-thought logic. One company pivoted within 24 hours to start mining a new model the moment Anthropic released it. Another used Claude to generate censorship-safe responses to questions about political dissidents, building a tool for information suppression using an American AI. A single proxy network managed 20,000 accounts simultaneously, mixing stolen queries with legitimate traffic to avoid detection.
Anthropic does not sell Claude in China. These labs circumvented regional access restrictions, violated terms of service, and operated at a scale that required commercial proxy infrastructure and coordinated teams. OpenAI and Google reported the same pattern against their own models. All three companies. All the same playbook.
The distillation technique itself is not illegal under current US law. There is no statute that specifically criminalizes using one AI model's outputs to train another. The strongest legal claim is a terms-of-service violation -- a contract dispute, not espionage. Congress is still debating whether to change that.
In the meantime, the capabilities have already been extracted. The models trained on that data are being built or have already launched. And the safety guardrails that Anthropic spent years developing are not part of the package.
As Anthropic wrote: "The window to act is narrow." Whether anyone acts in time is a different question.
Sources
- Anthropic: Detecting and Preventing Distillation Attacks (Official Report) (Feb 23, 2026)
- TechCrunch: Anthropic Accuses Chinese AI Labs of Mining Claude (Feb 23, 2026)
- CNBC: Anthropic Joins OpenAI in Flagging Distillation Campaigns by Chinese Firms (Feb 24, 2026)
- Fortune: Anthropic Claims 3 Chinese Companies Ripped It Off (Feb 24, 2026)
- CNN: US AI Giant Anthropic Alleges China Rivals Are Cheating (Feb 24, 2026)
- Bloomberg: Anthropic Accuses DeepSeek, MiniMax, Moonshot of Illicit AI Model Distillation (Feb 23, 2026)
- The Hacker News: Anthropic Says Chinese AI Firms Used 16M Queries to Mine Claude (Feb 24, 2026)
- CyberScoop: Anthropic Accuses Chinese Labs of Distillation Cyber Risk (Feb 24, 2026)
- VentureBeat: Anthropic Says DeepSeek, Moonshot, and MiniMax Used 24,000 Fake Accounts (Feb 24, 2026)
- The Register: Anthropic Misanthropic Toward Chinese AI Labs (Feb 24, 2026)
- Google Cloud Blog: GTIG AI Threat Tracker -- Distillation, Experimentation, and AI for Adversarial Use (Feb 2026)
- The China Academy: Anthropic's China Allegations -- Tailored for an Audience of One (Feb 2026)
- CSIS: Protecting Our Edge -- Trade Secrets and the Global AI Arms Race (May 2025)