Anthropic, the AI research firm behind the Claude models, has accused three Chinese AI companies of mounting a large-scale, illicit distillation attack. The firm's blog post, published on Sunday, details how DeepSeek, Moonshot, and MiniMax siphoned capabilities from Claude, generating over 16 million exchanges through approximately 24,000 fraudulent accounts.
The Anatomy of the Attack
Distillation is a legitimate technique in AI: a smaller model is trained on the outputs of a larger, more capable one. Anthropic, however, describes a malicious use of the method, in which competitors acquire advanced capabilities for a fraction of the time and cost required to develop them independently. The attacks focused on scraping Claude's outputs for advanced tasks such as agentic reasoning, coding, data analysis, and even computer vision.
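To make the underlying technique concrete, here is a minimal sketch of classic (benign) distillation, in which a student model is trained to match a teacher's softened output distribution. This is an illustrative toy example of the general method, not Anthropic's detection approach or the attackers' pipeline; note that API-based distillation of a closed model typically trains on generated text rather than raw logits, which are not exposed. All function names here are hypothetical.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature softens the distribution."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimizing this trains the student to mimic the teacher's outputs,
    including the relative probabilities it assigns to wrong answers.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return float(np.mean(kl) * temperature ** 2)

# A confident teacher versus an untrained, near-uniform student:
teacher = np.array([[6.0, 1.0, 0.5]])
student = np.array([[0.1, 0.0, -0.1]])
print(distillation_loss(student, teacher))  # positive; shrinks as the student matches the teacher
```

The loss is zero only when the two distributions coincide, which is why matching a strong teacher at scale can transfer much of its behavior without access to its weights or training data.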
Geopolitical Implications
Beyond the intellectual property concerns, Anthropic warns of the broader geopolitical risks associated with these attacks. The firm argues that foreign labs distilling American models could integrate these capabilities into military, intelligence, and surveillance systems, potentially enabling authoritarian governments to deploy advanced AI for cyber operations, disinformation campaigns, and mass surveillance.
Identifying the Culprits
Anthropic identified the trio of companies through a combination of IP address correlation, request metadata, infrastructure indicators, and corroboration from industry partners. DeepSeek, Moonshot, and MiniMax, all based in China, have multi-billion-dollar valuations, with DeepSeek being the most internationally recognized of the three.
Protecting Against Future Attacks
In response to these attacks, Anthropic plans to enhance its detection systems, share threat intelligence, and tighten access controls. The firm also calls for closer collaboration between domestic industry and lawmakers to combat these threats. "No company can solve this alone. Distillation attacks at this scale require a coordinated response across the AI industry, cloud providers, and policymakers," Anthropic stated.
Looking Ahead
The incident underscores the growing tension in the global AI landscape, where intellectual property and national security concerns intersect. As AI continues to evolve, the need for robust protections and international cooperation becomes increasingly evident. Anthropic's stance sets a precedent for how companies can detect and mitigate distillation attacks and protect the integrity of their models.
