Anthropic accuses DeepSeek, Moonshot, and MiniMax of using its language model Claude in “distillation” attacks, generating over 16 million exchanges and creating 24,000 fraudulent accounts. The attacks targeted Claude’s distinctive capabilities, including agentic reasoning, tool use, and coding. Anthropic plans to enhance its detection systems and to collaborate with industry and lawmakers to prevent further attacks.

The three AI firms accused by Anthropic, DeepSeek, Moonshot, and MiniMax, are all based in China and carry multi-billion-dollar valuations. Anthropic warns that distillation campaigns by foreign competitors pose geopolitical risks beyond the intellectual property implications: it fears authoritarian governments could use unprotected capabilities for offensive cyber operations, disinformation, and mass surveillance.

Anthropic vows to protect itself by improving detection systems, sharing threat intelligence, and tightening access controls. It also calls on industry and policymakers to help prevent foreign AI companies from attacking US firms, stressing that a coordinated response across the AI industry, cloud providers, and government is necessary to address distillation attacks effectively.

Read more at Cointelegraph: Anthropic Accuses Three Firms of Using Sophisticated Distillation Attacks