Tech

Anthropic exposes how Chinese AI firms try to steal LLM tech

By Scoopico
Published: February 23, 2026 | Last updated: February 23, 2026, 11:27 p.m.


Anthropic is accusing three Chinese artificial intelligence companies of “industrial-scale campaigns” to “illicitly extract” its technology through distillation attacks. The company says the firms created roughly 24,000 fraudulent accounts to conceal their efforts.

In a blog post detailing the attacks, Anthropic named the three firms, among them DeepSeek, maker of the popular DeepSeek AI models, and explicitly framed the activity as a national security issue.

“We have identified industrial-scale campaigns by three AI laboratories—DeepSeek, Moonshot, and MiniMax—to illicitly extract Claude’s capabilities to improve their own models,” reads the blog post. “These labs generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions.”



In January, OpenAI also accused DeepSeek of engaging in distillation attacks, effectively stealing its technology.

At the time, many people reacted not with sympathy but with mockery, since OpenAI and other AI companies have claimed an absolute right to train their models on copyrighted works without permission or payment. AI industry supporters typically argue they have no choice but to train on copyrighted works because Chinese competitors are sure to ignore copyright law anyway.

“You can’t be expected to have a successful AI program when every single article, book, or anything else that you’ve read or studied, you’re supposed to pay for,” President Donald Trump said at an AI event in July 2025. “When a person reads a book or an article, you’ve gained great knowledge. That does not mean that you’re violating copyright laws or have to make deals with every content provider.” He also added, “China’s not doing it.”


That puts AI companies in the awkward position of claiming their intellectual property is off-limits for model training, while also engaging in similar behavior themselves.


What are distillation attacks?

Distillation is a common training technique for large language models, but it can also be used to reverse-engineer aspects of a rival’s technology. In distillation, AI researchers run variations of the same prompt repeatedly, record how a particular model responds, and then train their own model to reproduce those responses.

“Distillation is a widely used and legitimate training method,” Anthropic’s post explains. “For example, frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers. But distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”
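The core of the technique can be sketched in a few lines. This is a generic, hypothetical illustration of the distillation objective (all logits and numbers are invented for the example), not a reconstruction of any lab’s actual pipeline: a smaller “student” model is trained to minimize the gap between its output distribution and the one harvested from a larger “teacher.”

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; temperature > 1 softens the distribution."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): how far the student's distribution q is from the teacher's p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical per-token logits over a tiny 3-token vocabulary.
teacher_logits = [4.0, 1.0, 0.5]   # harvested by querying the larger "teacher" model
student_logits = [3.0, 1.5, 0.2]   # produced by the smaller "student" model

T = 2.0  # a higher temperature exposes more of the teacher's relative preferences
teacher_probs = softmax(teacher_logits, T)
student_probs = softmax(student_logits, T)

# The distillation loss the student is trained to drive toward zero.
loss = kl_divergence(teacher_probs, student_probs)
print(f"distillation loss: {loss:.4f}")
```

In practice the teacher’s outputs come from millions of harvested exchanges, like the 16 million Anthropic describes, and the loss is minimized by gradient descent over the student’s weights; the sketch shows only the objective being optimized.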

Chinese companies have a reputation for flagrantly ignoring intellectual property treaties and copyright laws, and reverse-engineering technology from Western companies. However, while Anthropic says the distillation attacks it uncovered violated its terms of service, it’s not clear that they violated any international laws, or what remedy Anthropic has besides suspending the violating accounts.

To prevent attacks like this, Anthropic called for cooperation between AI companies, government agencies, and other stakeholders.

AI companies like Anthropic, xAI, Meta, and OpenAI are in the midst of one of the largest spending booms ever seen, pouring tens of billions of dollars into AI infrastructure, data centers, and research and development. If foreign competitors can cheaply recreate that LLM technology through distillation, they would gain a clear advantage over their U.S. counterparts.

“These campaigns are growing in intensity and sophistication,” the blog post reads. “The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community.”

Mashable reached out to Anthropic with questions about the distillation attacks, and we’ll update this article if we receive a response.


Disclosure: Ziff Davis, Mashable’s parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.


2025 Copyright © Scoopico. All rights reserved