
Anthropic Claims Chinese AI Firms ‘Distilled’ Claude to Train Their Models


Anthropic claims Chinese AI firms distilled Claude to train rival AI models, raising concerns about model extraction, security risks, and AI distillation abuse.

In AI, distillation refers to training a new AI model by learning from the outputs of an existing model instead of using original training data.

Questions about how AI models can be copied and replicated are moving from theory into active security debates after Anthropic, the developer of the Claude AI chatbot, accused several companies of attempting to extract knowledge from the Claude language model. In a recent blog post, the company said it detected coordinated activity aimed at using Claude outputs to train competing systems, a practice known as model distillation.

Anthropic describes distillation as a widely used training technique where a large model acts as a teacher for smaller models. The method can reduce costs and speed up development by allowing developers to learn from ...
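The teacher–student setup described above is commonly implemented as a soft-label objective: the student is trained to match the teacher's output distribution rather than hard ground-truth labels. A minimal NumPy sketch of that loss (an illustration of the general technique, not Anthropic's or any accused firm's actual pipeline; the temperature value and logits below are made up):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, softened by a temperature."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the softened teacher distribution to the
    student distribution -- the standard knowledge-distillation loss.
    The T^2 factor rescales gradients to match a hard-label loss."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)
    return float(np.mean(kl)) * temperature ** 2

# Hypothetical teacher outputs for one example over three classes.
teacher = np.array([[4.0, 1.0, 0.5]])

# A student that already matches the teacher incurs zero loss;
# a student with a divergent distribution is penalized.
matched = distillation_loss(np.array([[4.0, 1.0, 0.5]]), teacher)
mismatched = distillation_loss(np.array([[0.5, 1.0, 4.0]]), teacher)
```

In the scenario Anthropic describes, the "teacher logits" would effectively be Claude's outputs harvested through its API, which is why providers treat large-scale output collection as a model-extraction risk rather than ordinary usage.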


Copyright of this story solely belongs to hackread.com. To see the full text click HERE