Anthropic will start training Claude on user data - but you don't have to share yours
zdnet.com
ZDNET's key takeaways
- Anthropic updated its AI training policy.
- Users can now opt in to having their chats used for training.
- This deviates from Anthropic's previous stance.
Anthropic has become a leading AI lab, and one of its biggest draws has been its strict stance on consumer data privacy. From the launch of its chatbot, Claude, Anthropic refused to use user data to train its models, deviating from common industry practice. That's now changing.
Users can now opt in to having their data used to further train Anthropic's models, the company said in a blog post updating its consumer terms and privacy policy. The collected data is meant to help improve the models, making them safer and more intelligent.
Copyright of this story solely belongs to zdnet.com.