Anthropic warns that its Claude AI is being 'weaponized' by hackers to write malicious code
techradar.com
- Anthropic's Threat Intelligence Report outlines the acceleration of AI attacks
- AI is now fueling all parts of the cyberattack process
- One such attack has been identified as 'vibe hacking'
One of the world’s largest AI companies, Anthropic, has warned that its chatbot has been ‘weaponized’ by threat actors “to commit large-scale theft and extortion of personal data”. Anthropic’s Threat Intelligence Report details the ways in which the technology is being used to carry out sophisticated cyberattacks.
Weaponized AI is making hackers faster, more aggressive, and more successful - and the report outlines how ransomware attacks that previously would have required years of training can now be crafted with very few technical skills.
These cyberattacks are lucrative for hackers, with AI now being used for fraudulent activity such as stealing credit card information and identity theft, and attackers even using AI to analyze stolen ...
Copyright of this story solely belongs to techradar.com.