Anthropic finds alarming 'emerging trends' in Claude misuse report
zdnet.com
On Wednesday, Anthropic released a report detailing how Claude was misused in March. It revealed some surprising and novel trends in how threat actors' abuse of chatbots is evolving, and in the growing risks that generative AI poses even with proper safety testing.
Security concerns
In one case, Anthropic found that a "sophisticated actor" had used Claude to help scrape leaked credentials "associated with security cameras" to access the devices, the company noted in the announcement.
In another case, an individual with "limited technical skills" was able to develop malware that would normally require far more expertise. Claude helped this individual take an open-source kit from performing only basic functions to more advanced capabilities, such as facial recognition and the ability to scan the dark web.
Anthropic's report suggested this case shows how generative AI can effectively ...