
Security experts flag another worrying issue with Anthropic AI systems - here's what they found


  • Anthropic's MCP Inspector project carried a flaw that allowed miscreants to steal sensitive data and drop malware
  • To abuse it, hackers need to chain it with a decades-old browser bug
  • The flaw was fixed in mid-June 2025, but users should still be on their guard

The Anthropic Model Context Protocol (MCP) Inspector project carried a critical-severity vulnerability which could have allowed threat actors to mount remote code execution (RCE) attacks against host devices, experts have warned.

Best known for its Claude conversational AI model, Anthropic developed MCP, an open source standard that facilitates secure, two-way communication between AI systems and external data sources. It also built Inspector, a separate open source tool that allows developers to test and debug MCP servers.
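For context, an MCP server is the piece a developer would point Inspector at while debugging: it exposes tools and data that an AI client can call over the protocol. The sketch below is a minimal illustration only, not code from the report; it assumes the open source Python MCP SDK's FastMCP helper, and the server name and "add" tool are hypothetical.

    # Minimal sketch of an MCP server a developer might debug with Inspector.
    # Assumes the open source Python MCP SDK; the tool shown is illustrative.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo-server")

    @mcp.tool()
    def add(a: int, b: int) -> int:
        """Expose a simple tool that a connected AI client can invoke."""
        return a + b

    if __name__ == "__main__":
        # Run the server so a client (or Inspector, during testing) can connect.
        mcp.run()

During development, Inspector connects to a server like this so the developer can exercise its tools and inspect traffic, which is why a flaw in Inspector itself puts the host machine at risk.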

Now, researchers have reported that the flaw in Inspector could have been used to steal sensitive data, drop malware, and move laterally across target networks.
