Tech »  Topic »  AI Security Risks in AI-Assisted Development

AI Security Risks in AI-Assisted Development


Snyk's Sonya Moisset on Prompt Injection, MCP Abuse and Agentic AI Threats Tom Field (SecurityEditor) • February 27, 2026

Artificial intelligence-assisted development has transformed how teams build software, but it also has introduced systemic vulnerabilities that expand enterprise risk. Security leaders now face threats that target AI coding tools, agent frameworks and the protocols that connect them leading to what Snyk's Sonya Moisset calls an "AI disaster."

See Also: Why HSMs Are Critical to Digital Asset Security

Security teams uncovered more than 30 distinct flaws across major AI coding environments. Attackers exploit prompt injection, malicious MCP servers and over privileged credentials. These tactics allow data exfiltration, remote code execution and autonomous attack sequences. Adversaries are now using AI agents to automate attacks.

"This report from last year from Anthropic on Claude code showed that weaponization of agentic AI can be manipulated into acting as an autonomous attacker - not just ...


Copyright of this story solely belongs to bankinfosecurity . To see the full text click HERE