Attackers Exploit LLM Guardrails to Breach Enterprise APIs
bankinfosecurityMenlo's Farassat and Google's Lees on How Attackers Are Bypassing Legacy Firewalls Tom Field (SecurityEditor) • January 29, 2026

Attackers have shifted from classic exploits to using semantic attacks against artificial intelligence platforms to evade traditional firewalls. Exploits of large language models are weakening legacy network and application defenses built for signatures and static rules - forcing security teams to rethink protection.
"We're seeing attackers try to use prompt injection to jailbreak the LLMs," said Ramin Farassat, chief product officer at Menlo Security. "They're trying to trick the LLMs to bypass their safety filters so they can extract and leak sensitive data and execute unauthorized API commands."
Traditional firewalls were built to act as gatekeepers, checking credentials at the perimeter and allowing traffic to pass once it appeared legitimate, but that approach is ineffective in AI environments, where threats arrive through natural language rather than recognizable technical signatures ...
Copyright of this story solely belongs to bankinfosecurity . To see the full text click HERE

