PromptPwnd Vulnerability Exposes AI driven build systems to Data Theft
hackread.comAikido Security exposes a new AI prompt injection flaw in GitHub/GitLab pipelines, letting attackers steal secrets. Major companies affected.
Researchers at the software security company Aikido Security have reported a new type of vulnerability that could compromise how major firms build their software. They’ve named this issue PromptPwnd, and it centres on a specific type of attack called prompt injection, where AI agents like Gemini, Claude Code, and OpenAI Codex are being used inside automated systems like GitHub Actions and GitLab CI/CD.
Why AI Automation is Suddenly Risky
For your information, these automated CI/CD pipelines use AI to speed up tasks like managing bug reports. The flaw begins when AI agents receive outside text (like a bug report title), allowing an attacker to slip secret instructions into the prompt. This technique, called prompt injection, confuses the AI agent, causing it to mistake the attacker’s text ...
Copyright of this story solely belongs to hackread.com . To see the full text click HERE

