GitHub Copilot Chat turns blabbermouth with crafty prompt injection attack
theregister.co.ukGitHub's Copilot Chat, the chatbot meant to help developers code faster, could be helping attackers to steal code instead.
Researcher Omer Mayraz of Legit Security disclosed a critical vulnerability, dubbed CamoLeak, that could be used to trick Copilot Chat into exfiltrating secrets, private source code, and even descriptions of unpublished vulnerabilities from repositories. The flaw was scored 9.6 on the CVSS scale in the disclosure.
The root cause is simple. Copilot Chat runs with the permissions of the signed-in user and ingests contextual text that humans might not see. Mayraz demonstrated how an attacker can hide malicious prompts in GitHub's "invisible" markdown comments inside pull requests or issues – content that doesn't render in the standard web UI but is still parsed by the chatbot. When a maintainer later asks Copilot to review or summarize the change, the assistant can obediently follow the buried instructions, searching the ...
Copyright of this story solely belongs to theregister.co.uk . To see the full text click HERE