Tech »  Topic »  OpenAI putting bandaids on bandaids as prompt injection problems keep festering

OpenAI putting bandaids on bandaids as prompt injection problems keep festering


Security researchers at Radware say they've identified several vulnerabilities in OpenAI's ChatGPT service that allow the exfiltration of personal information.

The flaws, identified in a bug report filed on September 26, 2025, were reportedly fixed on December 16.

Or rather fixed again, as OpenAI patched a related vulnerability on September 3 called ShadowLeak, which it disclosed on September 18.

ShadowLeak is an indirect prompt injection attack that relies on AI models' inability to distinguish between system instructions and untrusted content. That blind spot creates security problems because it means miscreants can ask models to summarize content that contains text directing the software to take malicious action – and the AI will often carry out those instructions.

ShadowLeak is a flaw in the Deep Research component of ChatGPT. The vulnerability made ChatGPT susceptible to malicious prompts in content stored in systems linked to ChatGPT, such as Gmail, Outlook, Google Drive ...


Copyright of this story solely belongs to theregister.co.uk . To see the full text click HERE