Hackers Exploit SSRF Flaw in Custom GPTs to Steal ChatGPT Secrets
A cybersecurity researcher has uncovered a server-side request forgery (SSRF) vulnerability in OpenAI’s ChatGPT.
The flaw, hidden in the Custom GPTs feature, allowed attackers to potentially access sensitive cloud infrastructure secrets, including Azure management API tokens.
Disclosed through OpenAI’s bug bounty program, the issue was swiftly patched, but it underscores the persistent dangers of SSRF in cloud-based AI services.

While building a custom GPT (a feature of the paid ChatGPT Plus tier for creating tailored AI assistants), the researcher noticed the “Actions” section.
This feature lets users define external APIs via OpenAPI schemas, enabling the GPT to fetch data from user-specified URLs and incorporate it into responses. Examples include querying weather APIs for location-based info.
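Schemas of this kind are ordinarily written in OpenAPI format. The sketch below shows what such a definition might look like; the weather service, URL, and operation names are illustrative placeholders, not taken from the article:

```yaml
# Hypothetical Actions schema: a minimal OpenAPI 3 definition of the sort a
# user could paste into a Custom GPT's "Actions" section. The key point for
# SSRF is the `servers.url` field, which the user fully controls.
openapi: 3.1.0
info:
  title: Weather lookup
  version: "1.0"
servers:
  - url: https://api.weather.example   # user-supplied; any URL can go here
paths:
  /forecast:
    get:
      operationId: getForecast
      summary: Fetch a forecast for a given city
      parameters:
        - name: city
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Forecast data
```

Because the GPT backend issues the HTTP request itself, whatever address appears in `servers.url` is fetched from OpenAI’s infrastructure, not the user’s browser.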
However, the ability to provide arbitrary URLs triggered the researcher’s “hacker instinct,” prompting a probe for an SSRF vulnerability.
SSRF occurs when an application unwittingly forwards user-supplied requests to unintended destinations, often internal services or cloud metadata endpoints that are not reachable from the public internet.
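The standard defense is to validate a user-supplied URL before the server fetches it, rejecting destinations that resolve to internal or cloud-metadata addresses. The following is a minimal sketch of such a guard, not OpenAI’s actual fix; the function name and blocklist are assumptions for illustration:

```python
import ipaddress
from urllib.parse import urlparse

# Hypothetical SSRF guard: the kind of check a URL-fetching feature like
# Custom GPT "Actions" needs before requesting a user-supplied URL.
METADATA_HOSTS = {"metadata.google.internal", "169.254.169.254"}

def is_url_allowed(url: str) -> bool:
    """Reject URLs pointing at internal or cloud-metadata destinations."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False  # blocks file://, gopher://, etc.
    host = parsed.hostname
    if host is None or host.lower() in METADATA_HOSTS:
        return False
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # Hostname, not an IP literal; a production guard would also
        # resolve it and re-check the resulting address.
        return True
    # Block loopback, private, and link-local ranges
    # (link-local includes the 169.254.169.254 metadata endpoint).
    return not (ip.is_private or ip.is_loopback or ip.is_link_local)
```

The link-local check matters here because cloud providers, Azure included, expose instance-identity tokens at `http://169.254.169.254`, which is exactly the class of secret the researcher’s probe targeted.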
Copyright of this story solely belongs to gbhackers.

