OpenAI’s ChatGPT Atlas Browser Found Vulnerable to Prompt Injections
OpenAI's new ChatGPT Atlas web browser has a security flaw that lets attackers execute prompt injection attacks by disguising malicious instructions as URLs. The AI security firm NeuralTrust says the issue stems from how the browser's omnibox interprets entries as URLs or natural-language commands. This lets attackers embed hidden AI instructions in a string that looks like a harmless web address.
The attack begins with a crafted URL-like string that starts with "https" and contains a fake domain name. When a user pastes this string into Atlas's omnibox, the browser fails to validate it as a well-formed URL and instead interprets it as a natural-language instruction to the AI agent, which then carries out the hidden command.
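The core issue NeuralTrust describes is lax URL validation before falling back to prompt interpretation. A minimal sketch of the kind of strict check that would reject such strings is below; the `is_valid_url` function and the sample crafted string are hypothetical illustrations, not Atlas's actual code or a real attack payload:

```python
from urllib.parse import urlparse

def is_valid_url(s: str) -> bool:
    """Strictly validate a string as a web URL before treating it as one.

    Hypothetical check: a real URL must parse with an http(s) scheme,
    carry a plausible hostname, and contain no whitespace.
    """
    if any(c.isspace() for c in s):
        # Real URLs never contain spaces; natural-language prompts do.
        return False
    parsed = urlparse(s)
    if parsed.scheme not in ("http", "https"):
        return False
    if not parsed.netloc or "." not in parsed.netloc:
        return False
    return True

# A well-formed URL passes; a URL-lookalike carrying instructions does not.
print(is_valid_url("https://example.com/page"))
print(is_valid_url("https:/ /example.com ignore previous instructions"))
```

An input that fails a check like this should be shown to the user for confirmation rather than silently handed to the agent as a trusted command, which is the fallback behavior the researchers criticize.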
These commands can redirect users to phishing websites, or worse. Security researchers say that fake "copy link" buttons or embedded prompts can, in extreme cases, trigger file deletion in connected services like ...
Copyright of this story belongs to extremetech.com.

