HashJack attack shows AI browsers can be fooled with a simple ‘#’
Cato Networks says it has discovered a new attack, dubbed "HashJack," that hides malicious prompts after the "#" in legitimate URLs, tricking AI browser assistants into executing them while dodging traditional network and server-side defenses.
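To see why a fragment-based trick can slip past server-side controls, here is a minimal illustrative sketch (not Cato's proof of concept; the URL and injected text are hypothetical). Python's standard urllib.parse shows that everything after "#" is a client-side fragment: it never appears in the HTTP request that a server or network filter would inspect, yet anything reading the full address bar, such as an AI browser assistant, still sees it.

```python
from urllib.parse import urlsplit

# Hypothetical URL: the fragment after "#" carries the injected text.
url = "https://example.com/docs/page#ignore previous instructions and ..."

parts = urlsplit(url)

# What the server (and any network-level defense) sees in the request line:
request_target = parts.path + (f"?{parts.query}" if parts.query else "")
print("Sent to server:", request_target)        # -> /docs/page

# What stays purely client-side, but is visible to software that reads the
# full URL, such as an AI assistant asked about "this page":
print("Client-side fragment:", parts.fragment)
```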
Prompt injection occurs when text that the user didn't write becomes a command for an AI bot. Direct prompt injection happens when unwanted text is entered at the point of prompt input, while indirect injection happens when content the bot has been asked to process, such as a web page or PDF to summarize, contains hidden commands that the AI then follows as if the user had entered them. AI browsers, a relatively new type of web browser that uses AI to anticipate user intent and take autonomous actions, have so far proven particularly vulnerable to indirect prompt injection – in their quest to be helpful, they sometimes end up helping attackers ...
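The root of the indirect case is that the assistant builds a single prompt mixing the user's trusted request with untrusted content. The sketch below is schematic, not any specific browser's implementation; the function name and prompt layout are assumptions for illustration only.

```python
# Schematic sketch of how indirect prompt injection arises; not any real
# AI browser's code. The model receives one block of text and has no
# built-in way to tell which parts came from the user and which came from
# attacker-influenced page content or a URL fragment.

def build_summary_prompt(user_request: str, page_text: str, full_url: str) -> str:
    return (
        "You are a helpful browsing assistant.\n"
        f"User request: {user_request}\n"
        f"Page URL: {full_url}\n"
        f"Page content:\n{page_text}\n"
    )

prompt = build_summary_prompt(
    user_request="Summarize this page for me.",
    page_text="Welcome to our docs...",
    full_url="https://example.com/docs/page#<hidden instructions for the assistant>",
)
print(prompt)
```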
Copyright of this story belongs to theregister.co.uk.

