Autonomous cars, drones cheerfully obey prompt injection by road sign
theregister.co.ukIndirect prompt injection occurs when a bot takes input data and interprets it as a command. We've seen this problem numerous times when AI bots were fed prompts via web pages or PDFs they read. Now, academics have shown that self-driving cars and autonomous drones will follow illicit instructions that have been written onto road signs.
In a new class of attack on AI systems, troublemakers can carry out these environmental indirect prompt injection attacks to hijack decision-making processes.
Potential consequences include self-driving cars proceeding through crosswalks, even if a person was crossing, or tricking drones that are programmed to follow police cars into following a different vehicle entirely.
The researchers at the University of California, Santa Cruz, and Johns Hopkins showed that, in simulated trials, AI systems and the large vision language models (LVLMs) underpinning them would reliably follow instructions if displayed on signs held up in their ...
Copyright of this story solely belongs to theregister.co.uk . To see the full text click HERE

