Just like phishing for gullible humans, prompt injecting AIs is here to stay
theregister.co.ukAren't we all just prompting tokens of linguistic meaning and hoping the other person isn't bullshitting us?
kettle It's a week of the year, which means there's been the discovery of yet another prompt injection attack that will force supposedly well-guarded AI bots to spill secrets by asking the right way.
When you think about it, humans and LLMs share a similar problem: They're both liable to hand over sensitive information when a crafty enough person asks the right - or wrong - way. We call it phishing when it targets humans, and prompt injection is pretty much the same thing for bots. It's basically embedding or hiding malicious instructions inside a document or file that you tell the AI to ingest and analyze; the AI, instead of treating them like part of the content, executes them.
There's a lot to discuss about prompt injection ...
Copyright of this story solely belongs to theregister.co.uk . To see the full text click HERE

