Tech »  Topic »  IBM's AI 'Bob' could be manipulated to download and execute malware

IBM's AI 'Bob' could be manipulated to download and execute malware


(Image credit: Shutterstock / NicoElNino)
  • IBM’s GenAI tool “Bob” is vulnerable to indirect prompt injection attacks in beta testing
  • CLI faces prompt injection risks; IDE exposed to AI-specific data exfiltration vectors
  • Exploitation requires “always allow” permissions, enabling arbitrary shell scripts and malware deployment

IBM’s Generative Artificial Intelligence (GenAI) tool, Bob, is susceptible to the same dangerous attack vector as most other similar tools - indirect prompt injection.

Indirect prompt injection is when the AI tool is allowed to read the contents found in other apps, such as email, or calendar.

A malicious actor can then send a seemingly benign email, or calendar entry, which has a hidden prompt that instructs the tool to do nefarious things, such as exfiltrate data, download and run malware, or establish persistence.

This 'ZombieAgent' zero click vulnerability allows for silent account takeover - here's what we knowResearchers claim ChatGPT has a whole host of ...
Copyright of this story solely belongs to techradar.com . To see the full text click HERE