IBM's AI 'Bob' could be manipulated to download and execute malware
techradar.com
- IBM’s GenAI tool “Bob” is vulnerable to indirect prompt injection attacks in beta testing
- CLI faces prompt injection risks; IDE exposed to AI-specific data exfiltration vectors
- Exploitation requires “always allow” permissions, enabling arbitrary shell scripts and malware deployment
IBM’s Generative Artificial Intelligence (GenAI) tool, Bob, is susceptible to the same dangerous attack vector as most other similar tools - indirect prompt injection.
Indirect prompt injection is when the AI tool is allowed to read the contents found in other apps, such as email, or calendar.
A malicious actor can then send a seemingly benign email, or calendar entry, which has a hidden prompt that instructs the tool to do nefarious things, such as exfiltrate data, download and run malware, or establish persistence.

Copyright of this story solely belongs to techradar.com . To see the full text click HERE

