AI chatbot users beware - hackers are now hiding malware in the images served up by LLMs
techradar.com
- Malicious prompts remain invisible until image downscaling reveals hidden instructions
- The attack works by exploiting how AI resamples uploaded images
- Bicubic interpolation can expose black text from specially crafted images
As AI tools become more integrated into daily work, the security risks attached to them are also evolving in new directions.
Researchers at Trail of Bits have demonstrated a method where malicious prompts are hidden inside images and then revealed during processing by large language models.
The technique takes advantage of how AI platforms downscale uploaded images for efficiency: patterns that are invisible at the original resolution become legible text once the resampling algorithm shrinks the image.
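The principle can be illustrated with a toy sketch. This is not the Trail of Bits exploit itself, and it substitutes plain stride-based decimation for the bicubic resampling the researchers targeted, but it shows the core idea: a pattern that averages out to an innocuous gray at full resolution can collapse to a hidden dark message when a downscaler samples only one phase of the pixel grid. All function names here are illustrative.

```python
import numpy as np

def make_cover(size=8, hidden_value=0, filler=255):
    """Build a crafted grayscale block: dark pixels sit on the even
    lattice, bright filler everywhere else. Viewed whole, it reads as
    light gray."""
    img = np.full((size, size), filler, dtype=np.uint8)
    img[::2, ::2] = hidden_value  # hidden phase on even coordinates
    return img

def naive_downscale(img):
    """2x decimation: keep every other pixel. Real resamplers with
    small kernels can alias in a comparable way at certain scales."""
    return img[::2, ::2]

img = make_cover()
print(round(img.mean(), 2))   # 191.25 -> looks light gray overall
print(naive_downscale(img))   # all zeros -> only the hidden phase survives
```

A real attack tunes the pattern to the specific interpolation kernel (e.g. bicubic) and scale factor used by the target platform, so that the downscaled image renders readable prompt text the model then ingests.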


Hidden instructions in downscaled images
The idea builds on a 2020 ...
Copyright of this story solely belongs to techradar.com.