Malicious AI Models Are Behind a New Wave of Cybercrime, Cisco Talos Warns
Cybercriminals use malicious AI models to write malware and phishing scams; Cisco Talos warns of rising threats from uncensored and custom AI tools.
New research from Cisco Talos reveals a rise in cybercriminals abusing Large Language Models (LLMs) to enhance their illicit activities. These powerful AI tools, known for generating text, solving problems, and writing code, are reportedly being manipulated to launch more sophisticated and widespread attacks.
For your information, LLMs are designed with built-in safety features, including alignment (training that steers the model toward safe, unbiased outputs) and guardrails (real-time mechanisms that block harmful outputs). For instance, a legitimate LLM like ChatGPT would refuse to generate a phishing email. However, cybercriminals are actively seeking ways around these protections.
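To illustrate the guardrail concept, the minimal sketch below wraps an arbitrary text generator with a post-generation filter that refuses output matching simple harmful-content patterns. This is a hypothetical example for explanation only; the function names and patterns are assumptions, and real guardrails rely on trained classifiers and policy engines rather than keyword lists.

```python
import re

# Hypothetical denylist used only to illustrate the idea of an output guardrail.
BLOCKED_PATTERNS = [
    r"\bphishing email\b",
    r"\bransomware\b",
    r"\bkeylogger\b",
]

def guarded_reply(generate, prompt: str) -> str:
    """Call a text generator, then refuse if the draft looks harmful."""
    draft = generate(prompt)
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, draft, flags=re.IGNORECASE):
            return "I can't help with that request."
    return draft

if __name__ == "__main__":
    # Stand-in generator; in practice this would be a call to an LLM.
    fake_llm = lambda p: "Here is a phishing email template..."
    print(guarded_reply(fake_llm, "Write a phishing email"))  # prints a refusal
```

Uncensored or custom-built criminal models simply omit this kind of refusal layer, which is what makes them attractive to attackers.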
Talos’s investigation, shared with Hackread.com, highlights three primary methods used by adversaries:
Uncensored LLMs: These models, lacking safety constraints, readily produce sensitive or harmful content. Examples include OnionGPT and WhiteRabbitNeo, which can generate offensive ...