Cybercriminals are abusing LLMs to help them with hacking activities
(Image credit: NPowell/Flux)
  • New research shows AI tools are being used and abused by cybercriminals
  • Hackers are creating tools that exploit legitimate LLMs
  • Criminals are also training their own LLMs

It’s undeniable that AI is being used by both cybersecurity teams and cybercriminals, but new research from Cisco Talos reveals that criminals are getting creative. The latest development in the AI/cybersecurity landscape is that ‘uncensored’ LLMs, jailbroken LLMs, and cybercriminal-designed LLMs are being leveraged against targets.

It was recently revealed that both Grok and Mistral AI models were powering WormGPT variants that generated malicious code, crafted social engineering attacks, and even provided hacking tutorials, so this is clearly becoming a popular tactic.

LLMs are built with security features and guardrails to minimise bias, keep outputs consistent with human values and ethics, and ensure the chatbots don’t engage in harmful behaviour, such as ...
