Malicious LLMs are letting even unskilled hackers craft dangerous new malware
techradar.com
- Hackers use untethered LLMs such as WormGPT 4 and KawaiiGPT for cybercrime
- WormGPT 4 enables encryptors, exfiltration tools, and ransom notes; KawaiiGPT crafts phishing scripts
- Both models have hundreds of Telegram subscribers, lowering cybercrime entry barriers
Most generative AI tools in use today are restricted - for example, they are not allowed to teach people how to make bombs or how to commit suicide - and they are also not allowed to facilitate cybercrime.
While some hackers try to “jailbreak” these tools by working around their guardrails with clever prompts, others simply build their own, completely untethered Large Language Models (LLMs), to be used exclusively for cybercrime.
Cybersecurity researchers from Palo Alto Networks’ Unit42 have analyzed two such models to see how capable they are, and to better understand the tools at every cybercriminal’s disposal. The conclusion is that some of the tools are quite powerful, allowing ...