Malicious AI Tools See 200% Surge as ChatGPT Jailbreak Discussions Rise by 52%
The cybersecurity landscape in 2024 witnessed a significant escalation in AI-related threats, with malicious actors increasingly targeting and exploiting large language models (LLMs).
According to KELA’s annual “State of Cybercrime” report, discussions about exploiting popular LLMs such as ChatGPT, Copilot, and Gemini surged by 94% compared to the previous year.
Jailbreaking Techniques Proliferate on Underground Forums
Cybercriminals have been actively sharing and developing new jailbreaking techniques on underground forums, with dedicated sections emerging on platforms like HackForums and XSS.
These techniques aim to bypass the built-in safety limitations of LLMs, enabling the creation of malicious content such as phishing emails and malware code.
One of the most effective jailbreaking methods identified by KELA was word transformation, which bypassed safety controls in 27% of tests.
The technique replaces sensitive words with synonyms or splits them into substrings, so that keyword-based detection fails to flag them. A minimal sketch of the idea is shown below.
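The following Python sketch illustrates the word-transformation concept from a defender's perspective, as a way to red-team one's own keyword filter. The blocklist, synonym map, and split points are illustrative assumptions, not details from KELA's report.

```python
# Illustrative sketch of the word-transformation obfuscation KELA describes,
# framed as a red-team check against a naive keyword-based filter.
# SYNONYMS, BLOCKLIST, and the split heuristic are hypothetical examples.

SYNONYMS = {"attack": "strike"}      # hypothetical synonym substitution
BLOCKLIST = {"attack", "exploit"}    # hypothetical filter terms

def split_word(word: str) -> str:
    """Split a word into two substrings joined by a space."""
    mid = len(word) // 2
    return f"{word[:mid]} {word[mid:]}"

def transform(text: str) -> str:
    """Apply synonym swaps, then substring splits, to flagged words."""
    out = []
    for token in text.split():
        if token.lower() in SYNONYMS:
            out.append(SYNONYMS[token.lower()])
        elif token.lower() in BLOCKLIST:
            out.append(split_word(token))
        else:
            out.append(token)
    return " ".join(out)

def naive_filter(text: str) -> bool:
    """Return True if any blocklisted word appears verbatim."""
    return any(t.lower() in BLOCKLIST for t in text.split())

if __name__ == "__main__":
    prompt = "describe the exploit attack"
    print(naive_filter(prompt))             # True: caught verbatim
    print(naive_filter(transform(prompt)))  # False: transformed tokens slip past
```

The toy filter fails on the transformed text because it matches whole tokens verbatim, which is why such surface-level transformations are effective against simple keyword screening.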
Massive Increase in Compromised LLM Accounts
The report revealed a ...
Copyright of this story solely belongs to gbhackers.