
AI Slopsquatting: How LLM Hallucinations Poison Your Code


by cybersafe, July 7th, 2025

AI slopsquatting is a malware trick that exploits large language model (LLM) hallucinations. In one study, about 20% of AI-generated code samples referenced hallucinated packages, and 58% of those hallucinated names repeated across multiple runs. Attackers spot these hallucinated names, register malicious packages under those exact names, and upload them to public repositories.
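Before trusting an AI-suggested package name, a quick sanity check against the package index catches many hallucinations outright. Here is a minimal sketch (not from the article) using Python's standard library and PyPI's public JSON API; the function name and the check itself are illustrative assumptions:

```python
import json
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is registered on PyPI, False if the index returns 404."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            json.load(resp)          # metadata parsed -> the package exists
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:          # unregistered name, likely a hallucination
            return False
        raise                        # any other HTTP error: don't guess

# Example: a name an assistant might suggest
if not package_exists_on_pypi("FastJsonPro"):
    print("No such package on PyPI -- likely a hallucination, do not install.")
```

Note that existence alone proves little: in a slopsquatting attack the malicious name does exist, because the attacker registered it first. The check only filters out names nobody has claimed yet.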

You’re rushing to finish a Python project, desperate to parse JSON faster. So you ask GitHub Copilot, and it confidently suggests “FastJsonPro”. Sounds legit, right? So you type pip install FastJsonPro and hit enter.

Moments later, your system’s infected. Your GitHub tokens are gone, your codebase is leaking to the dark web, and your company is facing a $4.9M breach.

This isn't a typo. It's AI slopsquatting, a malware trick that exploits large language model (LLM) hallucinations. One study found 205,474 hallucinated package names across 16 LLMs, setting a massive trap for coders ...
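Because slopsquatters register the hallucinated names themselves, a useful (if crude) follow-up heuristic is to look at how long a package has been on the index and how many releases it carries. Below is a hedged sketch, again against PyPI's JSON API; the thresholds and function name are arbitrary assumptions for illustration, not guidance from the article:

```python
import json
import urllib.request
from datetime import datetime, timezone

def looks_suspicious(name: str, min_age_days: int = 90, min_releases: int = 3) -> bool:
    """Flag packages that are very new or have almost no release history."""
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)

    # Collect upload timestamps for every file of every release
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:                      # no files ever uploaded: treat as suspect
        return True

    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    return age_days < min_age_days or len(data["releases"]) < min_releases

# Example usage: a long-established project should not trip the heuristic
print(looks_suspicious("requests"))      # expected: False
```

A freshly registered package with a single release is not proof of malice, but combined with the existence check above it narrows the window that a slopsquatted name can slip through unnoticed.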


Copyright of this story solely belongs to hackernoon.com. To see the full text, click HERE.