AI Slopsquatting: How LLM Hallucinations Poison Your Code
AI slopsquatting is a malware trick that exploits large language model (LLM) hallucinations. Roughly 20% of AI-generated code includes hallucinated package names, and 58% of those names repeat across multiple runs. Attackers spot these hallucinations, create malicious packages with those exact names, and upload them to public repositories.


You’re rushing to finish a Python project, desperate to parse JSON faster. So you ask GitHub Copilot, and it confidently suggests “FastJsonPro”. Sounds legit, right? So you type pip install FastJsonPro and hit enter.
Moments later, your system’s infected. Your GitHub tokens are gone, your codebase is leaking to the dark web, and your company is facing a full-blown security incident.
This isn't a typo. It's AI slopsquatting, a malware trick that exploits large language model (LLM) hallucinations.
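One practical defense is to vet an unfamiliar package name against PyPI’s public metadata before installing it. The sketch below queries PyPI’s JSON endpoint (https://pypi.org/pypi/&lt;name&gt;/json); the package name FastJsonPro and the 90-day age threshold are illustrative assumptions, not signals endorsed by the original article.

```python
import sys
import json
from datetime import datetime, timezone
from urllib.error import HTTPError
from urllib.request import urlopen

# Public PyPI metadata endpoint for a single project.
PYPI_JSON = "https://pypi.org/pypi/{name}/json"

def vet_package(name: str, min_age_days: int = 90) -> None:
    """Print basic sanity signals for a package before running pip install."""
    try:
        with urlopen(PYPI_JSON.format(name=name)) as resp:
            meta = json.load(resp)
    except HTTPError as err:
        if err.code == 404:
            # No such project on PyPI -- a likely LLM hallucination (or a squat target).
            print(f"'{name}' does not exist on PyPI.")
            return
        raise

    releases = meta.get("releases", {})
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values()
        for f in files
    ]
    if not upload_times:
        print(f"'{name}' exists but has no uploaded files -- treat with suspicion.")
        return

    # A freshly registered name is one signal of a squatted hallucination.
    age_days = (datetime.now(timezone.utc) - min(upload_times)).days
    print(f"'{name}': first upload {age_days} days ago, {len(releases)} releases.")
    if age_days < min_age_days:
        print("Warning: very new package -- verify the project before installing.")

if __name__ == "__main__":
    # Hypothetical example name from the scenario above.
    vet_package(sys.argv[1] if len(sys.argv) > 1 else "FastJsonPro")
```

Run it with the suggested name before you install: a 404 or a days-old first upload is exactly the pattern slopsquatting relies on you not checking.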