
AI Code Hallucinations Increase the Risk of ‘Package Confusion’ Attacks


A new study found that AI-generated code frequently references software packages that don't exist, an opening attackers can exploit to trick programs into pulling in malicious code.


AI-generated computer code is rife with references to non-existent third-party libraries, creating a golden opportunity for supply-chain attacks that poison legitimate programs with malicious packages that can steal data, plant backdoors, and carry out other nefarious actions, newly published research shows.

The study, which used 16 of the most widely used large language models to generate 576,000 code samples, found that 440,000 of the package dependencies they contained were “hallucinated,” meaning they were non-existent. Open source models hallucinated the most, with 21 percent of the dependencies linking to non-existent libraries. A dependency is an essential code component that a separate piece of code requires to work properly. Dependencies save developers the hassle of rewriting code ...
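Because a hallucinated name only becomes dangerous once someone publishes a package under it, one practical precaution is to verify that every AI-suggested dependency actually resolves on the official registry before installing it. Below is a minimal sketch of such a check against PyPI's public JSON API, which returns HTTP 404 for unknown packages; the name "fastjson-utils" is a hypothetical example of a plausible-sounding hallucination, not a package from the study.

```python
# Minimal sketch: flag AI-suggested dependencies that do not exist on PyPI.
# Relies only on PyPI's public JSON API (404 means the package is unknown).
import urllib.error
import urllib.request


def exists_on_pypi(name: str) -> bool:
    """Return True if PyPI knows the package, False if the lookup 404s."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors (rate limits, outages) should not pass silently


# "fastjson-utils" is a hypothetical, plausible-sounding hallucinated name.
for pkg in ["requests", "fastjson-utils", "numpy"]:
    verdict = "exists" if exists_on_pypi(pkg) else "NOT on PyPI -- do not install blindly"
    print(f"{pkg}: {verdict}")
```

The limitation of this check is exactly what makes package-confusion attacks work: once an attacker registers the hallucinated name, the lookup succeeds, so an existence check only catches names no one has claimed yet. Vetting a package's author, download history, and source before installing remains necessary.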

