AI is quietly poisoning itself and pushing models toward collapse - but there's a cure
zdnet.com
- When LLMs "learn" from other AIs' output, the result is GIGO.
- You will need to verify your data before you can trust your AI answers.
- This approach requires a dedicated effort across your company.
According to analyst firm Gartner, AI data is rapidly becoming a classic Garbage In/Garbage Out (GIGO) problem for users. That's because organizations' AI systems and large language models (LLMs) are flooded with unverified, AI-generated content that cannot be trusted.
Model collapse
You know this better as AI slop. While merely annoying to you and me, it's deadly to AI, because it poisons LLMs with fake data. The result is what AI circles call "model collapse." AI company Aquant describes the trend this way: "In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality."
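You can see that feedback loop in a toy form without any LLM at all. The sketch below is an illustrative assumption, not anything from the article or from Aquant: the "model" is just a Gaussian fitted to data, and each generation trains only on synthetic samples from the previous generation. Run it a few times and the fitted mean wanders while the spread tends to shrink, a small-scale version of the drift the article describes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "real" data drawn from the true distribution.
data = rng.normal(loc=0.0, scale=1.0, size=200)

for gen in range(21):
    # "Train" a trivial model on the current data: fit a Gaussian.
    mu, sigma = data.mean(), data.std()
    if gen % 5 == 0:
        print(f"generation {gen:2d}: mean={mu:+.3f}, std={sigma:.3f}")
    # The next generation learns only from this model's own outputs;
    # no verified, human-generated data is ever mixed back in, so
    # each round's estimation error compounds into the next.
    data = rng.normal(loc=mu, scale=sigma, size=200)
```

Because no fresh, verified data re-enters the loop, sampling error accumulates generation after generation. That's the mechanical case for the article's prescription: keep feeding your models human-verified data rather than recycled AI output.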