Tech »  Topic »  Is your AI model secretly poisoned? 3 warning signs

Is your AI model secretly poisoned? 3 warning signs


Elyse Betters Picaro / ZDNET

Follow ZDNET: Add us as a preferred source on Google.

ZDNET's key takeaways

  • Model poisoning weaponizes AI via training data.
  • "Sleeper agent" threats can lie dormant until a trigger is activated. 
  • Behavioral signals can reveal that a model has been tampered with.

AI researchers have for years warned about model collapse, which is the degeneration of AI models after ingesting AI slop. The process effectively poisons a model with unverifiable information, but it's not to be confused with model poisoning, a serious security threat that Microsoft just published new research about. 

Also: More workers are using AI than ever - they're also trusting it less: Inside the frustration gap

While the stakes of model collapse are still significant -- reality and facts are worth preserving -- they pale in comparison to what model poisoning can lead to. Microsoft's new research cites three giveaways you can ...


Copyright of this story solely belongs to zdnet.com . To see the full text click HERE