
NIST releases finalized guidelines on protecting AI from attacks


The final guidance for defending against adversarial machine learning offers specific mitigations for different classes of attack, but warns that current defenses are still developing.

The National Institute of Standards and Technology on Monday released the final version of its guide to defending artificial intelligence systems against cyberattacks, featuring updated definitions of attack and mitigation terms as well as recent developments in threat mitigation methods.

By differentiating adversarial machine learning attacks on predictive AI systems from those on generative AI systems, the report brings standardization to the emerging adversarial machine learning threat landscape.

“AI is useful but vulnerable to adversarial attacks. All models are vulnerable in all stages of their development, deployment, and use,” NIST’s Apostol Vassilev, a research team supervisor and one of the authors of the adversarial machine learning publication, told Nextgov/FCW. “At this stage with the existing technology paradigms, the number and power of attacks are greater than the available mitigation techniques.” 
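The vulnerability Vassilev describes can be made concrete with a small sketch of one attack class the adversarial machine learning literature covers: an evasion attack in the style of the fast gradient sign method (FGSM), here run against a toy logistic-regression classifier. The model weights and inputs below are invented for illustration and are not drawn from the NIST report.

```python
# Illustrative FGSM-style evasion attack on a toy logistic-regression model.
# All weights and data here are hypothetical, chosen only to show the mechanic.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "trained" model: fixed weight vector w and bias b (assumed values).
w = np.array([2.0, -1.0])
b = 0.1

def predict(x):
    # Probability that input x belongs to class 1.
    return sigmoid(w @ x + b)

def fgsm_perturb(x, y, eps):
    """Nudge x a step of size eps in the sign of the loss gradient,
    pushing the model's prediction away from the true label y."""
    p = predict(x)
    # Gradient of the binary cross-entropy loss with respect to x.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.5])  # clean input, true label 1
y = 1.0
x_adv = fgsm_perturb(x, y, eps=0.8)

print(predict(x))      # high confidence in class 1 on the clean input
print(predict(x_adv))  # the small perturbation flips the decision
```

The point of the sketch is the asymmetry the report emphasizes: the attacker needs only the gradient direction to degrade the model, while robust defenses against such perturbations remain an open research problem.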

Some of the ...


Copyright of this story solely belongs to nextgov.com.