Weaponized AI risk is 'high,' warns OpenAI - here's the plan to stop it
OpenAI is focused on assessing when AI models are sufficiently capable to either help or hinder defenders, and on safeguarding its own models against cybercriminal abuse.

ZDNET's key takeaways
- OpenAI launched initiatives to safeguard AI models from abuse.
- AI cyber capabilities, as measured by capture-the-flag challenges, improved within four months.
- The OpenAI Preparedness Framework may help track the security risks of AI models.
OpenAI is warning that the rapid evolution of cyber capabilities in artificial intelligence (AI) models could create "high" levels of risk for the cybersecurity industry at large, so it is taking action now to assist defenders.
As AI models, including ChatGPT, continue to be developed and released, a problem has emerged: as with many technologies, AI can be used to benefit others, but it can also be ...

