
Frontier AI safety tests may be creating the very risks they're meant to stop


Frontier AI safety testing is becoming a security nightmare of its own, with a new report warning that the very process of granting outsiders access to inspect powerful AI models is creating fresh security risks.

The paper, published Tuesday by London-based think tank Royal United Services Institute (RUSI), warns that the rapidly expanding system of third-party AI evaluations is riddled with inconsistent standards, vague terminology, weak access controls, and security assumptions that would make most enterprise infosec teams break out in hives.

The report focuses on a growing problem facing governments and AI companies alike: meaningful safety testing requires outsiders to access advanced models, but every new access pathway creates another opportunity for theft, tampering, espionage, or abuse.

That gets especially risky when the systems in question are being evaluated for capabilities related to cyberattacks or chemical and biological weapon development.

"The security risks associated with this access, from ...


Copyright of this story solely belongs to theregister.co.uk.