OpenAI admits new models likely to pose 'high' cybersecurity risk
techradar.com
- OpenAI warns future LLMs could aid zero‑day development or advanced cyber‑espionage
- Company is investing in defensive tooling, access controls, and a tiered cybersecurity program
- New Frontier Risk Council will guide safeguards and responsible capability across frontier models
Future OpenAI large language models (LLMs) could pose higher cybersecurity risks: in theory, they could develop working zero-day remote exploits against well-defended systems, or meaningfully assist with complex and stealthy cyber-espionage campaigns.
This is according to OpenAI itself, which said in a recent blog post that cyber capabilities in its AI models are “advancing rapidly”.
While this might sound sinister, OpenAI frames the development positively, saying the advancements also bring “meaningful benefits for cyberdefense”.

Copyright of this story solely belongs to techradar.com.

