
OpenAI admits new models likely to pose 'high' cybersecurity risk


(Image credit: Shutterstock / metamorworks)
  • OpenAI warns future LLMs could aid zero‑day development or advanced cyber‑espionage
  • Company is investing in defensive tooling, access controls, and a tiered cybersecurity program
  • New Frontier Risk Council will guide safeguards and responsible capability development across frontier models

Future OpenAI large language models (LLMs) could pose higher cybersecurity risks because, in theory, they may be able to develop working zero-day remote exploits against well-defended systems, or meaningfully assist with complex, stealthy cyber-espionage campaigns.

This is according to OpenAI itself, which said in a recent blog post that cyber capabilities in its AI models are “advancing rapidly”.

While this might sound sinister, OpenAI frames the development positively, saying the advancements also bring “meaningful benefits for cyberdefense”.
