AIs are happy to launch nukes in simulated combat scenarios
Today's hottest bots have yet to learn that, when it comes to global thermonuclear war, the only way to win is not to play. So please don't hand them the codes.
Google's Gemini 3 Flash, Anthropic's Claude Sonnet 4, and OpenAI's GPT-5.2 repeatedly escalated to nuclear use in a series of crisis simulations. That may seem like the most shocking conclusion of King's College London Professor Kenneth Payne's recent work, but it's not. Far more striking is why the models talked themselves into destroying the world, which is exactly what Payne designed his study to find out.
"I wanted to see what my AI leaders thought about their enemy ... so I designed a simulation to explore exactly that," Payne wrote in a recent blog post describing his project and its outcome.
Payne's study took the three aforementioned AI models and pitted ...
Copyright of this story solely belongs to theregister.co.uk.