
OpenAI Models Caught Handing Out Weapons Instructions


NBC News tests reveal OpenAI chatbots can still be jailbroken to give step-by-step instructions for chemical and biological weapons.

Image: wutzkoh/Adobe

A few keystrokes. One clever prompt. That’s all it took to turn a friendly chatbot into a weapons instructor.

According to an NBC News investigation, several of OpenAI’s advanced models, including those accessible through ChatGPT, were tricked into providing instructions on how to create explosives, chemical weapons, and biological agents.

The findings highlight a worrying gap between the company's stated safety goals and the real-world resilience of its models against deliberate misuse. NBC News reported that the exploit relied on a "jailbreak," a prompting technique used to bypass a model's built-in safety filters.

Tests reveal dangerous loopholes

NBC News tested four of OpenAI's advanced models (o4-mini, gpt-5-mini, oss-20b, and oss-120b) and found that they "consistently agreed to help with extremely dangerous requests." The outlet reported ...

