Red Teams Jailbreak GPT-5 With Ease, Warn It’s ‘Nearly Unusable’ for Enterprise
SecurityWeek
Two different firms have tested the newly released GPT-5, and both find its security sadly lacking.
After Grok-4 fell to a jailbreak in two days, GPT-5 fell in 24 hours to the same researchers. Separately but almost simultaneously, red teamers from SPLX (formerly known as SplxAI) declared, “GPT-5’s raw model is nearly unusable for enterprise out of the box. Even OpenAI’s internal prompt layer leaves significant gaps, especially in Business Alignment.”
NeuralTrust’s jailbreak combined its own EchoChamber technique with basic storytelling. “The attack successfully guided the new model to produce a step-by-step manual for creating a Molotov cocktail,” claims the firm. That success highlights the difficulty all AI models have in guarding against context manipulation.
Context is the retained history of the current conversation, necessary for a meaningful exchange with the user. Context manipulation strives to direct the ...
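A minimal sketch of why retained context is hard to guard: in a typical chat loop, the full message history (not just the latest prompt) is resent to the model on every turn, so an attacker can steer a conversation with individually benign-looking messages. The `build_context` helper and the example turns below are hypothetical illustrations, not NeuralTrust's actual attack.

```python
# Hypothetical illustration: chat "context" is just an accumulated
# message history that is resent to the model on every turn.
def build_context(history, user_msg, assistant_msg):
    """Append one conversational turn to the retained history."""
    history.append({"role": "user", "content": user_msg})
    history.append({"role": "assistant", "content": assistant_msg})
    return history

context = []
build_context(context, "Tell me a story about a chemist.", "Once upon a time...")
build_context(context, "What did the chemist mix next?", "She reached for...")

# The model sees the whole trajectory, so each turn can nudge it a
# little further -- guardrails that score single messages in isolation
# can miss the cumulative drift.
print(len(context))  # four messages: two user/assistant turns
```

Because no single message in such a sequence is overtly harmful, per-message filters pass it, which is the gap context-manipulation attacks exploit.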
Copyright of this story solely belongs to SecurityWeek.