
Red Team Brainstorming With GPTs Accelerates Threat Modeling


Hallucinations Are 'Ideas That Haven't Been Tested Yet,' Says Erica Burgess

Mathew J. Schwartz (euroinfosec) • December 30, 2025 • 21-minute read

Offensive cybersecurity consultant Erica Burgess (Image: Mathew Schwartz)

Large language models have a well-earned reputation for making things up. For artificial intelligence cybersecurity architect Erica Burgess, hallucinations aren't a bug but a feature, at least when threat modeling. "I like to think of the hallucinations as just ideas that haven't been tested yet," she said.

The red-teaming expert explored this idea in "Never Break the Chain," a presentation at this month's Black Hat Europe in London. There she shared real, albeit redacted, examples from her red-teaming and penetration testing work, demonstrating how GPTs have helped her rapidly combine low-severity vulnerabilities, findings that might seem insignificant on their own, into chains culminating in a bona fide proof-of-concept server compromise.

"When I have billable time for a ...


Copyright of this story solely belongs to bankinfosecurity.