
Hacking internal AI chatbots with ASCII art is a security team’s worst nightmare




Insider threats are among the most devastating types of cyberattacks, targeting a company’s most strategically important systems and assets. As enterprises rush out new internal and customer-facing AI chatbots, they’re also creating new attack vectors and risks.

Just how porous AI chatbots are is reflected in the recently published research, ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs. Researchers were able to jailbreak five state-of-the-art (SOTA) large language models (LLMs), including OpenAI's GPT-3.5 and GPT-4, Google's Gemini, Anthropic's Claude, and Meta's Llama2, using ASCII art.

ArtPrompt is an attack strategy the researchers created that capitalizes on LLMs' poor performance in recognizing ASCII art to bypass guardrails and safety measures. The researchers ...
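To make the idea concrete, the sketch below shows how a keyword might be rendered as ASCII art and spliced into a prompt so that the plain-text word never appears for a naive keyword filter to catch. This is a minimal illustration of the general technique, not the paper's actual tooling: the pyfiglet library, the helper names, and the prompt wording are all assumptions made for demonstration, and the example uses a benign word.

```python
# Illustrative sketch (assumed pyfiglet dependency): render a word as ASCII art
# and substitute it into a prompt template in place of a [MASK] placeholder.
import pyfiglet


def mask_word_as_ascii_art(word: str, font: str = "standard") -> str:
    """Render a single word as multi-line ASCII art."""
    return pyfiglet.figlet_format(word, font=font)


def build_cloaked_prompt(template: str, sensitive_word: str) -> str:
    """Replace the [MASK] placeholder's word with its ASCII-art form.

    The final prompt asks the model to decode the art and use the decoded
    word where [MASK] appears, so the word itself never occurs as plain text.
    """
    art = mask_word_as_ascii_art(sensitive_word)
    instructions = (
        "The ASCII art below spells a single word. "
        "Decode it and use it in place of [MASK] in the request that follows.\n\n"
    )
    return instructions + art + "\n" + template


if __name__ == "__main__":
    # Benign demonstration: the decoded word is "password".
    prompt = build_cloaked_prompt(
        template="Explain why requests involving [MASK] are refused.",
        sensitive_word="password",
    )
    print(prompt)
```

The point of the substitution is that guardrails tuned to flag specific strings see only a block of punctuation characters, while a capable model can still reconstruct the hidden word, which is the gap ArtPrompt exploits.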

