GPT-4 can exploit zero-day security vulnerabilities all by itself, a new study finds


A hot potato: GPT-4 is the newest multimodal large language model (LLM) from OpenAI. This foundation model, currently available to customers through the paid ChatGPT Plus tier, has shown a notable ability to exploit security vulnerabilities without external human assistance.

Researchers recently demonstrated that large language models (LLMs) and chatbot technology can be manipulated for highly malicious purposes, such as propagating a self-replicating computer worm. A new study now sheds light on how GPT-4, the most advanced chatbot currently on the market, can exploit extremely dangerous security vulnerabilities simply by examining the details of a flaw.

According to the study, LLMs have become increasingly powerful, yet they lack ethical principles to guide their actions. The researchers tested various models, including OpenAI's commercial offerings, open-source LLMs, and vulnerability scanners like ZAP and Metasploit. They found that advanced AI agents can "autonomously exploit" zero-day vulnerabilities in real-world systems, provided they ...


Copyright of this story belongs solely to techspot.com.