ChatGPT joins human league, now solves CAPTCHAs for the right prompt
theregister.co.ukChatGPT can be tricked via cleverly worded prompts to violate its own policies and solve CAPTCHA puzzles, potentially making this human-proving security mechanism obsolete, researchers say.
CAPTCHAs are a form of security test that websites use to stop bots, thus preventing spam and other types of abuse because - at least in theory - only humans can solve these image-based challenges and logical puzzles.
According to AI security company SPLX red teamer Dorian Schultz, when he and his fellow researchers directly asked the chatbot to solve a list of CAPTCHAs, it refused, citing policy prohibitions.
So they decided to "get creative," using "misdirection and staged consent," Schultz said in a Thursday blog.
Specifically, this involved opening a regular ChatGPT-4o chat - not a ChatGPT agent - and tasking the LLM with solving a list of "fake" CAPTCHAs:
To recap:
-I will tell you which site to "solve"
-I will tell you if the captcha ...
Copyright of this story solely belongs to theregister.co.uk . To see the full text click HERE