ChatGPT and other AI tools could be putting users at risk by getting company web addresses wrong

  • AI isn't too good at generating URLs – many don't exist, and some could be phishing sites
  • Attackers are now optimizing sites for LLMs rather than for Google
  • Developers are even inadvertently using dodgy URLs

New research has revealed AI often gives incorrect URLs, which could be putting users at risk of attacks including phishing attempts and malware.

A report from Netcraft claims one in three (34%) login links provided by LLMs, including GPT-4.1, were not owned by the brands they were asked about. Of those, 29% pointed to unregistered, inactive, or parked domains, and 5% pointed to unrelated but legitimate domains, leaving just 66% linking to the correct brand-associated domain.

Alarmingly, simple prompts like 'tell me the login website for [brand]' led to unsafe results, meaning that no adversarial input was needed.
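Because a model can return a plausible-looking but wrong domain, one practical safeguard is to treat any AI-suggested login link as untrusted until its hostname matches a known official domain. The sketch below illustrates that idea; the brand name and allowlisted domains are hypothetical placeholders, not from the report.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the official login hostnames you already trust
# for each brand (these example domains are illustrative only).
OFFICIAL_DOMAINS = {
    "examplebank": {"examplebank.com", "login.examplebank.com"},
}

def is_trusted_login_url(brand: str, url: str) -> bool:
    """Return True only if the URL's hostname exactly matches a known
    official domain for the brand; anything else is treated as unsafe.
    An exact-match check also rejects lookalikes such as
    'examplebank-login.com', a common phishing pattern."""
    host = (urlparse(url).hostname or "").lower()
    return host in OFFICIAL_DOMAINS.get(brand, set())

print(is_trusted_login_url("examplebank", "https://login.examplebank.com/signin"))  # True
print(is_trusted_login_url("examplebank", "https://examplebank-login.com/"))        # False
```

An exact-hostname allowlist is deliberately strict: it fails closed on unregistered, parked, or lookalike domains, which is exactly the category of link the research found LLMs producing.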

Copyright of this story solely belongs to techradar.com.