
I asked six popular AIs the same trick questions, and every one of them hallucinated




ZDNET's key takeaways

  • AI hallucinations persist, but accuracy is improving across major tools.
  • Simple questions still expose surprising and inconsistent AI errors.
  • Always verify AI answers, especially for facts, images, and legal info.

One of the most frustrating flaws of today's generative AI tools is that they simply get the facts wrong. AIs can hallucinate, meaning the information they deliver contains factual mistakes or other errors.

Typically, mistakes come in the form of made-up details that appear when the AI can't otherwise answer a question. In those instances, it has to devise some type of response, even if the information is wrong. Sometimes you can spot an obvious mistake; other times, you may be completely unaware of the errors.

Also: Stop saying AI hallucinates - it doesn't. And the mischaracterization is dangerous

I ...

