Think AI hallucinations are bad? Here's why you're wrong


AI hallucinations can be frustrating. If you’ve used an LLM, you’ve almost certainly seen it deliver an answer that was confidently wrong.

I recently ran into a hallucination while using an LLM for competitive intelligence. I run a market research software platform that delivers consumer insights on ads and products to consumer brands.

But when I asked the model to assess our customer reviews, it confidently concluded we were underperforming due to failures in our “electricity structure systems.” My first reaction was “huh?!” Then it became clear: the model had conflated us with an unrelated company that shares our name and makes EV chargers.

Copyright of this story solely belongs to techradar.com.