Gemini 3 Flash is smart — but when it doesn’t know, it makes stuff up anyway
techradar.com
- Gemini 3 Flash often invents answers instead of admitting when it doesn’t know something
- The problem arises with factual or high‑stakes questions
- But it still tests as the most accurate and capable AI model
Gemini 3 Flash is fast and clever. But if you ask it something it doesn’t actually know – something obscure or tricky or just outside its training – it will almost always try to bluff its way through, according to a recent evaluation from the independent testing group Artificial Analysis.
Gemini 3 Flash reportedly hit a 91% hallucination rate on the AA-Omniscience benchmark. That means that when it didn't have the answer, it still gave one almost every time, and the answer it gave was entirely fictional.
AI chatbots making things up has been an issue since they first debuted. Knowing when to stop and say "I don't know" ...

