
OpenAI says models are programmed to make stuff up instead of admitting ignorance


AI models often produce false outputs, or "hallucinations." Now OpenAI has admitted they may result from fundamental mistakes it makes when training its models.

The admission came in a paper [PDF] published in early September, titled "Why Language Models Hallucinate," and penned by three OpenAI researchers and Santosh Vempala, a distinguished professor of computer science at Georgia Institute of Technology. It concludes that "the majority of mainstream evaluations reward hallucinatory behavior."

Language models are primarily evaluated using exams that penalize uncertainty

The fundamental problem is that the way AI models are trained and evaluated rewards guessing rather than admitting uncertainty. A guess might produce a superficially suitable answer, whereas telling users your AI can't find an answer is less satisfying.
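To see why accuracy-only scoring nudges models toward guessing, consider a toy sketch (our illustration, not code or notation from the paper): if a benchmark awards one point for a correct answer and zero for anything else, an abstention scores no better than a wrong guess, so guessing always has the higher expected score.

```python
# Toy illustration (not from the paper): under binary-accuracy grading,
# a model that always guesses outscores one that abstains when unsure,
# even though abstaining never asserts a falsehood.

def score(answer_correct: bool | None) -> int:
    """Binary grading: 1 point for a correct answer, 0 otherwise.
    An abstention (None) gets the same 0 as a wrong answer."""
    return 1 if answer_correct else 0

def expected_score(p_correct: float, abstain_when_unsure: bool,
                   threshold: float = 0.9) -> float:
    """Expected score on a question the model believes it can answer
    correctly with probability p_correct."""
    if abstain_when_unsure and p_correct < threshold:
        return score(None)           # says "I don't know" -> 0 points
    return p_correct * score(True)   # guesses -> p_correct points on average

p = 0.3  # model is only 30% sure of the answer
print(expected_score(p, abstain_when_unsure=True))   # 0.0
print(expected_score(p, abstain_when_unsure=False))  # 0.3 -> guessing wins
```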

As a test case, the team tried to get an OpenAI bot to report the birthday of one of the paper's authors, OpenAI research scientist Adam Tauman Kalai. It produced three incorrect ...

