
AI hallucinations pose ‘direct threat’ to science, Oxford study warns


Large language models (LLMs), such as those used in chatbots, have an alarming tendency to hallucinate: they generate false content and present it as accurate. These AI hallucinations pose, among other risks, a direct threat to science and scientific truth, researchers at the Oxford Internet Institute warn.

According to their paper, published in Nature Human Behaviour, “LLMs are designed to produce helpful and convincing responses without any overriding guarantees regarding their accuracy or alignment with fact.”

LLMs are currently treated as knowledge sources and generate information in response to questions or prompts. But the data they're trained on isn't necessarily factually correct. One reason is that these models are often trained on online sources, which can contain false statements, opinions, and other inaccurate information.

“People using LLMs often anthropomorphise the technology, where they trust it as a human-like information source,” explained Professor Brent Mittelstadt, co-author of the ...

