Just add humans: Oxford medical study underscores the missing link in chatbot testing
Headlines have been blaring it for years: Large language models (LLMs) can not only pass medical licensing exams but also outperform humans. GPT-4 could correctly answer U.S. medical licensing exam questions 90% of the time, even in the prehistoric AI days of 2023. Since then, LLMs have gone on to best both the residents taking those exams and licensed physicians.
Move over, Doctor Google, make way for ChatGPT, M.D. But you may want more than a diploma from the LLM you deploy for patients. Like an ace medical student who can rattle off the name of every bone in the hand but faints at the first sight of real blood, an LLM’s mastery of medicine does not always translate directly into the real world.
A paper by researchers at the University of Oxford found that while LLMs could correctly identify relevant conditions 94 ...
Copyright of this story solely belongs to VentureBeat.