Can AI be trusted? The question pops up wherever AI is used or discussed—which, these days, is everywhere.
It's a question that even some AI systems ask themselves.
Many machine-learning (ML) systems produce what experts call a "confidence score," a value that reflects how confident the system is in each of its decisions. A low score signals to the human user that there is some uncertainty about the recommendation; a high score indicates that the system is quite sure of its decision. Savvy users know to check the confidence score before deciding whether to trust a machine-learning system's recommendation.
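The article does not say how such scores are computed, but a common convention in classification systems is to take the largest probability from a softmax over the model's raw outputs. A minimal sketch in Python (the function names and example numbers are illustrative, not from the article):

```python
import math

def softmax(logits):
    """Convert a model's raw scores (logits) into probabilities that sum to 1."""
    # Subtract the max logit before exponentiating for numerical stability.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def confidence_score(logits):
    """Return the predicted class index and its probability, the 'confidence score'."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return best, probs[best]

# A confident prediction: one logit dominates the others.
label, conf = confidence_score([4.0, 0.5, 0.1])

# An uncertain prediction: the logits are nearly tied, so the top
# probability is low and a savvy user would treat the output warily.
label2, conf2 = confidence_score([1.1, 1.0, 0.9])
```

In the first case the score is high (above 0.9); in the second it falls below 0.4, even though the predicted class is the same, which is exactly the signal a human reviewer would use to decide how much to trust the output.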
Scientists at the Department of Energy's Pacific Northwest National Laboratory have put forth a new way to evaluate an AI system's recommendations. They bring human experts into the loop to view how the ML performed on a set of data ...
Copyright of this story belongs to phys.org.