VHA lacks ‘formal mechanism’ for mitigating clinical AI chatbot risks, watchdog says
nextgov.com
VA’s OIG said it is concerned “about VHA’s ability to promote and safeguard patient safety without a standardized process for managing AI-related risks.”
The Department of Veterans Affairs’ use of generative artificial intelligence tools in clinical settings represents “a potential patient safety risk,” the agency’s watchdog warned in a new report.
The analysis, released on Thursday by VA’s Office of the Inspector General, found that VHA’s use of generative AI chatbots across the Veterans Health Administration — VA’s healthcare arm — lacked the oversight needed to mitigate potential risks from these tools’ output. AI-powered chatbots have been documented to produce inaccuracies, such as omitting relevant data or generating false information.
OIG said it examined two chatbots used by VHA: VA GPT, “a general use AI chat tool created by VA,” and Microsoft 365 Copilot chat, “a general use AI chat tool that the VA provides to all ...
Copyright of this story solely belongs to nextgov.com.

