Healthcare Chatbots Provoke Unease in AI Governance Analysts


AI Failures May Hide in Ways That Safety Tests Don't Measure

Rashmi Ramesh (rashmiramesh_) • January 9, 2026

When an AI chatbot tells people to add glue to pizza, the error is obvious. When it recommends eating more bananas - sound nutritional advice that could be dangerous for someone with kidney failure - the mistake hides in plain sight.

That's a risk now poised to reach hundreds of millions of users with little or no regulatory oversight.

OpenAI launched ChatGPT Health days ago, allowing users to connect medical records and wellness apps for personalized health guidance. The company said more than 230 million people ask ChatGPT health questions weekly, with 40 million daily users seeking medical advice (see: ChatGPT Health: Top Privacy, Security, Governance Concerns).

Google has partnered with health data platform b.well, suggesting similar products may follow ...
