‘I do not believe AI should do therapy’ – I asked a psychologist what worries the people trying to make AI safer
techradar.com
AI doesn’t feel safe right now. Almost every week there’s a new issue, from AI models hallucinating and fabricating important information to sitting at the center of legal cases that accuse them of causing serious harm.
As more AI companies position their tools as sources of information, coaches, companions and even stand-in therapists, questions about attachment, privacy, liability and harm are no longer theoretical. Lawsuits are emerging, regulators are lagging behind and, most importantly, many users don’t fully understand the risks.
So what does someone whose job is to help AI companies make better choices actually worry about? I spoke to psychologist and AI risk advisor Genevieve Bartuski of Unicorn Intelligence Tech Partners. She works with founders, developers and investors building AI products in health, mental health and wellness, helping them think more carefully about ethical and responsible design.