Even OpenAI CEO Sam Altman thinks you shouldn't trust AI for therapy
zdnet.com
Therapy can feel like a finite resource, especially lately. As a result, many people -- especially young adults -- are turning to AI chatbots, including ChatGPT and those hosted on platforms like Character.ai, to simulate the therapy experience.
But is that a good idea, privacy-wise? Even Sam Altman, CEO of OpenAI, the company behind ChatGPT, has doubts.
In an interview last week with podcaster Theo Von, Altman said he understood concerns about sharing sensitive personal information with AI chatbots, and advocated for user conversations to be protected by privileges similar to those that cover conversations with doctors, lawyers, and human therapists. He echoed Von's concerns, saying he believes it makes sense "to really want the privacy clarity before you use [AI] a lot, the legal clarity."
Currently, AI companies offer some on/off settings for keeping chatbot conversations out of training ...