OpenAI data suggests 1 million users discuss suicide with ChatGPT weekly
arstechnica.com
An AI language model like the kind that powers ChatGPT is a gigantic statistical web of data relationships. You give it a prompt (such as a question), and it provides a response that is statistically related and hopefully helpful. At first, ChatGPT was a tech amusement, but now hundreds of millions of people are relying on this statistical process to guide them through life’s challenges. It’s the first time in history that large numbers of people have begun to confide their feelings to a talking machine, and mitigating the potential harm these systems can cause has been an ongoing challenge.
On Monday, OpenAI released data estimating that 0.15 percent of ChatGPT’s active users in a given week have conversations that include explicit indicators of potential suicidal planning or intent. It’s a tiny fraction of the overall user base, but with more than 800 million weekly ...
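For a rough sense of scale, the headline figure follows from a back-of-envelope calculation, assuming the roughly 800 million weekly active users cited above (the exact user count is an approximation, not a precise figure from OpenAI):

$$0.0015 \times 800{,}000{,}000 \approx 1{,}200{,}000$$

That works out to on the order of a million users per week, consistent with the headline estimate.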