ChatGPT Can Now Alert 'Trusted Contacts' When Self‑Harm Enters The Chat


OpenAI has introduced a new safety feature in ChatGPT that lets users add "trusted contacts" to their profiles. These contacts will receive alerts if a user's ChatGPT conversation shows signs of self-harm—one of several changes OpenAI is making to its best-known product after multiple suicides that allegedly involved the use of ChatGPT. The company says it hopes the feature will bridge online chats and offline support by connecting people in crisis with someone they personally trust.

The feature is now available in ChatGPT's settings and requires both the user and their trusted contact to opt in. Users can provide the name, email, and phone number of a trusted individual, who then gets an invitation that explains what it means to be a contact and how alerts may work. OpenAI also says that automated systems look for signs of serious self-harm or suicide. If found, they ...


Copyright of this story belongs solely to extremetech.com.