
OpenAI adds trusted contact notifications to ChatGPT in crisis situations
Source: The Verge. Collage: Hamidun News.

OpenAI has launched a new optional safety feature for ChatGPT called Trusted Contact. It lets adult users designate trusted contacts (family, friends, or guardians) who will be notified if the system detects signs of a mental health crisis.

How It Works

The feature is straightforward: a user designates one or more trusted contacts and provides their contact information. If ChatGPT detects that someone is discussing self-harm or suicide, the system notifies those contacts.

OpenAI built the feature on expert guidance: research shows that when a person is in acute crisis, support from someone they know and trust can be critical. It is meant to complement professional help, not replace it.

Trusted Contact adds another layer of safety on top of existing mechanisms. When a user mentions suicide or self-harm, ChatGPT already offers local crisis hotlines (for example, 988 in the US); alerting trusted contacts now joins that response.
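To make the layered flow concrete, here is a minimal sketch in Python. It is purely illustrative: OpenAI has not published an API or implementation details for Trusted Contact, so the classifier, the settings object, and the action strings below are all hypothetical placeholders.

```python
# Hypothetical sketch of the layered safety flow described above.
# None of these names correspond to a real OpenAI API.
from dataclasses import dataclass, field

@dataclass
class TrustedContact:
    name: str
    channel: str  # e.g. an email address or phone number

@dataclass
class UserSafetySettings:
    trusted_contact_enabled: bool = False  # the feature is opt-in
    contacts: list[TrustedContact] = field(default_factory=list)

CRISIS_HOTLINE = "988"  # US Suicide & Crisis Lifeline, per the article

def looks_like_crisis(text: str) -> bool:
    # Stand-in for OpenAI's (unpublished) self-harm detection model.
    return any(kw in text.lower() for kw in ("suicide", "self-harm"))

def handle_message(text: str, settings: UserSafetySettings) -> list[str]:
    """Return the safety actions triggered by one user message."""
    actions = []
    if looks_like_crisis(text):
        # Existing layer: always surface a local crisis hotline.
        actions.append(f"show_hotline:{CRISIS_HOTLINE}")
        # New, opt-in layer: notify the user's designated contacts.
        if settings.trusted_contact_enabled:
            for contact in settings.contacts:
                actions.append(f"notify:{contact.channel}")
    return actions
```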

Optionality as a Principle

This is not a mandatory feature. Users decide whether to enable it, who to designate as trusted contacts, and when to disable it. This approach is critical for trust: mental health is an intimate matter, and not everyone will agree to automatic monitoring, even for their own safety.

  • Users have full control over their list of trusted contacts
  • Notifications are sent only when crisis discussion is detected
  • The feature can be disabled at any time

Contacts receive more than a bare alert: the notification includes context about what happened and a list of resources to help them offer support. The idea is to give people a tool, not to induce panic.
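Again for illustration only, here is a sketch of what such a contact-facing notification might carry, following the article's description (an alert, some context, and support resources) rather than any published schema; the function and field names are assumptions.

```python
def build_notification(user_display_name: str) -> dict:
    """Hypothetical payload for a trusted contact; not a real OpenAI schema."""
    return {
        # A plain-language alert naming the user who opted in.
        "alert": (
            f"{user_display_name} may be going through a difficult moment "
            "and listed you as a trusted contact."
        ),
        # Context about what triggered the notification.
        "context": "ChatGPT detected discussion of self-harm or suicide.",
        # Resources to help the contact respond, not to induce panic.
        "resources": [
            "988 Suicide & Crisis Lifeline (call or text 988 in the US)",
            "Guidance on supporting someone in crisis",
        ],
    }
```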

Context: AI Companies' Responsibility

OpenAI has previously worked on how AI systems should respond to mentions of suicide and self-harm. The balance here is complex: on one hand, you need to help someone in distress; on the other, you must not invade privacy or create a false impression that AI can replace real help. Many companies in this space are experimenting with safety features. Some use chatbots for screening, others integrate hotlines. Trusted Contact is an idea that falls somewhere in the middle: not total control, but an expanded support network.

"Trusted Contact is built on a simple, expert-verified idea: when someone may be in crisis, connecting them to someone they know and trust can have real value,"

OpenAI writes.

What This Means

This reflects the growing responsibility of AI companies for user well-being. The feature doesn't claim to be a cure-all, but it is another tool in the arsenal — a bridge between technology and human care, between a chatbot and people who are truly ready to help.
