
OpenAI launches Trusted Contact in ChatGPT for suicide-risk notifications

Source: OpenAI Blog. Collage: Hamidun News.

On May 7, 2026, OpenAI began rolling out the Trusted Contact feature in ChatGPT for adult users. If the system detects signs of serious risk of self-harm or suicide, it can notify a trusted person chosen in advance, so they can reach out.

How it works

The feature is configured in the personal ChatGPT account and is available only to adult users: 18 and older in most countries, 19 and older in South Korea. A user can add a single trusted contact via Settings > Trusted contact. An email address is required for activation; a phone number can be added optionally. The chosen person then receives an invitation and must accept it within a week. If they decline, don't respond, or don't meet the age requirement, ChatGPT will prompt the user to choose someone else.

  • One trusted contact per account
  • Email is required, phone number is recommended
  • The invitation can come via email, SMS, WhatsApp, or inside ChatGPT
  • The feature is available only in personal accounts, not in Business, Enterprise, or Edu accounts

Notifications follow a two-step process. If automated systems determine that the user is discussing suicide or self-harm in a way that appears to pose a serious threat, ChatGPT first warns the user that the trusted contact may be notified. Only then is the conversation escalated to a small team of specially trained specialists, who make the final decision. OpenAI emphasizes that no notification is sent to the contact without this prior warning.

What the contact will see

If the specialists confirm serious risk, the trusted contact receives a brief notification via email, SMS, WhatsApp, or inside ChatGPT. The message does not include chat details, screenshots, or conversation transcripts. OpenAI emphasizes that the contact sees only a general summary: the system noticed a conversation about suicide that may indicate a serious problem, and it recommends gently reaching out to the person. The company also says it aims to process such cases in under an hour.

An important caveat: the feature does not replace psychotherapy, crisis services, or emergency care. Nor does the trusted contact become "responsible" for the person's safety or obligated to act as a counselor. Their task is simpler: check in on how the person is feeling, offer a sense of support, and, if necessary, help connect them with professional help. OpenAI states plainly that the system can make mistakes and that a notification will not always accurately reflect the user's state at a given moment.

Why OpenAI is launching this

The new feature continues OpenAI's line of safety tools for sensitive conversations. Previously, similar notifications were available in parental controls for teen accounts, and now the mechanism has been expanded to adult users of personal accounts. According to the company, Trusted Contact was developed together with clinicians, researchers, and organizations dedicated to mental health and suicide prevention. OpenAI specifically mentions its network of over 260 licensed physicians in 60 countries and collaboration with the American Psychological Association.

"Social connection is one of the strongest protective factors during periods of emotional distress," says APA President Arthur Evans.

Privacy and control get separate emphasis. A user can delete or replace the trusted contact at any time, and the contact can decline participation or opt out later. The rollout is also gradual: if the Trusted contact option does not yet appear in settings or in the ChatGPT dialog, the feature has not yet been enabled for that account.

What this means

OpenAI is taking another step from being merely a smart conversationalist toward a product that intervenes in critical situations under pre-agreed rules. For the market this is an important signal: AI services will increasingly connect digital support with real people, while having to carefully balance utility, privacy, and the risk of false positives.

Hamidun News
AI news without the noise. A daily editorial selection from 400+ sources. A product by Zhemal Hamidun, Head of AI at Alpina Digital.