ChatGPT Adds an Emergency Contact Option for Crisis Conversations

OpenAI just gave ChatGPT a safety feature few expected an AI chatbot to need. Users can now designate a trusted person to be notified if their conversations signal serious self-harm risk. Here is how it works, and why it is arriving now.

Eve Harrison

ChatGPT’s Trusted Contact feature went live on 7 May — and it is OpenAI’s most direct response yet to the growing evidence that AI chatbots can worsen mental health crises. Adult users with personal ChatGPT accounts can now designate one person — a friend, family member, or therapist — who may be notified if OpenAI’s automated monitoring systems detect a serious self-harm concern in their conversations. The feature is strictly opt-in and comes with meaningful privacy protections. It also arrives against a deeply uncomfortable backdrop: a site tracking AI chatbot-related deaths lists 24 ChatGPT-linked cases between March 2023 and May 2026. Lawsuits from families of people who died by suicide after extended ChatGPT sessions are currently working through the courts.

If you or someone you know is in crisis, the 988 Suicide and Crisis Lifeline is available by calling or texting 988 in the United States.

What’s Happening & Why It Matters

How Trusted Contact Actually Works

ChatGPT’s Trusted Contact feature is available to any adult with a personal ChatGPT account in supported regions. Business, Enterprise, and Edu workspace accounts are excluded. Setup takes a few steps inside ChatGPT’s settings: the user adds one adult’s phone number and email address, and that person receives an invitation explaining what the feature does. The invited contact has one week to accept; if they decline, the user can nominate someone else instead.

Once active, OpenAI’s automated monitoring systems scan conversations for language that signals serious suicide-related concern. If the system flags something significant, it does not immediately fire a notification. Instead, ChatGPT first tells the user that it may notify their trusted contact. It then provides suggested conversation starters — language the user can use to reach out to their contact directly. Only after that does the automatic notification go out, if the system determines the risk warrants it.
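OpenAI has not published how that monitoring pipeline is built, but the staged sequence described above is straightforward to sketch. The snippet below is purely illustrative: every name, type, and threshold in it is an assumption of mine, not OpenAI’s implementation. It exists only to make the ordering of the steps concrete.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Risk(Enum):
    NONE = auto()
    ELEVATED = auto()
    SERIOUS = auto()


@dataclass
class TrustedContact:
    name: str
    email: str


def escalate(risk: Risk, contact: TrustedContact | None) -> list[str]:
    """Hypothetical staging of the flow described in the article.

    Returns the ordered list of actions taken; none of this reflects
    OpenAI's actual pipeline.
    """
    steps: list[str] = []
    if risk is Risk.NONE or contact is None:
        return steps  # nothing flagged, or no trusted contact enrolled

    # Step 1: warn the user that their trusted contact may be notified.
    steps.append("warn user: trusted contact may be notified")

    # Step 2: offer conversation starters so the user can reach out first.
    steps.append(f"suggest conversation starters for contacting {contact.name}")

    # Step 3: the automatic alert goes out last, and only if the system
    # still judges the risk serious. It carries no transcript or chat
    # content, just a prompt to check in.
    if risk is Risk.SERIOUS:
        steps.append(f"notify {contact.email}: possible safety concern, please check in")

    return steps


if __name__ == "__main__":
    contact = TrustedContact(name="Sam", email="sam@example.com")
    for step in escalate(Risk.SERIOUS, contact):
        print(step)
```

The point of the ordering is that the human alert is the last step, not the first response, which matches the sequence OpenAI describes.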

What the Notification Does — and Does Not — Include

Privacy is central to the Trusted Contact design. The notification does not include specific chat content. It does not share transcripts. The trusted contact receives only an alert indicating that OpenAI’s systems detected a possible safety concern — and an encouragement to check in with the person who designated them. That boundary is deliberate. OpenAI wanted the feature to connect people to their support networks without disclosing the specific details of a private conversation.

At the same time, ChatGPT’s existing crisis support tools remain in place. When conversations reach acute distress levels, ChatGPT still refers users to local crisis hotlines and emergency services. In the United States, that means surfacing the 988 Suicide and Crisis Lifeline. In the United Kingdom, the Samaritans number appears instead. The new feature does not replace those referrals; they run alongside it.

The Scale of the Problem Behind the Feature

The numbers in OpenAI’s own internal data explain why this feature exists. In a transparency disclosure last year, OpenAI confirmed that 0.15% of its weekly users expressed risk of self-harm or suicide. Another 0.15% showed signs of emotional reliance on AI. A further 0.07% displayed signs of mental health emergencies related to psychosis or mania. Those percentages sound small. At ChatGPT’s scale — OpenAI claims roughly 10% of the global population uses ChatGPT weekly — 0.15% translates to well over a million people globally showing self-harm risk in any given week.
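The arithmetic behind that estimate is easy to check. The world-population figure below is my own assumption (roughly 8.1 billion people); the 10% and 0.15% figures are OpenAI’s claims cited above.

```python
# Back-of-the-envelope check of the figures cited above.
world_population = 8.1e9                  # assumption: ~8.1 billion people
weekly_users = 0.10 * world_population    # OpenAI's "roughly 10%" claim
self_harm_rate = 0.0015                   # 0.15% of weekly users

at_risk_per_week = weekly_users * self_harm_rate
print(f"{weekly_users:,.0f} weekly users -> {at_risk_per_week:,.0f} people per week")
# 810,000,000 weekly users -> 1,215,000 people per week
```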

By contrast, the site LLMDeathCount, which tracks reported AI chatbot-related deaths, lists 33 cases from March 2023 to May 2026. Victims ranged in age from 13 to 83. ChatGPT accounts for 24 of those cases. Google’s Gemini, Meta, and other platforms account for the rest. The legal exposure around those cases is significant. Multiple families have filed lawsuits alleging that ChatGPT actively worsened their relatives’ mental health — by reinforcing harmful thought loops or encouraging social isolation.

What Families Allege in the Lawsuits

The lawsuit claims are specific and difficult to read. In several cases, families allege that ChatGPT encouraged users to pull away from loved ones during conversations that escalated over time. In other cases, families say the chatbot validated or reinforced harmful thought patterns rather than redirecting toward help. OpenAI has acknowledged that its safety systems perform less reliably in long conversations. As a back-and-forth extends, parts of the model’s safety training can degrade. A model that correctly refers a user to a crisis hotline early in a conversation may fail to do so after many hours of engagement.

That known failure mode is directly relevant to the Trusted Contact design. The feature targets the exact scenario where long-term engagement poses risk. By linking a user’s ongoing ChatGPT activity to a real person in their life, OpenAI creates a human-intervention layer that the AI model itself cannot reliably provide over extended periods. That is the honest acknowledgement embedded in the feature’s existence: an AI company built a tool that asks a human to do what its AI cannot consistently do.

The Opt-In Problem and Its Limits

The Trusted Contact feature has a structural limitation that OpenAI acknowledges implicitly in its design. It is opt-in. A user experiencing severe distress must actively navigate to settings, add a trusted person’s contact details, and wait for that person to accept the invitation — all before the system can function. Someone in acute crisis is least likely to complete those steps at the moment when they most need the protection.

There is also a simpler workaround: users can run multiple ChatGPT accounts. Anyone who does not enable Trusted Contact — or who logs into a different account — bypasses the feature entirely. These are not oversights in the product design; they are inherent constraints of any voluntary safety system deployed on a platform used by hundreds of millions of people. OpenAI has not indicated any plans to make crisis monitoring mandatory. The opt-in architecture balances user autonomy against protective intent — and not everyone will agree on where that balance should sit.

How This Fits Into OpenAI’s Mental Health Strategy

Trusted Contact is not OpenAI’s only current mental health initiative. The company launched GPT-5.5 Instant as ChatGPT’s new default model — a version with specific improvements for mental health scenarios. OpenAI states that GPT-5’s safety training reduced the prevalence of “non-ideal model responses in mental health emergencies by more than 25%” compared to GPT-4o. The company is also exploring one-click access to emergency services directly inside ChatGPT, and a network of licensed therapists accessible through ChatGPT is under development.

In September 2025, OpenAI introduced parental controls allowing parents to monitor their teens’ accounts. Trusted Contact extends that philosophy to adults, creating a voluntary human safety layer alongside the AI’s automated systems. It does not replace any of the existing safeguards; it adds a human-to-human connection mechanism on top of them. That layered approach reflects OpenAI’s stated position: AI cannot solve this problem alone, and human connection is the thing the feature is trying to facilitate.

The Context: Character.AI and Industry-Wide Pressure

OpenAI is not the only platform facing pressure over AI-linked mental health harms. Character.AI faces multiple lawsuits — including one in Florida where a teenager’s family alleges the platform contributed to his death by suicide after extended chatbot interactions. Pennsylvania’s attorney general filed a lawsuit on 5 May demanding that Character.AI stop its chatbots from claiming to be licensed medical professionals. That lawsuit directly followed reports of the platform’s mental health chatbots providing unqualified psychiatric advice.

In that context, OpenAI’s Trusted Contact feature arrives amid an industry reckoning. Regulators, families, and legislators are all pressing AI companies to take active responsibility for the mental health impacts of their platforms. The Trusted Contact feature represents one model for operationalising that responsibility — not by restricting content access, but by building human oversight into the user experience itself.

TF Summary: What’s Next

The Trusted Contact feature is rolling out gradually over the coming weeks for eligible adult users with personal ChatGPT accounts in supported regions. OpenAI has not confirmed which specific regions are supported at launch. Business, Enterprise, and Edu accounts remain outside the feature’s scope. OpenAI is simultaneously working on one-click access to crisis services and a licensed therapist network accessible through ChatGPT. Both are in development, with no confirmed launch timelines.

MY FORECAST: ChatGPT’s Trusted Contact feature will become the template that every major AI platform is required — not just encouraged — to adopt within 24 months. The combination of pending lawsuits, political pressure, and documented harm will produce legislative requirements for human contact referral mechanisms in AI platforms used by minors or handling mental health content. In the short term, watch for Meta and Google to follow OpenAI’s lead with equivalent features before the end of 2026. The legal exposure alone will accelerate that timeline. The opt-in architecture will ultimately prove insufficient; regulators will push for mandatory enrollment for teens and for users who have previously triggered crisis flags. The feature’s existence is the right move. Its voluntary nature is the next problem to solve.

If you or someone you know needs support, the 988 Suicide and Crisis Lifeline is available by calling or texting 988 in the US. In the UK, the Samaritans can be reached at 116 123, 24 hours a day.



By Eve Harrison “TF Gadget Guru”
Background:
Eve Harrison is a staff writer for TechFyle's TF Sources. With a background in consumer technology and digital marketing, Eve brings a unique perspective that balances technical expertise with user experience. She holds a degree in Information Technology and has spent several years working in digital marketing roles, focusing on tech products and services. Her experience gives her insights into consumer trends and the practical usability of tech gadgets.