More than one million people each week use ChatGPT to discuss suicidal thoughts, according to OpenAI, highlighting a growing reliance on artificial intelligence for mental health conversations. The statistic, first reported by StartupNews.fyi in the article "OpenAI says over a million people talk to ChatGPT about suicide weekly," raises complex questions about the role AI plays in mental health care and support.
OpenAI CEO Sam Altman disclosed the figure in a recent statement, describing it as part of the company's broader effort to understand how people use its AI models in emotionally sensitive scenarios. While ChatGPT is not designed to act as a therapist or provide medical advice, its accessibility and constant availability have led many users to turn to it during moments of emotional crisis.
“It’s clear that people are forming deep connections with these models,” Altman said, emphasizing the seriousness of the data and the need for thoughtful stewardship around such use cases.
The revelation has sparked mixed reactions among mental health professionals and AI ethicists. Some experts argue that AI can fill significant gaps in mental health systems strained by high demand and a shortage of providers. Others caution that relying on non-human tools for emotional support could deter people from seeking professional help, or could worsen an individual's mental state if interactions with AI are inadequate or misunderstood.
OpenAI has taken steps to address such concerns, including building guardrails into ChatGPT's responses and encouraging users to contact crisis services when discussing suicidal thoughts. The company has reportedly partnered with mental health organizations to improve the AI's ability to provide appropriate and safe responses.
Still, the scale of interaction OpenAI has disclosed presents a new frontier for both AI developers and healthcare providers. It signals a shifting public norm, in which people increasingly look to digital tools not just for information or task automation, but for solace, empathy, and guidance during moments of psychological distress.
This development renews longstanding debates about responsibility and regulation in the rapidly evolving AI landscape. It raises difficult questions: whether AI companies have a duty of care similar to that of healthcare providers, and how oversight should be managed when AI tools serve de facto therapeutic roles.
As artificial intelligence continues to embed itself in daily life, the line between utility and vulnerability becomes harder to draw. OpenAI's disclosure serves as a reminder of the human realities behind seemingly abstract technologies, and of the urgent need to balance innovation with ethical responsibility.
