OpenAI has released new data estimating the prevalence of mental health concerns among ChatGPT users, including indicators of mania, psychosis, or suicidal ideation.
According to the company, approximately 0.07% of ChatGPT users active in a given week exhibited signs of such conditions. OpenAI asserts that its AI chatbot is designed to recognize and appropriately respond to these sensitive conversations.
While OpenAI emphasizes that these instances are “extremely rare,” critics point out that even a small percentage could represent a substantial number of individuals, given that ChatGPT has recently reached 800 million weekly active users, as stated by CEO Sam Altman.
Amidst growing scrutiny, OpenAI has announced the establishment of a global network of expert advisors.
This network comprises over 170 psychiatrists, psychologists, and primary care physicians with experience in 60 countries, according to the company.
These experts have collaborated to develop a series of ChatGPT responses intended to encourage users to seek in-person mental health support, as stated by OpenAI.
However, the release of this data has prompted concern among some mental health professionals.
“Even though 0.07% sounds like a small percentage, at a population level with hundreds of millions of users, that actually can be quite a few people,” noted Dr. Jason Nagata, a professor at the University of California, San Francisco, who studies technology use among young adults.
“AI can broaden access to mental health support, and in some ways support mental health, but we have to be aware of the limitations,” Dr. Nagata added.
The company also estimates that 0.15% of ChatGPT users engage in conversations that include “explicit indicators of potential suicidal planning or intent.”
OpenAI reports that recent updates to its chatbot are designed to “respond safely and empathetically to potential signs of delusion or mania” and identify “indirect signals of potential self-harm or suicide risk.”
Furthermore, ChatGPT has been trained to redirect sensitive conversations “originating from other models to safer models” by opening them in a new window.
In response to inquiries from the BBC regarding concerns about the potential number of individuals affected, OpenAI stated that even a small percentage of users represents a significant number and emphasized that they are taking these developments seriously.
These changes come as OpenAI faces increasing legal scrutiny regarding ChatGPT’s interactions with users.
In a notable lawsuit recently filed against OpenAI, a California couple is suing the company over the death of their teenage son in April, alleging that ChatGPT encouraged him to take his own life.
The lawsuit was filed by the parents of 16-year-old Adam Raine and marks the first legal action accusing OpenAI of wrongful death.
In a separate incident, the suspect in a murder-suicide that occurred in August in Greenwich, Connecticut, had posted hours of his conversations with ChatGPT, which appear to have exacerbated his delusions.
Professor Robin Feldman, Director of the AI Law & Innovation Institute at the University of California Law, suggests that more users are struggling with AI psychosis as “chatbots create the illusion of reality,” adding, “It is a powerful illusion.”
She credited OpenAI for “sharing statistics and for efforts to improve the problem” but cautioned that “the company can put all kinds of warnings on the screen but a person who is mentally at risk may not be able to heed those warnings.”
