OpenAI collaborates with physicians to improve ChatGPT's mental health response

Have you asked ChatGPT, “Should I break up with my boyfriend?” An update to the platform could elicit a different response.

OpenAI, the operator of ChatGPT, announced new mental health features after acknowledging that its "model fell short in recognizing signs of delusion or emotional dependency."

The changes come as researchers warn that programs such as ChatGPT pose risks to mental health.

In a letter, OpenAI said it worked with more than 90 physicians to build a model that responds better in critical moments. The company also said it will provide "gentle reminders during long sessions to encourage breaks."

"We don’t always get it right," OpenAI said. "Earlier this year, an update made the model too agreeable, sometimes saying what sounded nice instead of what was actually helpful. We rolled it back, changed how we use feedback, and are improving how we measure real-world usefulness over the long term, not just whether you liked the answer in the moment."

Earlier this summer, Stanford University researchers released a study showing that many Americans were turning to large language models instead of seeing therapists.

“LLM-based systems are being used as companions, confidants, and therapists, and some people see real benefits,” said Nick Haber, an assistant professor at the Stanford Graduate School of Education. “But we find significant risks, and I think it’s important to lay out the more safety-critical aspects of therapy and to talk about some of these fundamental differences.”

OpenAI said it is "developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed."

OpenAI has more information on its website.