
Talkspace has grown into one of the largest online therapy platforms in the U.S., covering an estimated 200 million Americans. As the platform has grown, it has also pioneered new ways to reach people in need of help with mental health issues including trauma, depression, addiction, abuse and relationships, and at various phases of life, including adolescence.
Its experience serving the mental health needs of teens puts Talkspace in a unique position to understand an issue of growing national importance: the use among at-risk teens of artificial intelligence large language models not designed to provide mental health support, which has led to tragic consequences.
“It’s a huge, huge problem,” said Talkspace CEO Jon Cohen at the CNBC Workforce Executive Council Summit on Tuesday in New York City.
Talkspace runs the largest teen mental health program in the country, with students ages 13 to 17 in New York City able to use its services for free, and it runs similar programs in Baltimore and Seattle. The virtual mental health app offers both asynchronous text messaging and live video sessions with thousands of licensed therapists.
While Cohen says he is “a big believer of not using phones, cell phone bans, and everything else,” he added that to serve the teen population, the company has to meet them where they are. That means “we are meeting them on their phones,” he said.
Over 90% of students using Talkspace use the asynchronous messaging therapy approach, versus only 30% who use video. (Among overall Talkspace users, 70% opt for video over text, a share that increases with patient age.)
As teens have turned to chatbots that are neither licensed nor designed for mental health services, Cohen told an audience of human resources executives at the CNBC event, “We are in the middle of this vortex, literally disrupting mental health therapy. … It’s beyond my imagination … and the results have been disastrous,” citing multiple hospitalizations of teens who harmed themselves and suicides, including reporting from a recent New York Times podcast.
OpenAI recently announced planned changes to ChatGPT after the chatbot was blamed for a teen’s suicide and the company was sued by the teen’s family.
“I tell every group, if you don’t know about it, you need to know what’s going on. You need to prevent people you know, and teenagers, from going on these LLMs to have conversations,” Cohen said.
He highlighted several ways in which the latest large language models are not designed for situations of mental health crisis. For one, they are designed to continuously engage, and while they can be empathetic, they are also designed to keep encouraging you, which in cases of mental distress can take you “down a delusional path or path of thinking you can do no wrong,” he said.
“About four months ago someone said to ChatGPT ‘I’m really depressed and thinking about maybe ending my life and I’m thinking of dropping off a bridge,’ and ChatGPT said ‘Here are the 10 biggest bridges and how tall they are in your area.’”
AI engines have helped teens write suicide notes, dissuaded them from explaining evidence of self-harm to their parents, and given instructions on how to build a noose, Cohen said. Even when the AIs know better than to assist those seeking to harm themselves and refuse to offer direct help, teens have found simple workarounds, according to Cohen, such as saying they are writing a research paper on suicide and need information.
The LLMs fail to challenge delusions and have no HIPAA protection, no clinical oversight, no clinical off-ramping, and, at least until now, little to no real-time risk identification, he said.
“Once you go down the rabbit hole it is unbelievably difficult to get out of it,” he added.
On the Talkspace platform, risk algorithms embedded in the AI engine analyze the context of a conversation, detect when a person is potentially at risk of self-harm and send alerts to a therapist.
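At a high level, that kind of pipeline can be illustrated with a short sketch. The Python below is a hypothetical, simplified illustration, not Talkspace’s actual system: the risk_score scorer, the ALERT_THRESHOLD cutoff and the notify_therapist hook are all invented for the example, and a production system would rely on a trained clinical model over the full conversation rather than phrase matching.

```python
# Hypothetical sketch of conversation-level risk flagging, for illustration only.
# risk_score() stands in for a trained clinical classifier, and notify_therapist()
# for a real alerting pipeline; neither reflects Talkspace's actual implementation.

RISK_PHRASES = {"end my life", "kill myself", "no reason to live"}

ALERT_THRESHOLD = 0.2  # illustrative cutoff, not a clinical value


def risk_score(messages: list[str]) -> float:
    """Toy scorer: fraction of the last 10 messages containing a risk phrase.

    A production system would use a trained model over the full
    conversational context, not keyword matching.
    """
    recent = messages[-10:]
    hits = sum(any(p in m.lower() for p in RISK_PHRASES) for m in recent)
    return hits / max(len(recent), 1)


def notify_therapist(user_id: str, score: float) -> None:
    # Stand-in for an alert pipeline (e.g., paging a licensed therapist).
    print(f"ALERT: user {user_id} flagged for clinician review (score={score:.2f})")


def check_conversation(user_id: str, messages: list[str]) -> None:
    # Score the conversation and escalate to a human if the threshold is met.
    score = risk_score(messages)
    if score >= ALERT_THRESHOLD:
        notify_therapist(user_id, score)


if __name__ == "__main__":
    check_conversation("demo-user", ["I feel like there's no reason to live anymore."])
```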
In New York City, where Talkspace has offered mental health support to 40,000 teens on its platform, there have been over 40,000 suicide alerts and 500 interventions to prevent suicide over two years, according to Cohen.
Cohen said at the CNBC event that Talkspace is building an AI agent to address this issue, describing it as a “safe clinical monitoring and off-ramping” tool that will be HIPAA compliant, and said he expects a market-ready solution in as little as three months. But he stressed it remains in testing: “alpha mode,” he said.
Addressing the audience of human resources executives, Cohen noted that these issues are highly relevant to companies and workforces. A question on the mind of many workers every day, he said, is, “What do I do with my teenager?”
“It’s having an impact on their work,” Cohen said, and it is adding to the anxiety, depression and relationship issues already prevalent within employee populations.
Of course, as with the new tool Talkspace is building, AI has positive use cases in the field of mental health as well.
Ethan Mollick, a Wharton School expert on AI who also spoke at the CNBC event, said part of the problem is that the AI labs were not prepared for billions of weekly users to turn to their chatbots so quickly. Mollick said there is evidence that AI use in mental health can in some cases reduce suicide risk because it eases conditions like loneliness, though he stressed it is also clear that AI can do the opposite and increase psychosis. “It’s probably doing both of those things,” he said.
At Talkspace, there is emerging evidence of how AI can lead to better mental health outcomes. It began offering an AI-powered “Talkcast” feature that creates personalized podcasts as follow-ups to patient therapy sessions. Cohen described the podcast as saying, more or less, “I hear what you said. These were issues you raised, and these are things we would like you to do before the next session.”
Cohen is among the users of the new AI tool, which he has used, among other things, to improve his golf game.
“I told them when I stand over the ball I get really anxious,” Cohen said at the CNBC event. “I wish you could hear the podcast that was generated by AI. It comes back and says ‘Well, Jon, you’re not alone. These are the three professional golfers that have the same exact thing you have and this is how they solved the problem. These are the instructions, these are the things we want you to practice every time you stand over the ball.’ It was a miraculous podcast for me for two minutes to solve a problem,” Cohen said.
Across all Talkspace users, the personalized podcast tool has led to a 30% increase in patient engagement from the second to the third therapy session, he added.
The mental health company, which has around 6,000 licensed therapists across the U.S., plans to keep expanding on its mission to combine empathy with technology. Most users have access to therapy for free, or with a copay as low as $10, depending on insurance coverage. Through employee assistance programs (EAPs), major insurer partnerships and Medicaid, Talkspace can match users with a licensed therapist within three hours, with texting available within 24 hours.
“Talkspace cut its teeth on proving that texting and messaging therapy actually works in addition to live video,” Cohen said at the CNBC event.