
AI Algorithm Alerts Therapists to Suicide Risk in Patients


This article was originally published by HIT Consultant

What You Should Know:

  • In recognition of Suicide Prevention Awareness Month, Talkspace announced results from the last three years of its AI algorithm identifying individuals at risk of self-harm or suicide. Using machine learning, the Talkspace platform can detect language patterns consistent with behaviors that place individuals at high risk of self-harm.
  • The analysis runs in real time on messages patients send in their secure, encrypted virtual therapy room and triggers an urgent alert to the therapist. While Talkspace is not a crisis response service, an alert that an individual is displaying signs of suicidal ideation allows the provider to respond with appropriate care. An analysis of a subset of anonymized, consenting clients flagged for risk suggests the model is 83% accurate. A minimal sketch of this alerting flow appears after this list.
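The flow described above, scoring each incoming message and escalating to the provider when the model flags elevated risk, can be illustrated with a minimal sketch. This is not Talkspace's implementation: the `risk_score` placeholder, the alert threshold, and the `alert_therapist` callback are hypothetical stand-ins for the production NLP model and clinical workflow.

```python
# Hypothetical sketch of a real-time risk-alerting flow (not Talkspace's code).
# A trained NLP classifier would replace risk_score(); the phrase list,
# threshold, and alert mechanism are illustrative assumptions only.

HIGH_RISK_PHRASES = ("hurt myself", "end my life", "no reason to live")  # toy stand-in

def risk_score(message: str) -> float:
    """Placeholder for the trained classifier: returns a probability-like score."""
    text = message.lower()
    return 1.0 if any(phrase in text for phrase in HIGH_RISK_PHRASES) else 0.0

def alert_therapist(client_id: str, message: str, score: float) -> None:
    """Stand-in for the urgent in-platform alert sent to the provider."""
    print(f"URGENT: client {client_id} flagged (score={score:.2f}): {message!r}")

def handle_incoming_message(client_id: str, message: str, threshold: float = 0.5) -> None:
    """Runs on every message sent in the encrypted therapy room, in real time."""
    score = risk_score(message)
    if score >= threshold:
        alert_therapist(client_id, message, score)

# Example usage with fabricated, illustrative messages:
handle_incoming_message("client-001", "Lately I feel like there is no reason to live.")
handle_incoming_message("client-002", "Work was stressful but the weekend helped.")
```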

Machine Learning-Driven Language Patterns Analyzed for High-Risk Behaviors

“Technology will never replace that uniquely human interaction that occurs between provider and patient. However, we will prioritize machine learning capabilities that offer clinical assistance to improve the ability of our therapists to deliver the highest quality of care,” said Jon Cohen, MD, CEO of Talkspace. “In light of escalating suicide rates in the midst of a growing mental health crisis, developing and scaling technological aids for early intervention is mission critical for Talkspace.”

The natural language processing (NLP) model was developed by Talkspace in partnership with researchers at NYU Grossman School of Medicine and trained on anonymized, client-consented therapy transcripts to distinguish messages displaying suicidal risk from those without. Research published in Psychotherapy Research, “Just in time crisis response: Suicide alert system for telemedicine psychotherapy settings” (Bantilan, N., Malgaroli, M., Ray, B., & Hull, T. D., 2020, NYU Grossman School of Medicine), presents evidence that the suicide and risk detection algorithm distinguished risk from non-risk content with 83% accuracy when compared with a human expert evaluating the same material.
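As a sketch of how that accuracy figure can be read: it is the fraction of messages on which the model's risk/non-risk label agrees with the human expert's label. The toy example below uses fabricated labels purely to show the arithmetic; it does not reproduce the study's data.

```python
# Illustrative accuracy calculation against expert labels (fabricated toy data,
# not the study's dataset). 1 = risk content, 0 = non-risk content.
expert_labels = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1]
model_labels  = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1]

agreements = sum(e == m for e, m in zip(expert_labels, model_labels))
accuracy = agreements / len(expert_labels)
print(f"Accuracy vs. expert: {accuracy:.0%}")  # 10/12, roughly 83%
```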

Since its introduction on the platform in 2019, Talkspace’s proprietary NLP model has flagged approximately 32,000 Talkspace members whose written messages to their therapists showed signs of suicidality or risk of self-harm. Of the flagged individuals who continued to receive care through Talkspace, more than 50% demonstrated improved outcomes. According to an internal provider feedback survey, 83% of Talkspace mental health providers find the feature useful for providing clinical care and mitigating clinical risk. Talkspace will continue to develop AI technology with the goal of supporting mental health providers, enhancing quality of care, and improving outcomes for patients.
