New app aims to predict whether people with psychosis are worth hiring

Editor’s Note: Part of MIA’s core mission is to present a scientific critique of the existing paradigm of care. Each week we will be republishing MIA’s latest blog on the evidence supporting the need for radical change. This week’s summary is below.

In the third season of the dystopian sci-fi show Westworld, Caleb (played by Aaron Paul) is a veteran diagnosed with PTSD whose mental health treatment is conducted by app and algorithm. His options are dictated by a sophisticated artificial intelligence that predicts the best options for him and delivers them instantaneously.

But when that AI determines that Caleb is at risk of dying by suicide—years down the road—it systematically cuts him off from work, relationships (ever wonder why you can’t find a match on that dating app?), and all of the other aspects of life that might be protective. Because the AI thinks he will fail, it never even gives him the chance to succeed.

Is this just far-fetched dystopian science fiction?

In a new article in JAMA Psychiatry, researchers offer just such an app. Their goal:

“To develop an individual-level prediction tool using machine-learning methods that predicts a trajectory of education/work status or psychiatric hospitalization outcomes over a client’s next year of quarterly follow-up assessments. Additionally, to visualize these predictions in a way that is informative to clinicians and clients.”

The research, funded by a grant from the National Institute of Mental Health, was led by Cale N. Basaraba, MPH, at the New York State Psychiatric Institute.

The data came from 1298 people enrolled in OnTrackNY, a system of 20 programs across New York for people with recent-onset (first-episode) psychosis. The algorithm was trained on data from around 80% of the participants and “internally validated” on that same data. It was then tested on the remaining 20%, data that had not been included in its training, to see how well it might predict outcomes for people who hadn’t been part of its creation (“externally validated”). It was also tested on new patients who enrolled after its creation.
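For readers unfamiliar with that kind of validation, here is a minimal sketch of the general train/held-out workflow in Python with scikit-learn. It is not the authors’ code: the file name, feature names, outcome column, model choice, and metric are all hypothetical stand-ins.

```python
# A minimal sketch of the train / held-out evaluation workflow described above.
# Everything here is illustrative: the file name, column names, model choice,
# and metric are assumptions, not the study's actual pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("ontrack_baseline.csv")  # hypothetical baseline dataset
features = ["age", "baseline_employment", "global_functioning"]  # assumed predictors
X, y = df[features], df["work_school_at_3_months"]  # assumed binary outcome

# Hold out roughly 20% of people; the model never sees them during training.
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# "Internal" check: scored on the same people the model was trained on.
internal_acc = accuracy_score(y_train, model.predict(X_train))

# Held-out check: scored on the 20% that were excluded from training.
holdout_acc = accuracy_score(y_holdout, model.predict(X_holdout))

print(f"internal accuracy: {internal_acc:.0%}, held-out accuracy: {holdout_acc:.0%}")
```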

So, how successful was it?

Using just a person’s baseline data, the algorithm was able to predict whether the person would be in school or employed three months later with 79% accuracy in the internal dataset and 78% in the external data. By 12 months, that fell to 70% in the internal dataset and 67% for new clients, but strangely rose to 81% for the external dataset. In general, not a terrible showing, but not a particularly good one either.

However, using the baseline and three-month data, the algorithm was able to predict six-month work/school status with 85% accuracy in the internal dataset, 79% accuracy in new clients, and 99% accuracy in the external dataset, a near-perfect prediction. Accuracy fell to 77% for predicting one-year outcomes, but this is still an incredible showing for the algorithm.

It was much worse at predicting hospitalization, though. Using baseline data, the algorithm predicted hospitalizations by the three-month mark with 58% accuracy in the internal dataset, 55% for new clients, and 42% accuracy in the external dataset—managing to perform even worse than random chance (50%).

Given more time points of data, it was able to predict three-month hospitalization outcomes with slightly better accuracy, around 70% in the internal dataset, but accuracy remained low in the other datasets.

The real question, though, is what the algorithm adds. The top predictors it used seem self-explanatory: Those who already had a job, who were younger, and who were higher functioning were more likely to end up employed.

What is the purpose of such an algorithm? It doesn’t identify these risk factors—these factors are all the inputs to the algorithm, identified and then entered by the treatment team. The output is a score, a number that turns these risk factors into a likelihood of success. It tells clinicians simply this: what is the chance that this person is worth taking a risk on? Should we bother to help this person gain employment, set up a dating app, find housing?
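To make that concrete, here is a toy sketch, again not the authors’ tool, of how risk factors entered for one client get turned into a single probability “score.” The tiny dataset, feature names, and model are invented purely for illustration.

```python
# A toy illustration of "risk factors in, score out." The tiny dataset,
# feature names, and model are invented; real tools are trained on far more data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in training data (entered risk factors and later outcomes).
train = pd.DataFrame({
    "age": [19, 24, 30, 22, 27, 21],
    "baseline_employment": [1, 0, 0, 1, 0, 1],
    "global_functioning": [60, 40, 35, 55, 45, 65],
    "work_school_at_6_months": [1, 0, 0, 0, 1, 1],
})
features = ["age", "baseline_employment", "global_functioning"]
model = RandomForestClassifier(random_state=0).fit(
    train[features], train["work_school_at_6_months"]
)

# One new client's risk factors, as a treatment team might enter them.
client = pd.DataFrame([{"age": 23, "baseline_employment": 0, "global_functioning": 48}])
score = model.predict_proba(client)[:, 1][0]
print(f"Predicted chance of being in work/school: {score:.0%}")
```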

If the algorithm determines that it’s unlikely that they will find employment, why would a clinician try to help them find a job?

If the algorithm determines that they’re just going to end up involuntarily hospitalized, do you think a clinician would help them set up a dating app?

And if it predicts they’ll be unemployed and in and out of the hospital, will a clinician try to help them find a home?

This app is coming. The researchers write that they have made plans to field test their app and have it evaluated by focus groups.

But what of the ethical implications of algorithm-driven mental health care? What of the possibility—as in Westworld—of a self-fulfilling prophecy of failure?

“Unfortunately, the ethical considerations of incorporating these tools are rarely acknowledged in published prediction articles,” the researchers write.

Peter Simons was an academic researcher in psychology. Now, as a science writer, he tries to provide the layperson with a view into the sometimes inscrutable world of psychiatric research. As an editor for blogs and personal stories at Mad in America, he prizes the accounts of those with lived experience of the psychiatric system and shares alternatives to the biomedical model.
