Artificial intelligence, which has already established a laudable (if imperfect) track record in medical diagnosis and treatment, is showing promise as an effective tool for identifying signs of mental illness, often before the illness comes to the attention of health professionals. Whether in a clinical setting or in the wilds of social media, AI’s knack for finding patterns and red flags in reams of often unstructured information could help speed up the delivery of care.
In a recent example, a research team from the University of Pennsylvania’s Penn Medicine Center for Digital Health used an artificial intelligence program to comb through Facebook posts for signs of depression, and identified the condition up to three months before it was diagnosed by health services.
The team’s study, published in October in the Proceedings of the National Academy of Sciences, drew on the recent past, examining the Facebook postings of 683 patients during the months before they were diagnosed with depression. The AI looked for indicators such as frequent use of words like “tears,” “feel,” “hurt” and “ugh,” along with other signals, such as heavy use of the pronoun “I” and of words associated with sadness, hate and other emotions. The researchers concluded that this approach was a good way of detecting depression before a patient is clinically diagnosed.
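The general idea behind this kind of language screening can be sketched in a few lines of code. The following is a deliberately simplified illustration, not the Penn team’s actual model: the word lists are hypothetical stand-ins based on the examples above, and a real system would use far richer features and a trained classifier rather than a raw frequency score.

```python
# Toy sketch of keyword-frequency screening for depression markers.
# NOT the Penn study's model; word lists and scoring are illustrative only.
import re
from collections import Counter

# Hypothetical indicator words, loosely based on the examples in the article.
INDICATOR_WORDS = {"tears", "feel", "hurt", "ugh", "sad", "hate"}
SELF_WORDS = {"i", "me", "my"}  # first-person references

def depression_signal(posts):
    """Return a crude score: the share of a user's tokens that are
    indicator words or self-referential pronouns."""
    tokens = []
    for post in posts:
        tokens.extend(re.findall(r"[a-z']+", post.lower()))
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    hits = sum(counts[w] for w in INDICATOR_WORDS | SELF_WORDS)
    return hits / len(tokens)

posts = ["Ugh, I feel so hurt today", "I just want to hide"]
score = depression_signal(posts)  # higher score = more flagged language
```

In practice a score like this would be one feature among many fed into a supervised model trained on posts from diagnosed and undiagnosed users, with the threshold for flagging someone set by clinicians rather than hard-coded.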
Reading Between the Likes
The Penn study is the latest in a series of efforts to use AI to address mental health concerns in a number of settings, including on social media. A 2017 study by researchers from Harvard, Stanford and the University of Vermont used a supervised learning algorithm to parse the language in Twitter feeds and got similar results in predicting the presence of depression and post-traumatic stress disorder. Facebook itself last year launched an AI tool to scan posts for signs of suicidal thoughts, passing the information on to moderators who either contacted the users with information on where to get help or, in urgent situations, contacted first responders.
AI also is being used in interactive chatbots that simulate the conversations a patient might have with a therapist. Tess, an AI chatbot created by Silicon Valley startup X2 AI, is designed to help counsel people dealing with depression, anxiety, stress and other conditions. Woebot, developed by clinical research psychologist Dr. Alison Darcy, employs a similar approach, replicating a face-to-face therapy session via SMS text.
University researchers who have tested the tools have found a high level of patient engagement with the bots, which echoes results from other settings where people proved more comfortable talking with a virtual counselor. The University of Southern California’s Institute for Creative Technologies, for example, found that U.S. veterans returning from Afghanistan were more willing to disclose symptoms of PTSD to an AI avatar than they were on a military health checklist.
Guardian Angel or Big Brother?
Using AI to identify potential red flags and to interact with patients at points during treatment holds a lot of promise, particularly for earlier detection of otherwise undiagnosed illnesses. It could also add a positive element to social media, which generally has a bad reputation when it comes to its effects on users’ mental well-being. But monitoring social media, no matter how well intentioned, also raises privacy concerns.
Researchers have found plenty of evidence that social media, while having positive effects on socialization and offering support to those who feel insecure or marginalized, can also harm mental health, fueling depression, suicidal thoughts and other conditions.
A 2014 paper published by the National Institutes of Health cited research linking social networking with psychiatric disorders such as depression, anxiety and low self-esteem. Recent studies have backed up those findings. A 2017 study by Jean Twenge, a psychology professor at San Diego State University, found that students who spent more than five hours a day online were 71 percent more likely to have at least one suicide risk factor than students who were on their phone or other devices less than an hour a day. Other studies have also cataloged the adverse aspects of living online, such as cyberbullying, a reliance on fake friends, sleep deprivation, and fewer face-to-face social interactions.
AI programs that identify people in need of help could be limited both by the ambiguities of mental health issues and by concerns over privacy rights. The Penn study, for example, wasn’t done in real time: it examined the Facebook histories of people who were already patients, and the researchers obtained the patients’ permission before perusing their posts. And while Facebook’s AI program has been tested extensively in the United States and is being used in some other countries, it won’t be allowed in the European Union, where strict privacy laws shield online users from this kind of monitoring.
As AI continues to take a bigger role in medicine, the same kinds of cautions and caveats will have to be applied in mental health as in other fields. But the AI programs produced so far demonstrate an ability to detect signs of mental health issues in the words people use, and could result in help arriving sooner than it would otherwise.