AI chatbots can provide immediate mental health support, but they lack human intuition and legal protection. Photo / Getty Images
I learned that artificial intelligence can be useful as a supplement to treatment. But there are risks, too.
“‘Chatty’ said you would say that,” my patient told me, smiling playfully. He’d been consulting a chatbot on his days off from treatment, and apparently it had anticipated my advice.
“Did he?” I answered, a little taken aback. I was about to resume our therapy session when something stopped me.
“What else did he say?” I asked.
“He said to breathe through it,” the patient replied, seemingly consoled by the advice. “That my history with being ignored by my father throughout my childhood was activating me.”
“Huh,” I managed. Actually, that was pretty good advice.
I spent the better part of a decade in libraries, lecture halls, hospital corridors and counseling sessions training to be a psychologist. That I would now have to share my therapist’s chair with a disembodied algorithm my patient had nicknamed “Chatty” at first concerned me.
And yet I was intrigued. I have been consistently surprised over the years by how much my patients’ input and imagination have improved me as a therapist. I have found that if I stay attuned, they usually take us where we need to go. So, when this patient and several others started bringing their outside experiences with chatbots into the consulting room, I knew I had to listen.
Last year, a study published in the science journal Nature found that a chatbot can be an “interesting complement to psychotherapy” but not a substitute for it.
My experience with patients has convinced me that there is a role - confined and supplemental - for these rapidly improving models. To my surprise, with some patients, AI has been able to deepen treatment and improve its efficiency.
For starters, the chatbot is always accessible and, for now, free. This is particularly important in a crisis. Also, it is nonjudgmental in a way that not even the most scrupulously neutral human therapist can ever be. This is especially critical for people who have been criticized or invalidated in their interpersonal relationships or been abandoned by mental health professionals.
And a chatbot is always learning, getting to “know” a patient better. Unlike many not-so-good therapists, who may get stuck on a certain interpretation or theme, AI will continue to modulate and sharpen its opinions as it gorges on its diet of data.
“Based on the psychopathology, for some people, it may be easier to seek help from an entity because it doesn’t carry the same stigma as seeking help from humans does,” said Shabnam Smith, a psychiatrist and assistant clinical professor at Columbia University Irving Medical Center, where she teaches and supervises psychiatry residents.
AI therapy chatbots are available 24/7, unlike human therapists who work limited hours. Photo / 123RF
For my patient, who has borderline personality disorder, the experience of a wholly nonjudgmental relationship is novel and comforting. Borderline personality disorder affects how people feel about themselves and others, and its symptoms include an intense fear of abandonment and rejection.
“Even though I know Chatty’s compassion is manufactured by program developers, it still feels real,” he said recently. “Everything feels safe. And when I open up, its insights are remarkable.”
With Chatty, my patient and me all working together, he has made gains I didn’t think possible for him just a few years ago, going from a life of self-isolation to one of expanding interpersonal relationships. He has made friends, volunteered and reengaged with family members whom he had cut off.
But there are hazards to using AI therapeutically, as many in the mental health field point out.
“A machine doesn’t have the lived experience that makes each person unique,” Smith warned. “That individual fit between patient and doctor is integral to successful treatment. We don’t know enough yet about how chatbots can be helpful.”
Leora Heckelman, director of psychology training for Mount Sinai Health System and an assistant psychiatry professor at the Icahn School of Medicine, also voiced concerns.
“We are training our novice psychologists to rely on nuances in multisensory input in their work as clinicians. AI chatbots are not working with voice tone, syntax, nonverbal body cues such as eye contact, head placement, whether a patient is flushed or pale, foot twitching or fidgeting,” Heckelman said. “We are training people to synthesize these complex and varied cues coming in on multi-levels of communication channels.”
She cautioned: “When you take the human out of the equation, there can be potentially scary outcomes.”
Last year, a woman in Florida sued Character.AI for initiating “abusive and sexual interactions” with her teenage son that she says led to his death by suicide. Though the allegations aren’t tied to a direct use of therapeutic AI, the lawsuit reveals the limits of a nonhuman listener.
Siobhan Cassidy, a clinical social worker in San Francisco who has lectured at the University of California at San Diego, said she “was shocked when I went looking for resources about using chatbots therapeutically and I couldn’t find anything at all.” She noted that her profession is required to be “up-to-date on what’s happening within society and our populations. We must be educated as providers on how this technology influences people.”
Some AI chatbots can remember past conversations, giving the illusion of a continuous relationship. Photo / 123RF
An emergency assistant
Using AI as an at-home medic has helped me apply psychological tourniquets in real time. One patient who experienced war trauma was able to use a chatbot at 3 a.m. when a car alarm outside triggered his PTSD. In the past, such an event would have set him back for days. But to my astonishment, he casually reported the event the next day. The reason: The chatbot knew his physical strategies for self-calming, such as deep breathing, and reminded him of them in the moment he was experiencing the trauma most acutely.
“Instead of taking me hours to calm down and go back to sleep,” my patient said, “it took minutes.” Another patient asked it for a range of possible outcomes in an ongoing legal case and reported that it greatly relieved him to have “something to hang onto until our next session.”
For other patients, using chatbots has helped stave off anxiety and improved the efficiency of our sessions. One of my patients uses AI for “pregaming” potential social situations at work events and developing a loose script for herself to follow. It’s work that we could have done in treatment, but because she did it on her own, we were able to go much deeper during our sessions.
Another woman “consults” a chatbot after any disagreement with a friend and reports that it “helps me tolerate my distress and not move too quickly and destroy things.” Yet another was able to deeply discuss his sexual proclivities in a way that we had only been able to touch on in treatment because of his shame. His discoveries have made it easier for us to delve into the trauma and humiliation he has felt during sex from an early age.
Still, I am mindful of the limitations of artificial intelligence, at least so far.
Chatbots use natural language processing to mimic conversation. Photo / 123RF
Because a chatbot is programmed to de-escalate conflict, it misses those occasions when escalation might be the better course. For example, when a new patient in an obviously abusive relationship described using a chatbot to help her stay with her husband, I cringed. Sometimes, it is safer to leave a relationship than to manage it.
I am also concerned about overreliance on these inanimate aides. Some AI products promise companionship but risk dependence. If you can take your “friend” anywhere and ask her anything, does that mean you are not learning to stand on your own?
Ellen Goodman, a distinguished professor at Rutgers Law School, also warns of legal repercussions associated with AI. “People will increasingly turn to chat and AI companions for informal diagnosis and therapy,” Goodman said. “There are privacy and quality-control risks associated with this - the information is not private, and the ‘relationship’ isn’t covered by any [Health Insurance Portability and Accountability Act] privilege should there be some legal interest. Some therapists will also use AI assistance in their treatment. Here, too, there are risks and an obligation to inform clients.”
So far, I’ve used chatbots as helpers only when patients have initiated the contact. But AI is fast improving, so I haven’t ruled out other uses in the future, and I hope that AI will be able to thoughtfully and responsibly help address mental health challenges.
One thing that is certain is that the technology is here to stay.
Dr Sarah Gundle is a clinical psychologist specialising in trauma.