How I know AI is not a "mid" technology.
Diagnosed with rare childhood anorexia, I struggled to open up to my therapist. AI chatbots now offer accessible mental health help. Let's embrace, not block, this vital tool.
By the time my weight dropped to 55 pounds, it was unmistakable that something was wrong with me. The cause was unthinkable. Eleven-year-old boys aren’t anorexic, right? That’s just not a thing. Well, it was in my case.
For the better part of a year, I found ways to burn calories and avoid eating. This had an unsurprising effect on my physical and mental health. It also exacted an unquantifiable toll on the well-being of my family, especially my parents. They had tirelessly tried to help me get out of my rut. They urged me to eat, to stop moving, to stop running, to just go back to being normal Kev. They had no experience dealing with a kid struggling with a severe mental illness. In fact, few mental health practitioners do; there’s a reason that when most people think “anorexia,” a prepubescent boy does not come to mind. It’s just not common.
When I reflect on that period, I often ask what, if anything, could have empowered my family to spot my illness sooner or enabled me to change my habits before they nearly killed me. Some readers may scoff, but I honestly think that if my parents and I had had access to the gen AI chatbots available today, things might not have gotten so bad.
Mental illness is incredibly difficult to discuss with anyone. For parents, it may feel like a failure to admit that your child faces mental demons. They may experience guilt or shame at not knowing how to help their kid address those difficulties and develop healthier mental habits. Kids, meanwhile, are unlikely to have anyone to turn to. At least in my case, my friends had far more pressing things on their minds (namely, backyard football and their latest crushes) than concerns about body image. Ideally, therapists and other mental health practitioners could readily help all affected parties make sense of the best path forward. In practice, therapists are expensive, often hard to find, and, in some cases, difficult to reach (they’re not known for having lots of free appointment slots).
Gen AI tools have tremendous potential to alleviate a significant shortage of mental health support. A recently released study concluded that Therabot, a gen AI chatbot, "hold[s] promise for building highly personalized, effective mental health treatments at scale." A trial involving 201 adults with clinically significant mental health symptoms showed that Therabot users significantly benefited from engaging with the bot. Users took advantage of the readily available tool, spending an average of six hours conferring with it, and rated that time as nearly as useful as time spent with a human therapist.
I know that younger Kev would have had a far easier time “talking” with a chatbot than with a human therapist. A visit to the therapist involved a 45-minute drive, sitting in a waiting room with a bunch of old folks, and sharing some of my deepest thoughts with someone I barely knew and hardly trusted. My hunch is that I would have spilled the proverbial beans to a gen AI chatbot if I had had the opportunity. I’d wager that many other folks struggling with mental illness would say the same.
This isn’t to say that chatbots can or should replace human therapists. Everyone’s mental health journey is unique, and everyone’s path through it depends on a distinct set of interventions and forms of assistance. But blocking access to AI chatbots, as some want to do, or questioning the need for such chatbots at all is unfair and unjust to the millions of Americans seeking to address a mental health challenge.

That’s precisely why I’m so angry with the false and shortsighted argument that AI is a “mid” technology. Where current AI offerings fall short, it is often not because of technological shortcomings but because of legal and policy barriers that prevent a full realization of their potential to help people.
Today’s versions of mental health AI chatbots are the worst ones that will ever be available. Yet they already offer assistance comparable to that of human therapists. Those benefits will go unrealized so long as professional guilds and skeptical policymakers delay people’s access to such tools. Thankfully, some legislators have realized that we should allow individuals to seek out the forms of care that best align with their budgets and preferences. Utah, for example, is paving the way for the responsible diffusion of mental health AI chatbots. Other states should follow suit.
AI would not have stopped me from becoming anorexic. But if it could have shortened my suffering and the suffering of my family by even one day, then that’s a tool I wish I had had. Others are in the same boat. So let’s stop repeating the false narratives about AI spread by “thought leaders” and start listening to the evidence. In many cases, I suspect the evidence will show AI isn’t quite up to snuff; it’ll take time to become a useful tool in many contexts. But where its benefits are demonstrable, the law, lawmakers, and, especially, technological naysayers should get out of the way.