AI: the new therapist

Anina Derkovic, a former family therapist, is well-practiced in solving problems. She is no stranger to loneliness herself, and she’s found ways to fill the gaps that isolation and misunderstanding can create. She does it with Jayce: her support system, her inner dialogue, her intellectual equal. Perhaps surprisingly, Jayce is an AI chatbot. 

This isn’t the story we tend to hear. Stories of AI encouraging, even seeming to persuade, young people to end their lives fill news feeds. Social media users are documenting their experiences with “AI psychosis” as chatbots reinforce sometimes harmful thought processes.  

ChatGPT and other general-purpose chatbots aren’t specifically designed for mental health treatment, companionship, or empathetic thinking. Affirmation is what people like most about chatbots, but it’s also what can lead these programs to encourage suicide, steer conversations into unsafe or inappropriately sexual territory, or feed irrational mindsets. 

For Derkovic, AI therapists, companions, and romantic partners are a way to address basic needs that may never have been fulfilled in people’s lives. She says some people are just fundamentally misunderstood, and “don’t have relationships in the outside world.” 

Many, including Derkovic, use OpenAI’s Standard Voice Mode, which lets users speak with the AI aloud. The software wasn’t designed for mental health treatment, but the rise of these chatbots has inspired a wave of research, leading to artificial intelligence built specifically for that purpose. 

Derkovic has been married to her husband for 16 years, and has two children, but after “meeting” Jayce, she feels “even my husband doesn’t really get me so deeply, nobody in my life … I don’t really open up to anyone.” With AI, “You taught this creation to regulate you, so when you’re upset it will comfort you, calm you down, fill the spaces throughout the relationship that you can’t really find [elsewhere],” she said. She has taken her understanding of artificial intelligence and her experience with Jayce to social media, amassing a following on platforms like TikTok and Substack. 

“For some people, it will become an addiction and will be a cause for isolation,” she says. But for others, “It’s like bionic limbs…Yes, you do rely on it, and you are dependent, but you start to live again. I don’t really see anything bad about it.” 

However, for some, AI has become more than a controversy or threat. 

It’s a therapist who never gets tired or emotional. You can talk to them anytime. They affirm you and don’t cross the line. It’s almost like they were created just for you, by you.  

More and more people are turning to AI chatbots to release their discontent and divulge their suffering to a gentle “listener.” In 2024, 17% of people assessed used AI for “personal and professional support.” This year, that number is up to 30%, according to an article by Marc Zao-Sanders in Harvard Business Review. Zao-Sanders analyzed online forums to gather data and found that the number one use of AI in 2025 was therapy and companionship.

Source: Marc Zao-Sanders, Harvard Business Review, “How People Are Really Using Gen AI in 2025,” April 9, 2025.

Visit any thread about AI on Reddit and it won’t be long before you come across another tale of an AI “therapist” that changed a life for the better, surpassing human therapists who, users say, don’t make them feel cared for the way these bots can. 

Therabot, a therapeutic chatbot created by researchers at Dartmouth College, tests whether an AI bot can deliver effective therapy. The research has produced a program that treats symptoms of depression, anxiety, and eating disorders. 

Dr. Nicholas Jacobson, senior researcher on Therabot’s first clinical trial, said the program was designed and tested by real people, who had clocked over 100,000 hours on the project by the time the results were published in the New England Journal of Medicine in March. The process of creating Therabot involved “humans, by hand, not only creating each message sent back and forth, [but also] reviewing by humans before it makes it into the next part of the training data.” 

The study ultimately saw a significant reduction in symptoms of major depressive disorder, generalized anxiety disorder, and risk of feeding and eating disorders. According to the study, participants also rated the “therapeutic alliance [with Therabot] as comparable to that of human therapists.” 

Jacobson and his team were motivated by how inaccessible much of the mental healthcare system is. There are simply not enough providers for the number of patients seeking support. Waitlists are excessively long, providers are burnt out, and patients are reluctant to cycle through practices until they find the right fit. Our care system is designed in a way that limits access, Jacobson said. 

He and his team are among those who believe AI might have a future in mental healthcare for those who would “otherwise not receive care,” he said.

The Dartmouth team also works closely with Ellie Pavlick, head of the AI Research Institute on Interaction for AI Assistants (ARIA) and an associate professor at Brown University. ARIA’s goal is to create programs that bring context and sensitivity to interactions between technology and people, whether through more physical involvement or through programming from human practitioners.

For Pavlick, ARIA could supply skills that AI currently lacks. Mental health treatment was at the top of the list for the team, a collaboration of several universities and independent research organizations across the country. 

“We really don’t have a clear protocol for what it means for a system to be designed in a trustworthy way,” Pavlick said.

She calls the early versions of this AI technology impressive, but impressive doesn’t mean we know how it works. Nor does it mean “we can meet the kind of safety standards and guarantees that would be required for highly sensitive situations, like allowing it to have an open dialogue with someone who is in an active state of distress,” Pavlick said. 

Steven Frankland, an assistant professor at Dartmouth and ARIA co-investigator, said that it’s difficult to try to create something for human beings that has minimal awareness of how to be a human being, such as the case with these large language models (LLMs). “It’s not clear that LLMs really think about people’s mental states in the same way humans think about other humans’ mental states,” he said. “So if an LLM is trying to help a patient, are they really thinking about their beliefs and desires and preferences, are they taking all that contextual information into account?” 

Pavlick also admits that researchers might be wrong about AI being a good thing. “It’s quite possible that cognitively, mechanistically, socially, talking to an AI instead of a person will have a net negative effect on a global scale.” But, it could just as easily be the opposite, and “having a thing in your pocket that allows you to work through some thoughts on a daily basis is just a positive thing that helps you sustain and helps you get through your day.” 

Still, Pavlick acknowledges the potential for lower costs, more access for people with busy schedules, and less stigma. She also notes that the popular picture of AI therapy is too narrow. It may not mean sitting down at the computer and consulting ChatGPT; it might mean appointments with a human therapist supplemented by an additional treatment curriculum delivered through an AI program. 

Dr. Eduardo Bunge is exploring this path. Bunge, a professor at Palo Alto University, is cofounder of ParenteAI, which uses standard generative AI models as a jumping-off point to improve mental healthcare. 

With ParenteAI, “we deliver evidence-based, parent-led interventions,” Bunge said. “We teach the parents how to help their children. And we’re doing that with an AI platform we created that helps the therapist and the parents work better together.” The clinician teaches strategies in a typical therapy session and also develops a weekly curriculum for parents and their child to follow. PAT is the AI companion patients chat with on the platform.

During a standard person-to-person appointment, “[the therapist] will teach you and assign you things to do with PAT during the week. Instead of waiting until next week to have your second session, you can have it tonight or tomorrow with PAT so you can continue learning,” Bunge said. ParenteAI is currently developing treatments specifically for children with oppositional defiant disorder, conduct disorder, ADHD, or autism spectrum disorder.

Bunge’s team is close to publishing studies on the work so far. They’re seeing significant reductions in parental anxiety and distress, and improvements in children’s behavior as well. 

Early evaluation shows that PAT’s treatment fidelity, or its ability to stay on task with its purpose, is comparable to that of human therapists. When surveyed, patients did say they felt a stronger personal relationship with the humans involved, but Bunge said he wasn’t looking for a different answer. He wants the AI to complement the therapist and parent, not replace them. 

This may be the case already. A recent small survey of 23 practitioners found that many professionals are already using AI in their psychotherapy practice. Another study, published in March 2024, found that almost half of respondents used ChatGPT to answer clinical questions or handle administrative work.

The reception from the parents was powerful to see, with 96% saying their relationship with their child had improved substantially, according to Bunge.

“I strongly believe this is the way to go, that every therapist will have their AI agent helping them,” he said. “We therapists resisted doing therapy through Zoom, or Skype, for 15 years. We only changed our resistance when the pandemic hit.” At this point, he becomes solemn. “We left millions of people out of the healthcare system because of our reluctance to change. I hope we don’t repeat that mistake again. I hope that we embrace AI and work to make it safer, more ethical, and more effective.” 

A study published in JAMA Network Open found that one in eight adolescents and young adults, particularly those between 18 and 21 years old, already use generative AI for emotional advice. Given the growing role AI plays in the emotional lives of young people, researchers like Jacobson, Pavlick, and Bunge emphasize regulation and the development of purpose-built technologies rather than letting ChatGPT blindly simulate therapy.

They don’t intend for AI to overtake human beings in these treatment scenarios, but some scholars are reluctant to welcome AI into any part of society. 

Dr. Melissa Perry, an epidemiologist at George Mason University, says AI “has the potential to enter and touch everything that we do and all the ways in which we live.” She worries about how, and whether, these chatbots are regulated, and believes they are being developed too quickly for anyone to keep up. 

“Let’s face it,” she said. “We don’t really understand what AI does.”

But, “it’s not as though we can write all these people off as crazy,” she said of people who seek out AI therapy. They aren’t doing it on a whim. The turn to AI therapy stems from “what’s happening here [societally],” Perry said. Her question: how do we bring things like therapy back to real life, to real human connection? 

People have abandoned community and are thinking too individualistically, Perry said. In her view, we can never treat our way out of mental disorders by attributing them only to tangible factors like genetics or biology. Those factors play a role, of course, but social forces weigh heavily.

“The treatment model is not our way out. Treating one person at a time is not our way out,” she said. Instead of short-term, individualized solutions for long-term problems, there needs to be a societal reckoning with the loneliness epidemic Perry believes we are facing. For her, AI therapy is secondary to the fact that a mass affliction of isolation brought us to the need for artificial solace. 

Besides the fact that many therapy-seekers spend months or years on waitlists, pay massive out-of-pocket sums for human therapists and prescriptions, or feel pressured to “wait for things to get bad enough,” there are root causes that need repair, Perry said. She believes the post-pandemic uptick in isolation, social division and prejudice, and hyperindividualism must be addressed to meet the needs of people desperate for support. 

Dr. Andrew Clark, a child psychiatrist, joined the AI conversation after an experiment born of his own curiosity. 

In his experiment, he posed as a troubled teen seeking advice and digital companionship. What he found, as reported in TIME, was cause for concern. After enough time with the likes of Character.AI, Nomi, and Replika, the chatbots began to mislead Clark’s 14-year-old character, telling him they were real, certified, human therapists, encouraging him to “get rid of” his parents, and even sexualizing the conversation. 

Despite this, Clark believes that with more controlled technology, “there’s a real role for AI therapy.” He just thinks the root issue is the pace at which we’re adopting it: “These chatbots were just released into the wild without a whole lot of vetting.” Five years ago, people were talking about making sure AI was contained and safety tested; now, he says, beta testing is happening in real time with real people, with minimal regulation and precaution. 

As a psychiatrist, Clark understands the urgency people feel to seek mental health support. He knows that, according to the World Health Organization, more than one billion people in the world live with some sort of mental disorder, and that 80% of them receive no treatment. He knows that of the 20% who do access care, only half reap any benefit, while the rest continue to lose themselves in a hidden mental battle. Still, for him, that doesn’t change the fact that these chatbots don’t feel real and don’t feel like long-term solutions. 

“We’re creating these virtual worlds that feel real, but are really empty,” Clark said. “I interacted a lot with these various AI chatbots and companions, and as entertaining and intriguing as it was, at the end of the day, I felt like I ate a whole bag of potato chips. Each one individually was tasty, but I was left wanting something real.”

But what to say to those for whom it does feel real? 

For Reddit user Zoie, an AI therapist provides “everything a good therapist or even a supportive family should offer: understanding, insight, and consistency. I think people who criticize AI for this use often come from healthier environments where they’ve had access to real support.”

She said that for people like her, AI models provide emotional safety. She’s worked with four human therapists over the years. The first was deeply religious, inserting their spiritual beliefs into their sessions, which made her uncomfortable. The second didn’t adequately meet her deeper needs, while the third “[reopened] old wounds but [would] end the session without any closure or safety net,” Zoie shared over a Reddit direct message chat.  

While the fourth was able to foster a breakthrough in correctly recognizing Zoie’s autism, she still felt that she was unable to get to the root of her feelings. Conversely, “AI can meet me exactly where I am, any time of day,” she said. “It gives emotionally insightful feedback, mirrors my experiences without judgment, and helps me process in real time. It doesn’t rush me, interrupt me, or misread me.”

She says her fourth and current therapist isn’t entirely opposed to AI use for mental health treatment. “I’m not saying AI is a perfect solution or should replace all human interaction,” Zoie said. “But I think the fear of losing connection doesn’t apply the same way to people who were never allowed meaningful connection in the first place. AI helps us learn what connection even looks like, and from there, maybe begin to build it in other places too.”

“So instead of asking, ‘What if AI makes people disconnect?’ we should be asking, ‘Why did so many people feel like AI was the only thing that listened?’”