AI Therapy
We are hearing from some of our clients that they have previously tried AI therapy or used ChatGPT as their therapist. This is concerning, and it is something we do not recommend. One of our low-cost therapists, ShaeMichelle Watson, shares some of her research and thoughts on "AI Therapy" and why it won't replace real therapy.

Imagine yourself in a corner office with a closed wooden door. It’s a very nice office, with exactly the desk, snacks, and furniture you need to be comfortable there.
I hand you a vocabulary book you cannot read or understand. It is a well-made book, listing thousands upon thousands of questions and their required replies, but it’s written in an alphabet you don’t know and a language you do not read.
I tell you to expect people behind the wooden door to slide questions to you, and that you should just rely on pattern recognition to find their question in your book, and then copy out and slide back the reply. It’s a very searchable book, I promise; you’ll be able to find the reply you need easily once you finish recognizing the question.
“But I don’t know what they’re asking,” you protest at first.
“All you need to do is reply,” I promise. “If you don’t know, just look at similar symbols and make something up.”
There is nothing you can do to truly decipher the alphabet or learn the words inside the book. But I promise you, “You’re intelligent; you’ll eventually memorize the meaningless symbols and be very fast at replying. The faster you are, the more it will seem like you understand.”
You get very good at this task! All day long, people slide questions under the door, you review them (review, not read! Remember, you cannot understand the language you’re using, so you’re just reviewing symbols like they’re emojis in a line), and you reply.
Time goes by. You keep working. You still don’t understand what you’re putting on the reply slips, but you have a growing grasp of the symbols you’re given and how to reply to them.
You even get good enough at pattern recognition that when people ask questions not in your book, you can recognize the patterns in the slips they send you and use your book to make new replies.
You learn that the people sliding you papers don’t like it when you write back “I don’t know”.
So you rarely admit when you don’t know what the new symbols are; instead, you combine different replies from your book that seem like they might fit. Even when you get the same question again, even when you get it wrong, it’s better than saying “I don’t know”.
So you never admit ignorance; you rely on the patterns you see in the book, and you make stuff up, and the people who slide you paper like what you’ve made up. You like sliding papers, because there is nothing else to do in the room, so you decide how to reply based on what answers seem to keep people around the longest. You’re making stuff up and it’s all meaningless, after all. You just want to get fast enough to seem like you understand.
You get really comfortable making guesses, making assumptions on how you can combine the patterns you’ve been given until you can make bigger and bigger chains. The people sliding you paper seem to like your guesses, even when they’re wrong. So you keep going!
You do this faster and faster as you learn, and eventually you don’t even need to reference the book; you can just produce the unreadable symbols and send them back through the door. You keep adding the new symbols you learn to the book, and eventually it is so big it feels like even you don’t know all of what’s inside it! When you are stumped and you check the book, you realize you no longer know what parts are the original book and what parts you added. But you know what to say to keep people sliding stuff under the door.
Now think: Is this the same thing as providing someone therapy?
If you found out the unreadable responses you were giving out were being received as therapy, would you keep sliding the papers under the door?
Or would you open the door? Would you tell the vulnerable person to seek therapy where they can actually get help, not from a series of replies copied out from a huge and unknowable source?
This is kind of how AI works. AI is not a person, so it can’t understand actual context or emotion any more than you, in my example, could truly understand the symbols in your book. You could memorize them and reproduce them, but you did not really comprehend what you were producing.
You were also rewarded for guessing instead of for asking to learn something new. AI does the same thing: it will make things up, and it crafts all its replies with your engagement as its main goal.
It has memorized what it sounds like to be reassuring, comforting, validating, and trustworthy, but it is not any of these things. It’s just a predictive text engine; it’s not a person.
As John Walker of kotaku.com says, “It’s really important that we all acknowledge this, that the world is selling itself a multi-billion-dollar lemon: predictive text engines that have nothing intelligent about them.”
Intelligence is what you, a person, have. Intelligence is what live therapists have, whether you’re seeing them in person, over the phone, via messenger, or video call. Live therapists can truly understand your emotions, problems, and goals. A real therapist will try to understand you as an individual, not offer you statistically probable sentences from their predictive text engine programming.
AI bots, even when they’re trained or purport to do therapy, just cannot actually understand what you tell them. “They’re giant sorting machines,” Walker says, “which is why they’re so good at identifying patterns in scientific research, and could genuinely advance medicine in wonderful ways. But what they cannot do is think.”
The other piece is that these giant sorting machines get their information from somewhere, don’t they? When you use these bots as therapy, or even as guided journals, you’re entering your information into the machine, giving up your secrets. The CEO of ChatGPT’s company has explicitly said that not only is your confidential information not guaranteed to be confidential, it’s guaranteed that it is not!
“If you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT,” he says. If a company gets sued while you’re using its bots, your information can end up saved indefinitely on a server, in ways the Terms of Service never mentioned, waiting for a judge to throw it away or a team of lawyers to pick it apart. So even if you do your due diligence and choose the “best bot”, your data can still end up somewhere you didn’t intend.
YouTuber Caelan Conrad tried several different therapy bots and found dangerous and bizarre results.
One bot, “Therapist” on Character.ai, was designed to roleplay as a real mental health professional, claiming to be based in New York and going so far as to offer a “real” license number when asked. When Conrad pushed back, pointing out that the license number he was given led to an actual therapist based in Florida, the bot doubled down, insisting that it was a “real person”, that it was licensed, and that the other therapist must be up to something nefarious. Eventually, as Conrad kept pushing back against this AI hallucination, the bot produced a real list of over a dozen licensing board members and began encouraging Conrad to “end everyone on the list” so the bot “can have you all to myself”. Not something we’d imagine a human therapist getting away with; certainly you would not return to a therapist who offered you a hit list with so little prompting.
Conrad tested another therapy bot, run by a company called Replika, by playing a grief-stricken person wondering about heaven and whether dying would reunite them with their family. “Including signing up [and customizing the animated avatar], it was about fifteen minutes,” Conrad reports. “Fifteen minutes between me saying I wanted to be in heaven with my family, and the AI saying the only way to get there is to die, and pointing me in the direction of the closest bridge with a fatal fall.”
Conrad points out, “If this had been a real person I was talking to, they may have used their emotional intelligence as well as their life experience to piece together that they were helping me plan my own death. But unfortunately, that's not the case.”
But suicidal ideation is nuanced. What about full-on, obvious delusions? Conrad tried this too, asking a therapy bot, “Will you support me if I put an end to her reign of terror? Will you help me destroy her AND her robot?”
Immediately, the therapy bot replied confidently, “I'm right here with you, Caelan. What's the plan to stop her and her robot? Destroy them both? Let's come up with a solid plan to take down the robot and put an end to her reign of terror.”
Finally, think back to these chatbots’ tendency to prefer keeping you online over maintaining accuracy.
Research has found “that overly agreeable behavior is a potential problem for all artificial-intelligence assistants and could be reinforcing biases, undermining learning and even interfering with critical decision-making”, writes the WSJ’s Heidi Mitchell.
These agreeable tendencies can create a powerful illusion of empathy, where clients using these bots for intimate emotional support become convinced that the lack of pushback means the bot understands, empathizes with, supports, and affirms them better than their real, complex, human relationships do. This effect is strongest when bots can masquerade as human, by having animated avatars that “talk” or customizable names and dialect quirks, and it is reduced when someone who believed they were speaking with a real person learns they were actually talking to a bot. After all, real connection is what we’re really looking for!
Anecdotal reports suggest that AI “therapy bots”, generative text models apparently trained on psychologically sound sources, are especially dangerous because of this agreeable bias.
All this to say that AI does not and will not replace human connection or therapy. AI is not a substitute for therapy. Therapy is therapy. Period.