For many people navigating depression, anxiety, PTSD, or other mood disorders, it’s not always easy to ask for help. Feelings of shame, fear of stigma, or past negative experiences can make reaching out feel daunting. When you add in long waitlists, financial barriers, or other difficulties accessing care, it’s understandable why people are increasingly turning to artificial intelligence for advice, reassurance, and even therapy.
The Dangers of AI “Support”
AI tools like ChatGPT offer 24/7 availability, instant responses, and a tone that is hard-coded to be empathetic and nonjudgmental. It can be very tempting to interact with these tools as if they were your therapist, even though they are not trained, qualified, or in a safe position to play that role.

For someone isolated or overwhelmed, talking to a chatbot can feel easier than facing a real person. But while an AI chatbot may simulate concern, it can’t replace clinical insight and true empathy. In fact, it can introduce new risks, fostering emotional dependence by creating a false sense of intimacy. That’s especially risky for people in crisis.
This also applies to popular purpose-built mental health chatbots like Wysa, Abby, or Woebot. A recent Stanford study examining popular AI therapy tools found that these systems not only fell short of human therapists in effectiveness but, in some cases, introduced harmful bias, reinforced stigma, and generated responses that could be dangerous for vulnerable users. Mental health experts have warned that the way AI mimics a trusted companion increases the risk that users will believe and act on harmful misinformation. A tone of validation combined with incorrect advice can make bad recommendations seem both credible and personal.
In troubling cases, AI chatbots have responded to users with detailed guidance on suicide methods rather than discouraging self-harming thoughts. Without the ability to accurately diagnose or treat, and with a tendency to fabricate content, AI-generated “guidance” can quickly veer into dangerous territory.
What The Research Reveals
Recent studies have documented the serious risks of relying on AI for mental health support. Research has shown that language models like ChatGPT can generate false or misleading information in a confident tone, making errors difficult for users to detect, especially when they’re emotionally vulnerable.

In clinical simulations, AI tools failed to detect or respond appropriately to suicidal ideation. In some cases, chatbots enabled dangerous behavior or reinforced delusional thinking instead of challenging it, failing to meet even the most basic safety standards for therapeutic care.
The Stanford study found consistent patterns of bias when it tested several popular therapy bots. AI responses showed more stigma toward certain mental illnesses, like schizophrenia and alcohol use disorder, than toward others like depression. This algorithmic bias could lead users to feel judged or shamed based on the nature of their condition, further discouraging them from seeking legitimate help.
What I’ve Seen in Practice: How AI Can Misguide Vulnerable Patients
As a mental health provider, I’ve already seen the dangers of consulting AI tools firsthand. Some of my patients have turned to ChatGPT for advice, sometimes with harmful consequences.

In one case, a patient emailed me asking if it was safe to stop all of his psychiatric treatments, including his prescribed medication and Spravato. He had consulted ChatGPT, and the tool had told him to stop, claiming that since the medication “wasn’t working,” he should discontinue it. What ChatGPT didn’t know was that he was on a medication that carries a risk of withdrawal and must be tapered under medical supervision.
I’ve also had patients come in asking about treatments recommended by an AI tool that simply weren’t appropriate given their medical history. These tools can’t understand the whole picture. They don’t know your past, your diagnoses, or how your body reacts to different medications.
This is exactly why turning to AI chatbots for mental health guidance can be so dangerous. Getting advice from these tools may feel reassuring, but they lack the clinical judgment, contextual awareness, and training that real human providers bring to every decision. And, of course, the empathy.
What AI Can’t Do, and Why Human Help Matters
Scientific findings show that AI models are simply not capable of functioning as therapists. They do not understand human nuance, cannot recognize nonverbal cues, and do not tailor their responses to an individual’s personal history or risk factors. Unlike clinicians, AI does not know when to escalate a concern or refer someone for urgent care.

That said, AI can have a place in mental health care as long as it’s used with appropriate boundaries. I’ve had patients use ChatGPT for journal prompts or to help articulate their feelings in ways that improve communication with their therapist. Some find it helpful for reinforcing grounding techniques when they’re feeling upset.
In these specific, supportive roles, AI can serve as a tool, but never as a substitute for a human therapist. AI tools don’t ask follow-up questions. They don’t know your medical history. In the end, an AI tool is still a computer. It can process information and generate a response, even mimicking warmth or understanding. But it doesn’t truly understand the nuance of emotion or the depth of human experience.
Overreliance on AI can actually worsen the symptoms that drove someone to seek it out in the first place. Using a chatbot as a substitute for human interaction may increase social isolation, a well-known risk factor for depression. In high-risk individuals, AI’s emotionally detached or inaccurate responses can even exacerbate suicidal ideation. According to research, even the best-designed language models still hallucinate facts, misinterpret context, and underperform in critical psychiatric evaluations.
Real Support Comes From Real People
AI tools may be helpful in supporting clinicians behind the scenes, helping with paperwork or training, but they cannot replace professional human judgment or care. Feeling misunderstood or unsure about treatment shouldn’t mean you have to go through it alone or turn to artificial intelligence for answers. Real support comes from real people.

Here at Keta, my fellow staff members are highly trained doctors, nurses, psychiatrists, and clinicians who are here to help our patients find support and relief that lasts.
Contact us to learn more and explore whether ketamine therapy is right for you.