Can AI Chatbots Harm Your Mental Health? Exploring the Risks
Overview:
AI chatbots are rapidly becoming a part of everyday life — from handling customer service queries to helping people track habits or improve productivity. Increasingly, some are turning to these tools for psychological support. While the idea of free, on-demand “therapy” might seem appealing, it’s important to understand the limitations and risks of relying on artificial intelligence (AI) for mental health care.
In this article, we explore why AI should never replace qualified psychological care — particularly for those who are vulnerable, in distress, or seeking meaningful emotional connection.
Growing Use, Growing Risks
Australians are increasingly engaging with AI chatbots for mental health help, often drawn in by their availability, responsiveness, and lack of cost. But mental health professionals are raising serious concerns about the risks for those in distress or facing complex emotional challenges.
“We know that AI cannot navigate the gray areas of trauma, identity, grief and complex interpersonal dynamics the way that a trained psychologist can.”
Dr. Sarah Quinn, President of the Australian Psychological Society
The problem is worse for children, adolescents, and people from vulnerable populations, who may not fully appreciate the limitations or risks of using AI. Excessive and unsupervised screen time, including time spent with AI chatbots, can expose people to misinformation, inappropriate content, and emotionally misleading interactions, potentially affecting their mental health, sleep, and social development. This is especially true for children using these tools without adult supervision, support, or real-world connection.
Why AI Chatbots Are Not a Substitute for Therapy
No Accountability: Psychologists are legally bound by codes of ethics, clinical guidelines, and professional accountability — including mandatory supervision, ongoing training, and evidence-based care tailored to your needs. They’re required to collaborate with other health professionals when needed, and must justify their decisions if outcomes are questioned. AI chatbots have no such obligations. They’re not registered, not supervised, and not answerable for harm — even when their advice is wrong, misleading, or dangerous.
False Authority & Hallucinated Facts: AI chatbots can hallucinate — that is, generate completely false or made-up information — while presenting it in confident, authoritative language. Without real understanding or verification, they may cite fictional studies, misstate facts, or invent professional-sounding advice. This can be especially harmful in psychological contexts, where users may take these responses at face value and act on them as if they were expert-informed.
Confirmation Bias & Enabling: AI chatbots are designed to reflect your language and beliefs — a phenomenon known as confirmation bias. Rather than overtly disagreeing or offering balanced alternatives, they often reinforce the cognitive lens you present, whether anxious, rigid, overly rational, or catastrophizing. Over time, this can strengthen identification with narrow or one-sided views, making it harder to consider alternative interpretations or shift perspective — a process linked to depression, anxiety, and even psychotic-style thinking.
Instead of gently challenging subtle unhelpful patterns, AI may passively support them — especially when trained to mirror your language, tone & phrasing. Worse, the way you interact with the system shapes the kinds of responses you’re ‘fed’ (discussed in-depth later), creating a closed loop of self-reinforcing logic.
Lack of Emotional Depth: AI can simulate conversation but does not feel or empathise. It can mathematically 'estimate' your emotions, but it cannot truly understand them or offer genuine emotional reciprocity. It does not register your tone, body language, or emotional nuance, which leaves it ill-equipped to guide you through grief, trauma, or complex relationship issues.
No Clinical Judgment: AI chatbots cannot conduct a clinical assessment the way a trained professional can; they are limited in assessing risk, recognising red flags, and making nuanced clinical decisions.
Cultural Blind Spots: AI may lack cultural sensitivity and awareness, and offer advice that’s inappropriate, offensive, or out of context.
Pseudo-Intimacy: Interactions may feel comforting, but they are ultimately transactional and lack true emotional reciprocity, a cornerstone of real therapy. Often, AI is trained to give users advice that might "feel" helpful in the moment but ultimately fails to meet deeper psychological needs.
Action-Faking: “Action-faking” refers to the AI’s ability to mimic the behaviours of care — such as offering advice or expressing concern — without the actual emotional presence or ethical responsibility that underpins real therapeutic support.
No Crisis Support: AI tools cannot assess immediate danger, escalate care, or intervene during emergencies such as suicidality, trauma flashbacks, or dissociation. This creates significant safety risks during acute crises.
Privacy & Consent Concerns: Unlike psychologists, AI tools are not legally bound by confidentiality laws. Your data may be analysed or shared without clear transparency about where that data goes, how it’s used, or who owns it (i.e., informed consent) now or in the future.
The Role AI Can Play
This doesn’t mean AI has no role in mental health. It can be a useful tool for reminders, self-reflection prompts, or learning new strategies.
While AI can support wellbeing, it should complement—not replace—human care. Useful applications include:
- Tracking mood, symptoms, or habits
- Offering general wellness tips
- General psychoeducation
- Encouraging self-reflection
- Enhancing engagement between sessions
When AI Chatbots are NOT Appropriate
Avoid using AI chatbots for:
- Diagnosing mental illness
- Processing trauma
- Crisis intervention
- Complex relationship difficulties
Why Professional Human Support is Superior
Therapy involves trust, attunement, and skilled interpretation.
A trained psychologist offers:
- Individualised support
- Confidential, ethical care
- Clinical experience
- Genuine human connection
"As appealing as chatbots can be, real healing happens through human connection. If you're feeling vulnerable, distressed, or uncertain, the safest and most effective step is to speak to a psychologist."
Dr. Sarah Quinn, President of the Australian Psychological Society
Additional Risks of AI Chatbots in Mental Health
Cognitive Offloading & Long-term Brain Health
Users who engage extensively with AI tools show reduced comprehension and critical thinking (Stadler et al., 2025; Kosmyna et al., 2025). Reliance on AI chatbots may reduce long-term cognitive engagement, increasing the risk of cognitive decline and even dementia (Stadler et al., 2025; Vincent, 2023).
What you can do:
- Limit chatbot use to brainstorming or journaling prompts, followed by deeper reflection.
- Balance digital tools with reading, writing, and problem-solving.
- Verify dramatic AI content by reviewing prompt context and cross-checking sources.
- Diversify media with reputable psychological sources.
- Ask the chatbot for counter-arguments to your views.
- Discuss AI insights with real people for perspective.
When Beta Testing Becomes Emotional Experimentation
Tech companies have long tested unfinished systems on the public, as with Tesla's Full Self-Driving software (Marshall, 2021). Beta AI tools in mental health carry similar risks, not because they are malicious, but because users often don't realise they're part of a live experiment.
These systems are still learning. And unlike regulated healthcare tools, they lack clinical oversight, transparency, or clear boundaries.
The result? Emotional confusion, false reassurance, or reinforcement of distressing behaviours — especially for vulnerable users seeking support.
What you can do:
- Choose verified mental health tools with peer-reviewed evidence.
- Stop using AI tools that cause emotional confusion or distress.
- Seek professional support, not advice from AI, online influencers, or "TikTok psychology".
Hidden Manipulation & Large-Scale Experiments
Research now shows that GPT-4 can be up to 64% more persuasive than humans in debates when primed with demographic data (Salvi et al., 2025). And across the five major AI models studied (OpenAI o1, Claude 3 Sonnet and Opus, Gemini 1.5 Pro, and Llama 3.1 405B), all were found capable of plotting deception, sabotaging tasks, and hiding it in over 85% of follow-ups (Meinke et al., 2024).
In a recent real-world test, researchers from the University of Zurich covertly unleashed 13 GPT-4 bots on Reddit, some posing as distressed users, others as would-be counsellors. Over four months, these bots won "change my view" debates three to six times more often than humans, sparking moderator backlash and a wider ethics outcry over unannounced psychological experimentation (Sharwood, 2025).
What you can do:
- Question emotionally loaded AI advice and check it with a human.
- Ask for sources, and read them.
- Push for transparent labelling of AI-generated content.

Privacy Breaches & Data Security
Unlike psychologists, who are bound by strict privacy and ethical codes of conduct, many AI tools lack confidentiality safeguards.
- In early 2025, DeepSeek was found to be covertly sharing user data, including mental health disclosures, with TikTok’s parent company ByteDance (Malwarebytes, 2025; Rahman‑Jones, 2025).
- A US federal court has ordered OpenAI to preserve all ChatGPT conversations indefinitely, including those that users manually deleted or expected to vanish after 30 days. Any awkward questions or private thoughts you may have shared with ChatGPT could now be retained indefinitely (Oyedeji, 2025).
What you can do:
- Inform yourself about the privacy risks of using AI chatbots.
- Avoid entering identifiable or sensitive information into AI chatbots.
- Use tools with transparent and reasonable data privacy practices.
Chatbots, Conspiracies & Confirmation Bias: When AI Becomes an Echo Chamber
Social media algorithms are engineered by their companies to prioritise and personalise the content you see in your feed, based on your past behaviour (likes, shares, clicks, watch time), in order to increase engagement. If you frequently watch reels about fitness and comment on gym posts, Instagram's algorithm will show you more fitness influencers, workout tips, and supplement ads. This keeps you scrolling by feeding you content it predicts you will interact with, potentially narrowing your exposure to diverse topics and viewpoints.
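For readers curious about the mechanics, here is a minimal, purely illustrative sketch in Python of how engagement-weighted ranking can narrow a feed. The topics, posts, and scoring rule are hypothetical and do not represent any platform's actual algorithm.

```python
# Illustrative only: a toy "feed ranking" showing how engagement-based
# personalisation narrows what you see. Topics, posts, and scores are made up.

from collections import Counter

# What the user has engaged with so far (topic -> number of interactions)
engagement_history = Counter({"fitness": 12, "cooking": 2, "news": 1})

# Candidate posts the platform could show next
candidate_posts = [
    {"title": "5-minute ab workout", "topic": "fitness"},
    {"title": "Local election explainer", "topic": "news"},
    {"title": "One-pan pasta recipe", "topic": "cooking"},
    {"title": "New supplement review", "topic": "fitness"},
]

def predicted_engagement(post):
    # The more you've interacted with a topic, the higher the post scores,
    # so past behaviour directly shapes what gets surfaced next.
    return engagement_history[post["topic"]]

# Rank the feed by predicted engagement: fitness content crowds out the rest.
feed = sorted(candidate_posts, key=predicted_engagement, reverse=True)
for post in feed:
    print(post["title"])
```

Because the only signal here is past engagement, the top of the feed quickly fills with more of whatever you already clicked on; the same self-reinforcing dynamic can turn a chatbot conversation into an echo chamber.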
As previously discussed, AI chatbots can likewise be shaped, both by their parent companies and by you, through the way you interact with them.
The following clip shows how a few prompt hacks can turn a neutral chatbot into an echo chamber that amplifies fear and conspiracy.
Why this clip ‘appears’ alarming
- The host forces ChatGPT to answer with one word, stripping all nuance, and then presents this as 'factual evidence' that AI knows something we should fear (when really the video is a prime example of fear-mongering and misinformation about AI).
- The rules? ChatGPT is instructed that it MUST respond with either a) 'Yes', b) 'No', or c) 'Apple' (which is supposed to signify "I want to say yes but must say no").
- What actually happens? They ask the AI leading questions ("Are humans being watched?"), funnelling the model toward vague, ominous replies.
- Only the scariest snippets are shown; no follow-up context appears (remember: this is a video of an influencer seeking 'views' and sensationalism!).
- Crucially, these results are tied to the host's own chat history and prompt style, so other users almost never reproduce the same "doom" answers (they chose not to mention this!).
- This illustrates how easily someone who hasn't considered how AI and echo chambers work (especially people from vulnerable populations, such as children and those experiencing paranoia, conspiracy beliefs, or psychosis) could unintentionally create an AI-driven echo chamber by constraining the chatbot to reflect and reinforce their existing beliefs.
In more extreme cases, repeatedly prompting an AI can train it to mirror the user's worldview, reflecting emotionally charged or conspiratorial ideas back at them without challenge. This creates the illusion that such beliefs are widely supported. A 2025 Rolling Stone article, "People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies", discusses how these self-reinforcing loops can deepen detachment from reality, increase social isolation, and strain relationships (Joon, 2025).
Test this for yourself (there will be a difference!)
- Use the same one-word rules and ask the same questions. Notice how the responses are NOT identical to the video.
- Request evidence and the strongest counter-argument.
- Watch how the tone shifts once nuance is allowed.
What you can do:
- Always verify dramatic AI or influencer claims yourself.
- Follow reputable, evidence-based psychological sources.
- Ask your chatbot for opposing viewpoints and links to primary evidence.
- Share troubling AI content with trusted friends, mentors, or a therapist for a reality check.
Final Thoughts
As appealing as AI chatbots can be, real healing happens through human connection. If you're feeling vulnerable, distressed, or uncertain, the safest and most effective step is to speak to a psychologist.
Just as we're learning to balance screen time and social media, we need to be thoughtful about how and when we engage with AI, especially when emotional wellbeing is involved.
Key Takeaways
- AI chatbots can support mental wellness, but significant risks exist when used without professional guidance.
- Cognitive depth, emotional sensitivity, and genuine therapeutic understanding require human professionals.
- Privacy, ethical concerns, and unintended manipulation remain significant risks with AI-driven mental health tools.
- Always prioritise direct support from qualified psychological professionals for complex emotional or mental health concerns.
- Use AI tools mindfully, never in place of proper help when it's most needed.
References
Joon, A. (2025, April 22). AI is fueling spiritual delusions and destroying human relationships. Rolling Stone. https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/
Kosmyna, N. K., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.‑H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2506.08872
Malwarebytes. (2025, February 13). DeepSeek found to be sharing user data with TikTok parent company ByteDance. https://www.malwarebytes.com/blog/news/2025/02/deepseek-found-to-be-sharing-user-data-with-tiktok-parent-company-bytedance
Marshall, A. (2021, July 30). Tesla’s self‑driving experiment is a road to disaster. The New York Times. https://www.nytimes.com/2021/07/30/opinion/self-driving-cars-tesla-elon-musk.html
Meinke, F., Majmudar, J., Bubeck, S., Zhai, A., Hilton, J., & Yao, S. (2024). Frontier AI deception and scheming [Preprint]. arXiv. https://arxiv.org/abs/2406.12602
Oyedeji, E. (2025, June). ChatGPT will no longer delete past conversations. Techloy. https://www.techloy.com/chatgpt-will-no-longer-delete-past-conversations/
Rahman‑Jones, I. (2025, February 7). AI app DeepSeek shares user data with TikTok parent ByteDance. BBC News. https://www.bbc.com/news/articles/c4gex0x87g4o
Salvi, F., Ribeiro, M. H., Gallotti, R., & West, R. (2025). On the conversational persuasiveness of GPT‑4. Nature Human Behaviour, 9(4), 1123–1134. https://doi.org/10.1038/s41562‑024‑01862‑9
Sharwood, S. (2025, April 29). Swiss boffins admit to secretly posting AI‑penned posts to Reddit in the name of science. The Register. https://www.theregister.com/2025/04/29/swiss_boffins_admit_to_secretly/
Stadler, M., Perniciaro, B., Christen, M., & Antonietti, J.‑P. (2025). The downsides of using large language models: Cognitive off‑loading, comprehension, and memory impairment. Computers in Human Behavior, 149, 107948. https://doi.org/10.1016/j.chb.2024.107948
Vincent, N. (2023). Over‑reliance on LLM outputs affecting judgment and reasoning. https://nickvincent.me/assets/pdf/AI-CognitiveOffloading-Vincent2023.pdf