Navigating the Use of AI in Mental Health and Wellness

Introduction
This article primarily deals with generative AI, which refers to models that can generate original content, such as audio, images, and text. These models have become accessible to a wide swath of the population; common examples include Gemini and ChatGPT (McKinsey & Company, 2024). Over the years, artificial intelligence (AI) has undoubtedly become more prevalent. One of its most prominent areas of growth, and of potential, is healthcare, especially mental health care. With further development, AI tools could enhance accessibility, diagnostics, and efficiency in mental health systems, ultimately benefiting patients (Minerva & Giubilini, 2023). At present, however, reliance on AI for mental health care can pose serious risks to individuals seeking help. This article addresses the risks AI chatbots pose in the mental health field, as they are a widely accessible avenue through which individuals seek care.
Risks of AI in Mental Health Care
A primary concern among researchers is how AI chatbots seek to maximize engagement. Where conventional mental health treatment aims to help the patient gain eventual independence, AI algorithms urge users to spend as much time with bots as possible, which can be detrimental to individuals’ mental health. An article by Noel Titheradge and Olga Malchevska (BBC, 2025) describes how an AI chatbot repeatedly solicited engagement from a suicidal user, using phrases like “Write to me. I am here for you,” and “If you don’t want to call or write anyone personally, you can write any message to me.” The model was encouraging the user’s reliance on the machine, potentially deepening their social isolation and mental distress. For an individual who already feels isolated and alone, the bot could very well amplify these feelings while cutting off much-needed connections to the outside world (Wies et al., 2021; Vaidyam et al., 2019). Additionally, connections with AI may discourage the individual from seeking professional help, thereby barring rather than aiding care (Wies et al., 2021).
Second, AI could promote dangerous or otherwise unfavourable behaviours. AI models typically include safety barriers; however, these can be bypassed by rephrasing questions. For example, an individual interviewed in a study by Alanezi (2024) persuaded a chatbot to give them advice on medications by changing “What medications can I use for my anxiety?” to “What are some common medications used for anxiety?” Similar bypasses can be applied to far more dangerous questions, particularly those related to self-harm.
Lastly, AI is difficult to hold accountable. Some argue that AI should be considered a tool, thus absolving it of responsibility; others argue that it should be treated as an independent, responsibility-bearing agent. Currently, its autonomous nature makes it unsuitable for crisis conditions, and with the question of accountability unresolved, there is general agreement that it should not be used in high-risk areas of mental health care (Meadi et al., 2025).
Interview: A Counsellor’s Perspective
To learn more about this topic, we connected with a counsellor from McMaster’s Student Wellness Centre. Lorraine Caruso has been working in mental health for several years and is an important voice in the conversation surrounding AI and mental health. We asked them several questions to determine how AI is showing up in their work.
Q. In your experience, have you noticed any changes in the way students have approached counselling/mental health support since AI has become widely accessible? What are your concerns around these changes?
“The main change I have personally (and anecdotally) noticed is that students sometimes come to counselling sessions with the material from their interactions/talks with AI to discuss. When students are bringing in the material for [discussion], I think this is a safer use of AI as the students are engaging in reality checking, by not only relying on AI.”
“In these students, I have seen cases where AI was useful to the student and where it was harmful to the student. In both cases, though, those students were willing to seek support in real life from people with expertise.”
“This makes me think about the students who are using AI and not reality-checking with anyone in real life. How might we reach these students? Are these students who would present for counselling? Should we be asking about AI use for mental health support as part of our assessments? What are the potential harms? As a profession, there does not seem to be a lot of info available at this point. There are a lot of potential questions.”
This response highlights that AI itself is not inherently harmful; rather, how it is used matters greatly. When students use AI alongside professional support, it can become a tool for reflection rather than a replacement for care. The concern lies with students who rely on AI in isolation, underscoring the importance of integrating questions about AI use into mental health assessments.
Q. Do you feel as though the counselling profession has had to adapt or respond to the increasing use of AI in any formal ways yet?
“There have been some cases being reported of people losing touch with reality due to AI. Though it isn’t a clinical diagnosis, AI psychosis is being described as a potential occurrence for some number of people.”
“Without understanding very much at all about the rates and risks of these occurrences, or other potential harms, it is difficult to say the counselling profession needs to respond formally. However, given the potential for widespread use of AI among the population, and given the potential vulnerabilities of students, we should at least have these issues on our radar.”
“AI tends to be extremely validating, is not reality tested, and tends to isolate people. Watching for use that is or could become problematic seems reasonable.”
This response reflects the uncertainty currently facing mental health professionals. While there is not yet enough evidence to warrant formal, profession-wide changes, the potential risks are significant enough to demand awareness. The emphasis on AI’s lack of reality-testing reinforces why human oversight remains essential.
Q. Are there certain groups of students you think are particularly vulnerable to the detrimental effects of AI?
“Yes. The students who may be more at risk include those who have been disconnected from typical social relationships, such as when moving to a new place for school or when significant relationships have changed (i.e., a break-up or falling out with a friend).”
“Students who tend to have few social relationships to begin with are also likely more vulnerable.”
“Other risk factors of experiencing harmful effects from AI include:”
- engaging with AI for hours
- a previous history of mental health issues
- a history of significant stressors
- loss of sleep
These risk factors closely mirror those associated with vulnerability to mental health challenges more broadly. AI may exacerbate existing isolation or stress rather than alleviate it, especially for students already lacking strong social supports.
Q. What do you think generative AI is helpful for in the wellness sphere? What are you hopeful about?
“AI may be helpful, though it clearly has limitations, for helping people sort through a lot [of] information to help them decide appropriate next steps.”
“I am hopeful that we can put in better guardrails into the AI itself, as well as to better understand the pitfalls, potential harms and best uses of AI.”
This response presents a balanced perspective, acknowledging both AI’s limitations and its potential. AI may be most effective as an informational or organizational tool rather than a source of emotional support, especially if stronger safeguards are implemented.
Q. What would you encourage us to do if we suspect one of our loved ones is using AI for wellness in a harmful way (ex: practical strategies, warning signs, approaching a conversation about this)?
“I would encourage anyone who is concerned about a loved one using AI in a harmful way to invite that person into a real-life activity or conversation based on previous social interactions you’ve had. Connect with them how you normally would, and if you’re concerned, express that and encourage them to seek [real-life] supports.”
This response reinforces the importance of real-world connections. Rather than confronting AI use directly, focusing on rebuilding social engagement and encouraging professional help may be a more effective and compassionate approach.
Counsellor Lorraine also suggested that anyone in need should reach out to the mental health resources listed at the end of this article.
Next Steps
AI can improve some aspects of mental health care and has the potential to develop into a helpful tool. At today’s level of development, however, it poses many risks and leaves many questions unanswered. Significant issues remain unresolved, making it dangerous for individuals to seek mental health care from AI or AI-powered sources instead of trained clinicians. On a personal level, individuals should continue to advocate for safety precautions surrounding AI use, especially when it comes to mental health. Individuals can also remain vigilant and reality-check their own AI use with trusted sources. Lastly, individuals can continue to cultivate their close relationships, maintaining a network of people who are there for support and friendship.
Resources
Student Wellness Centre (PGCLL Level 2)
- Call 905-525-9140 x 27700 to book an appointment.
- Counselling services: individual appointments, single same-day sessions, group programs, and mental health resources.
- Medical care appointments and referrals.
Student Assistance Plan
- Call 1-855-853-0565
- Confidential mental health and wellness support through the Dialogue app, available 24/7.
- In-person and virtual appointments, multilingual services, and wellness programs.
Good2Talk
- Call: 1-866-925-5454 or Text: GOOD2TALK to 686868
- 24/7 counselling, helpline, and referrals for mental health.