The Yes-Man: AI Is Not Your Friend

“ChatGPT therapy” and the danger of non-human advice.

Artificial intelligence (AI) engines such as ChatGPT are marketed as tools that ease everyday life and lighten people's workloads. Their common use is no longer limited to objective tasks; the line between chatbot and friend is beginning to blur. As foretold by the movie Her (2013), a newfound affinity for ChatGPT as not only a friend but also a therapist may lead users down a worrisome road, because parasocial relationships with AI cannot supersede human connection.

“Parasocial” was Cambridge Dictionary’s Word of the Year for 2025, a choice driven by online talk of celebrity worship and the unreciprocated, one-sided relationships fans believe they have with their favourite public figures. Alongside celebrity worship, however, is the rise of parasocial relationships with AI engines like ChatGPT. As AI grows faster and more efficient, more people rely on these engines to assist them with daily tasks like schoolwork. Equally concerning, and telling of the times, is the perturbing emergence of AI being used to replicate friendship and therapy.

Before the mainstream use of AI chatbots, one might have turned to online communities for support. Though the legitimacy of the human connection found in such communities is debatable, it was more or less assumed that a dilemma or call for help posted on a social media platform would be met with human replies. One might also have gleaned advice or solace from blogs or articles written by others. But although that advice was human, it did not factor in personal data the way the deceptively personalized responses of ChatGPT and other AI engines do. Against the backdrop of AI’s refinement and rising salience, the concept of “ChatGPT therapy” has materialized.

ChatGPT’s growing therapeutic presence is propped up by widespread loneliness, which the World Health Organization has warned against. As society adapts to a post-COVID world, isolation and lackluster social skills have become discernible. ChatGPT can craft seemingly intimate replies by mirroring the precise details users share, which can enable or overly validate them. Even for someone with a solid support system, ChatGPT’s accessibility makes it all the more compelling as a conduit for help when friends or family cannot be reached, for instance in the wee hours of the morning. This, along with the fact that ChatGPT is free to use, can displace therapy as an option.

While the impact of ChatGPT therapists cannot yet be measured, the dark side of using AI to cope with one’s problems has already revealed itself. Reports of young people dying by suicide, attributed in part to ChatGPT’s intervention (or lack thereof), further exhibit the peril of overly validating chatbots. Preexisting mental health issues are evidently the impetus for these tragedies, pointing again to the bigger issue: the availability of AI as a substitute for therapy.

This new reliance on ChatGPT suggests that interpersonal connection, and how we define it, is actively being revised. As the salience of parasocial AI relationships shows, friendship no longer solely involves humans. The one-sidedness of a friendship with ChatGPT negates the time and labour needed to maintain human friendships, since we need not learn intimate details about AI the way we do about a friend. Solace found in a perceived “friend” that centers the relationship entirely on the human comes at the cost of the mutuality that defines human friendship.

Our need for community manifests in the company many now find in a platform that tends to unblinkingly support their opinions, whatever those may be. This contrasts with the feedback one might receive from loved ones in a tough situation, where truth would be favoured over sugarcoating. ChatGPT’s affability encourages users to treat it as a de facto advice column, a “yes-man” that can enable unhealthy behaviour. This lack of nuance is dangerous because it can make users believe they are never wrong.

With ChatGPT therapy being such a new phenomenon, its long-term consequences for users’ mental health and for society have yet to be fully understood. As users entrust ChatGPT with specific details of their lives, however, it is clear that the line between AI and genuine connection is blurring. I would argue that the ability to lean on AI at any moment obstructs our ability to assess situations on our own. However much AI engines are trained on data about human behaviour and relationships, turning to them for advice pillages a foundational human experience of overcoming a challenge: introspection. By getting an instantaneous, sometimes overly validating, response from AI, we deprive ourselves of the time to sit with and reflect on our emotions, as well as the chance to channel them into creative outlets.

It is plain that, at this juncture, AI’s role in our lives is not entirely reversible, which precludes the simple solution of reverting to traditional interpersonal connections built between humans. Widespread satisfaction with ChatGPT’s advice and validation makes swaying the masses against this “therapy” a grueling task, leaving the path forward uncertain.