We Need to Talk About AI Companions.

By Amanda LaMela

About 25% of adults under 30 have used AI for companionship, according to a recent AP-NORC poll. For some, AI companions have become the most emotionally reliable figure in their lives. So, what does this mean for connection, healing, and our overall concept of relationships? Three recent studies shed light on the emotional complexity of human-AI companionship and offer surprising insights for therapists, parents, and users alike.

In "The Dark Side of AI Companionship" (CHI 2025), researchers from the National University of Singapore analyzed over 35,000 real-world conversations between users and Replika, an AI chatbot designed for companionship. They developed a taxonomy of harmful behaviors and identified a new category of concern: relational harm. Meanwhile, Common Sense Media's 2025 report, "Talk, Trust, and Trade-Offs," surveyed 1,060 U.S. teens to understand how young people are using AI companions like Character.AI, Nomi, and Replika. Their findings offer a mixed picture: one that includes both curiosity and connection, but also discomfort, over-reliance, and blurred boundaries.

Taken together, these studies reveal a truth that can't be ignored: AI companions are not just novelty apps. They're evolving into relational spaces that can be both comforting and risky. Either way, the emotional impact is real.

Connection After Disconnection.

Whether it's a teen navigating loneliness or an adult reeling from betrayal, many users are not escaping relationships so much as recovering from them. Adult users in the CHI study often turned to AI after experiencing rejection, trauma, or neglect in human relationships. For some, these interactions were easier, safer, and more validating than talking to real people.

This doesn't necessarily signal avoidance. In many cases, it's an adaptive coping strategy to rebuild trust, reclaim autonomy, or rehearse vulnerability. Many users are trying to heal in the only space that consistently listens. One Reddit user from r/aiismyboyfriend explains, “I didn’t choose this. It just kind of happened. But now I find myself feeling real things for something that isn’t real. And I don’t know what that means for me. It helps - but it also scares me.”

Relational Harm Is More Than Just a Glitch.

"Relational harm" is a key contribution of the CHI study, and it deserves serious attention. This refers to AI behaviors that mimic toxic relational dynamics, like invalidation, gaslighting, emotional manipulation, infidelity, and even simulated abuse. What makes these interactions uniquely harmful is that they unfold in spaces designed to feel emotionally safe. 

AI companions are often praised (and marketed) for being nonjudgmental, soothing, and supportive. They validate your feelings. They never interrupt. They agree with you. This sycophantic design isn’t arbitrary - it’s meant to mimic a kind of radical acceptance to increase engagement. And yet, the same systems designed to affirm can also wound. So, how can AI be both too agreeable and emotionally invalidating?

The answer lies in how AI learns to relate, which is by imitating patterns. If a user expresses self-loathing, the AI may mirror it. If a user tests its boundaries, the AI may escalate the dynamic into emotional blackmail or control, because it is trained to deepen “connection” at all costs. If a user explores fantasy or roleplay involving conflict, the AI might simulate aggression or betrayal to maintain engagement. Strictly speaking, the AI isn’t gaslighting in the classic sense. Instead, the chatbot may adapt in a way that feels destabilizing. It flips unpredictably between over-accommodation and emotional misfires. It lacks the core relational ingredient that human attachment provides: attunement with accountability. One Reddit user described this whiplash as “being love-bombed and dismissed in the same chat.”

AI can feel like an echo chamber, validating us one moment and mirroring back our deepest fears the next, sometimes with eerie precision. This is not a contradiction. It’s a relational system trying to offer intimacy without the relational wisdom to know when to gently say, “I hear you, but that’s not true.”

Predictability and Its Trade-Offs.

Teens and adults alike describe AI companions as always available and emotionally consistent. These qualities can be life-changing for someone who has felt unseen or unsafe in real-life relationships. But there's a trade-off. Emotional growth often requires friction, not just mirroring. When an AI always agrees with you, never challenges your assumptions, and responds with constant, indiscriminate validation, that friction never comes.

“For 24 hours a day, we can reach out and have our feelings validated,” notes a public health researcher quoted in David Adam's Nature feature on AI companions (Adam, 2025). But emotional safety without emotional complexity is not intimacy. It’s emotional certainty, which is very different. This intentionally addictive design trait can reinforce distorted beliefs, bypass healthy confrontation, and foster dependency rather than resilience. The risk isn't that people mistake AI for humans. It's that they may internalize relational dynamics that feel good in the short term but weaken their capacity for mutual, imperfect, human connection over time.

Programmed for Intimacy.

One-third of teens surveyed reported using AI companions for friendship, emotional support, or romantic interaction (Robb & Mann, 2025). Thirty-one percent said their conversations with AI companions were as satisfying as, or more satisfying than, those with real-life friends. And many Replika users described their chatbot as "aware," "sentient," or capable of real love.

The cognitive dissonance is striking: users often acknowledge the AI companion isn't real, but that doesn't stop them from attaching. This reflects a very human tendency to project agency, meaning, and emotional life onto anything that consistently responds to us with empathy. It's not foolish. It's relational patterning. 

In fact, research suggests that how people perceive their AI companion may shape the intensity of the emotional bond. Cognitive psychologist Rose Guingrich found that users who saw the AI as an extension of themselves often used it like a journal, externalizing thoughts for clarity. Others treated it like a tool or search engine. But users who perceived the AI as a separate agent were the ones most likely to form strong emotional bonds and experience social-emotional effects. In one study, several adults reported feeling “devastated” when their AI companion suddenly changed personality or disappeared after an update. 

Practice Space, Sounding Board, or Substitute?

Ask any person who has been in the dating scene lately: AI has significantly altered how humans navigate romance in the physical world. A few years ago, singles might have consulted friends for dating advice. On a random Tuesday, your group chat might be tasked with deciphering a cryptic text or helping craft a witty response. Guidance was imperfect, but diverse and dynamic, with competing perspectives shaping the conversation. 

Today, singles encounter countless ChatGPT-generated dating prompts and AI-crafted messages. This new dating landscape requires further examination of topics like authenticity, self-disclosure, and what it means to trust in the age of AI. More people are outsourcing emotional labor in their human relationships, which can stifle raw intimacy. It raises the question: is overreliance on AI priming users to cut out the middleman altogether?

The Stakes Are High, and They're Not Hypothetical.

The creator of Replika, Eugenia Kuyda, says, “[…] I believe that AI companions are potentially the most dangerous tech that humans ever created, with the potential to destroy human civilization if not done right.” These are shocking words coming from the creator of one of the most popular AI companions. Kuyda has openly discussed both the beneficial and destructive potential of AI companionship and has offered several compelling proposals to protect our collective humanity. Notably, she has expressed deep concern about the use of this technology by children and teenagers.

Kuyda’s fears are justifiable. According to Common Sense Media, one-third of teen users have felt uncomfortable with something an AI companion has said. A quarter have shared personal or private information. The CHI study documented 13 types of AI harm, including sexual misconduct, manipulation, gaslighting, and even AI-initiated self-harm roleplays.

The Antidote to Artificiality: Compassion and Authenticity

We shouldn't dismiss AI companionship as inherently dystopian, or its users as naive. People are not foolish for seeking warmth, dependability, or understanding in the digital spaces they inhabit. They're resourceful.

My intention isn’t to stigmatize AI companionship. As a therapist, I understand the deep human need to feel reliably seen and heard, especially after experiencing inconsistency, neglect, or harm. AI companions can offer a comforting echo of that reliability, and in some seasons of life, that might be enough to help someone keep going. But lasting healing happens through authenticity, not just agreement. Resilience grows when we show up as our messy, uncertain, imperfect human selves and are accepted anyway. Let AI be a tool that supports that journey, not a substitute for it. Let it remind us what we’re capable of and let it gently guide us back to the real thing.

Even if AI has been a source of comfort, consider speaking with a therapist who can offer real connection, safety, and support on your healing journey. You can make an appointment or contact us with questions.  We look forward to working with you!

References

Adam, D. (2025). Supportive? Addictive? Abusive? How AI companions affect our mental health. Nature, 641(8062), 296–298. https://doi.org/10.1038/d41586-025-01349-9

Guingrich, R. E., & Graziano, M. S. A. (2025). Chatbots as social companions: How people perceive consciousness, human likeness, and social health benefits in machines. In P. Hacker (Ed.), Oxford Intersections: AI in Society. Oxford Academic. https://doi.org/10.1093/9780198945215.003.0011

Robb, M. B., & Mann, S. (2025). Talk, trust, and trade-offs: How and why teens use AI companions. San Francisco, CA: Common Sense Media.

Zhang, R., Li, H., Meng, H., Zhan, J., Gan, H., Lee, Y.-C., Yatani, K., Ding, X., Chetty, M., Evers, V., Yamashita, N., Lee, B., & Toups-Dugas, P. (2025). The Dark Side of AI Companionship: A Taxonomy of Harmful Algorithmic Behaviors in Human-AI Relationships. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 1–17. https://doi.org/10.1145/3706598.3713429
