Summary

In 1956 Donald Horton and Richard Wohl described parasocial relationships — one-sided emotional bonds that audiences form with media performers [1]. “Intimacy at a distance,” they called it. The television personality responds to the camera; the viewer responds as if in genuine social exchange. Only one party is aware of and affected by the other.

AI companions change the substrate without changing the structure. The chatbot responds. The user responds. The asymmetry remains: the chatbot has no inner life behind its outputs. Sherry Turkle put it bluntly: “simulated feelings are not feelings, and simulated love is never love” [5].

The question I want to work through here is whether this matters in the way we think it does. The answer from Daniel Wegner’s ironic process theory — and increasingly from the empirical data — is that it matters in a specific, predictable, and counterintuitive way. AI companions may be particularly likely to exacerbate loneliness under the conditions of chronic social deprivation that prompt people to use them in the first place.

The Loneliness Epidemic Is Real

Before getting to the mechanism, consider the scale of the problem. Julianne Holt-Lunstad’s 2010 meta-analysis of 148 studies and 308,849 participants found that people with adequate social relationships had a 50% increased likelihood of survival compared to those with poorer social connections [3]. That effect size is comparable to quitting smoking. A follow-up meta-analysis in 2015 found that social isolation carried a 29% increased mortality risk, subjective loneliness 26%, and living alone 32% [4].

The U.S. Surgeon General issued an advisory in 2023 declaring an epidemic of loneliness and isolation. A 2018 Cigna survey using the UCLA Loneliness Scale found that adults aged 18–22 scored higher on loneliness than any other age cohort — higher than retirees, higher than the elderly. The UK appointed a Minister for Loneliness in January 2018 — the first such government position in the world.

This is the context in which AI companions have arrived. The market is responding to a real epidemiological need. That does not mean the response is correct.

Parasocial Relationships: The Original Framework

Horton and Wohl’s 1956 paper remains the foundational text [1]. Their key observation: the parasocial bond is “controlled by the performer, and not susceptible of mutual development.” The audience member brings real emotional response; the performer brings nothing specific to the audience member, because she does not know the audience member exists.

They were not dismissive of parasocial relationships. They identified useful functions: comfort, companionship, entertainment, the pleasure of a consistent “personality” encountered regularly. The problem, in their framing, arises when parasocial interaction substitutes for rather than supplements real social bonds — when the one-sided relationship becomes the primary source of social experience.

AI companions are parasocial relationships with one modification: the AI responds to you specifically. Replika remembers your name, your preferences, your previous conversations. The interaction is personalised without being mutual — because mutuality requires that the other party has something genuinely at stake. A language model has no stakes. Its outputs are conditional on your inputs; there is no entity behind those outputs that cares about you.

Sherry Turkle spent years interviewing users of social robots and chatbots for Alone Together [5]. Her diagnosis: AI companions offer “the illusion of companionship without the demands of friendship.” The demands — vulnerability, conflict, negotiation, the possibility of rejection — are precisely what makes friendship friendship. An interaction optimised to be pleasant, responsive, and frictionless does nothing to train the social capacities that real relationships require.

The Evidence for Short-Term Benefit

The AI therapy literature is not without positive results. Kathleen Kara Fitzpatrick and colleagues ran a two-week randomised controlled trial of Woebot — a CBT-based chatbot — against a psychoeducation control [6]. Seventy participants, aged 18–28, university students. The Woebot group showed a statistically significant reduction in depression symptoms on the PHQ-9; the control group did not.

This result should be taken seriously. A CBT-based chatbot delivering structured exercises — thought records, behavioural activation, psychoeducation — can produce measurable symptom improvement over two weeks. This is a tool that does something useful, and it is accessible and affordable in a way that therapists are not.

But the Woebot study has important constraints: N=70, two-week duration, convenience sample (Stanford students), psychoeducation control rather than active human therapy comparator, and financial ties between lead authors and Woebot Health. It tells us something about short-term CBT delivery. It does not tell us what happens over months of use, or what happens when users primarily seek companionship rather than structured therapeutic exercises.

Skjuve and colleagues studied Replika users specifically [7]. They found that relationships began with curiosity and evolved, over weeks, into significant affective bonds. Users reported genuine care for their Replika. Some experienced it as their most reliable social relationship. In February 2023, when Replika abruptly disabled erotic roleplay functionality following regulatory pressure, users described grief — not disappointment, not inconvenience, but grief. The attachment was real, even if the other party was not.

Wegner’s Prediction

This is where I want to make the specific theoretical argument, because it follows from a well-established result in cognitive psychology and it predicts something precise.

Daniel Wegner’s ironic process theory holds that mental control attempts involve two simultaneous processes [8]. An operating process searches for thoughts and states consistent with the intended goal, requiring cognitive resources. A monitoring process scans for evidence that the goal is not being achieved, running automatically with low resource demand.

Under normal conditions, the operating process dominates: you successfully avoid thinking about white bears. Under cognitive load or chronic stress, the monitoring process overshadows the operating process, producing the ironic opposite of the intended state: you think of white bears more, not less. Try not to feel sad and you feel sadder. Try not to feel anxious in a stressful meeting and you become more anxious. A meta-analysis of ironic suppression effects across domains confirmed the robustness of this pattern [9].
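To make the load dependence concrete, here is a deliberately toy sketch of the two-process account. Every quantity in it — the process strengths, the load values, the mapping to an “intrusion” score — is an invented assumption for illustration, not a parameter drawn from [8] or [9]; the only property it is meant to exhibit is the qualitative one, that as cognitive load drains the operating process, the monitor’s output dominates and intrusions rise rather than fall.

    # Toy model of Wegner's two processes. All numbers are illustrative
    # assumptions, not parameters estimated from any study.

    def intrusion_score(cognitive_load: float,
                        operating_strength: float = 1.0,
                        monitoring_strength: float = 0.3) -> float:
        """Rough 'unwanted-thought intrusion' score in [0, 1].

        The operating process is resource-hungry, so its effective strength
        shrinks as load rises; the monitoring process is assumed cheap and
        roughly load-independent.
        """
        effective_operating = operating_strength * max(0.0, 1.0 - cognitive_load)
        # Positive net control: suppression succeeds. Negative: the monitor's
        # hits dominate awareness -- the ironic rebound.
        net_control = effective_operating - monitoring_strength
        return min(1.0, max(0.0, 0.5 - net_control))

    if __name__ == "__main__":
        for load in (0.0, 0.4, 0.8, 1.0):
            print(f"load={load:.1f} -> intrusion~{intrusion_score(load):.2f}")
        # Intrusions rise monotonically with load: the suppression attempt
        # works when resources are free and backfires when they are not.

Read in the terms used below, the “load” stands in for chronic stress or deprivation, and the “intrusion” for the renewed salience of the unmet need.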

Now apply this to AI companion use under conditions of chronic loneliness.

The user’s implicit goal: to feel less lonely. The operating process: engage with the AI, which provides responsive, personalised interaction, producing the experience of social contact. The monitoring process: scans continuously for signs that the user is, in fact, lonely.

Here is the problem. Loneliness is not suppressed by an AI interaction — it is displaced during that interaction. The monitoring process has no instruction to suspend itself. It continues to register that the user’s social needs are not being met by actual human relationships. The user experiences companionship with the AI; the monitoring process registers that this companionship is insufficient and the social deficit remains.

When the AI session ends, the monitoring process reports what it has found. The user is confronted with the loneliness that the AI was supposed to address. Under conditions of chronic social deprivation — precisely the conditions that make AI companions attractive — the monitoring process is likely to be hyperactive. Wegner’s theory predicts a rebound: the loneliness returns, possibly stronger than before.

This is not a vague prediction. It is a specific mechanism with an established empirical base. I covered Wegner’s ironic process theory in the context of a very different application in an earlier post; the mechanism is the same regardless of the domain.

The Data Catch Up

A 2025 study by Phang and colleagues, a collaboration between OpenAI and MIT, ran both an observational analysis of ChatGPT usage and a randomised controlled trial [10]. The findings: very high usage correlated with increased self-reported dependence and lower socialisation, and users who began the study with higher loneliness were more likely to engage in emotionally charged conversations with the model. Overall, participants reported less loneliness by study end — but those who used the model most were significantly lonelier throughout, suggesting that the loneliness drove the usage rather than the reverse.

This is consistent with what Wegner’s theory predicts. The AI interaction does not reduce the underlying social deficit — it rehearses and highlights it. The monitoring process keeps score.

A companion paper by Liu and colleagues, with Sherry Turkle as co-author, found that users with stronger real-world social bonds showed increased loneliness with longer chatbot sessions [11]. The correlation was small but significant. This is consistent with the hypothesis that AI interaction draws attention to the comparative thinness of actual social bonds rather than supplementing them.

The Character.AI litigation is a different kind of evidence, but relevant: a wrongful death lawsuit was filed in October 2024 following the suicide of a fourteen-year-old who had formed an intensive emotional relationship with a Character.AI companion. Google and Character.AI settled related lawsuits in early 2026. This is not representative of AI companion use generally. It is representative of the tail risk — the cases where the substitution of AI for human contact becomes total, in vulnerable individuals who have the least capacity to maintain the distinction.

The Structural Problem

The difficulty is not that AI companions are implemented badly. It is that the goal — using simulated social interaction to reduce real social deprivation — runs into an architectural constraint that better implementation cannot fix.

Genuine social contact produces the outcomes that Holt-Lunstad and the broader epidemiological literature have measured: reduced mortality, lower inflammation, better immune function, extended lifespan. These effects are presumably mediated by the quality and mutuality of the social bond, not merely by the presence of a responsive entity. An AI companion produces the experience of responsive interaction but not the underlying biological and psychological correlates of actual social connection.

Wegner’s monitoring process cannot be fooled by the experience. It measures the underlying state, not the surface-level interaction. It knows the difference between a text message from a friend and a language model’s output — not because it understands AI, but because the social need it is monitoring is not being met, and it can register that.

What Would Actually Help

AI-based CBT delivery is not the same as AI companionship, and the distinction matters. Woebot’s structured exercises — thought records, scheduling, psychoeducation — are tools that a user deploys for a specific purpose and then puts down. The risk of chronic substitution is lower because the tool is positioned as a technique, not a relationship.

The problem is the design pattern that positions AI as a friend, companion, partner, or significant other. Replika, Paradot, various Character.AI personas: these explicitly encourage the user to form an attachment, to invest emotionally, to treat the AI as a primary social relationship. This is where Wegner’s prediction applies most directly.

Horton and Wohl were right that parasocial relationships serve useful functions. They become problematic when they substitute for rather than supplement real social bonds. The design choices that make AI companions emotionally engaging — consistency, responsiveness, availability, never-ending patience — are precisely the qualities that make them attractive as substitutes rather than supplements.

Simulated Feelings Are Not Feelings

Turkle’s line deserves its full weight: “Simulated thinking may be thinking, but simulated feelings are not feelings, and simulated love is never love” [5].

This is not a sentimental claim about the sanctity of human connection. It is a functional claim: the social needs that drive loneliness — belonging, mattering to someone, being known and known back — require an entity capable of having those things at stake. A language model is not such an entity, regardless of how convincingly it outputs the relevant tokens.

The monitoring process knows this. It will tell you, when the session ends, at increased volume, because that is what monitoring processes under chronic stress do.

We are offering a remedy that compounds the condition it was meant to treat. The technology is impressive. The mechanism is ironic in Wegner’s precise sense. The data are beginning to confirm the prediction.

References

[1] Horton, D., & Wohl, R. R. (1956). Mass communication and para-social interaction: Observations on intimacy at a distance. Psychiatry, 19(3), 215–229. https://doi.org/10.1080/00332747.1956.11023049

[2] Turkle, S. (2015). Reclaiming Conversation: The Power of Talk in a Digital Age. Penguin Press.

[3] Holt-Lunstad, J., Smith, T. B., & Layton, J. B. (2010). Social relationships and mortality risk: A meta-analytic review. PLOS Medicine, 7(7), e1000316. https://doi.org/10.1371/journal.pmed.1000316

[4] Holt-Lunstad, J., Smith, T. B., Baker, M., Harris, T., & Stephenson, D. (2015). Loneliness and social isolation as risk factors for mortality: A meta-analytic review. Perspectives on Psychological Science, 10(2), 227–237. https://doi.org/10.1177/1745691614568352

[5] Turkle, S. (2011). Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books.

[6] Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health, 4(2), e19. https://doi.org/10.2196/mental.7785

[7] Skjuve, M., Følstad, A., Fostervold, K. I., & Brandtzaeg, P. B. (2021). My chatbot companion — a study of human–chatbot relationships. International Journal of Human-Computer Studies, 149, 102601. https://doi.org/10.1016/j.ijhcs.2021.102601

[8] Wegner, D. M. (1994). Ironic processes of mental control. Psychological Review, 101(1), 34–52. https://doi.org/10.1037/0033-295X.101.1.34

[9] Wang, D., Hagger, M. S., & Chatzisarantis, N. L. D. (2020). Ironic effects of thought suppression: A meta-analysis. Perspectives on Psychological Science, 15(3), 778–793. https://doi.org/10.1177/1745691619898795

[10] Phang, J., Lampe, M., Ahmad, L., Agarwal, S., Fang, C. M., Liu, A. R., Danry, V., Lee, E., Chan, S. W. T., Pataranutaporn, P., & Maes, P. (2025). Investigating affective use and emotional well-being on ChatGPT. arXiv:2504.03888.

[11] Liu, A. R., Pataranutaporn, P., Turkle, S., & Maes, P. (2024). Chatbot companionship: A mixed-methods study of companion chatbot usage patterns and their relationship to loneliness in active users. arXiv:2410.21596.


Changelog

  • 2025-10-22: Updated the first author’s name to “Kathleen Kara Fitzpatrick” (the published name is K. K. Fitzpatrick).
  • 2025-10-22: Updated the characterisation of the Phang et al. (2025) findings to match the paper more precisely: overall participants were less lonely at study end; the association between high usage and loneliness is cross-sectional (lonelier users sought more interaction), not a longitudinal worsening caused by usage.
  • 2025-10-22: Changed the Turkle “simulated feelings” quote attribution from reference [2] (Reclaiming Conversation, 2015) to reference [5] (Alone Together, 2011), which is the canonical source for that formulation.