TheraDesk

Psychotherapy and Artificial Intelligence: A Psychodynamic Perspective on Resistance, Challenges, and Clinical Perspectives

Jennifer Elalouf

Artificial Intelligence (AI) is gradually making its way into the field of mental health. Among psychotherapists, this emergence raises profound questions, sometimes tinged with distrust. These resistances are not rooted in conservatism or a rejection of progress; they are grounded in legitimate clinical, ethical, and psychological vigilance. This article offers a psychodynamic perspective on the fears and resistances surrounding AI, while opening a reflection that cuts across all psychotherapeutic approaches.

Resistances to AI

AI as an Intruder in the Therapeutic Space

From a psychodynamic perspective, the introduction of AI can be experienced as an intrusion into a fundamentally intersubjective space. The therapeutic frame, an essential symbolic container, rests on stability, confidentiality, and human presence. Any active technological mediation can trigger fantasies of dehumanization or control, sometimes bordering on persecutory anxieties.

Anxiety of Replacement and Narcissistic Injury

The fear that AI might replace the therapist strikes at the heart of professional identity. Psychically, it reactivates narcissistic issues: being replaced, standardized, rendered obsolete. The evidence to date, however, indicates that AI assists only certain peripheral functions and cannot access the complexity of transference and countertransference (Sharma et al., 2022).

The Fantasy of Total Knowledge

AI can also be invested as a figure of omniscient knowledge, awakening fears of a loss of clinical freedom. This representation deserves to be deconstructed: AI models remain dependent on the data and frameworks within which they are designed (Mandal et al., 2025).

Trans-Theoretical Resistances: A Common Ground Among Different Schools

Whether psychodynamic, systemic, humanistic, or cognitive-behavioral in orientation, psychotherapists share common concerns: respect for the frame, the therapeutic alliance, confidentiality, and clinical responsibility. Resistances to AI thus open a space for interdisciplinary dialogue rather than a theoretical divide.

What AI Can (and Cannot) Bring to Clinical Practice

A Support Function, Not a Substitute

Used as a tool, AI can support the organization of practice: time management, structuring notes, and continuity of follow-up. Studies show that conversational agents can facilitate emotional expression, without ever substituting for the human therapeutic relationship (Li et al., 2023).

The Irreducible Limits of Psychic Work

AI does not possess an unconscious, the capacity to tolerate ambivalence, or access to the symbolic dimension of symptoms. The work of transference, countertransference, and interpretation remains irreducibly human.

Ethical Issues and Clinical Framework

Integrating AI demands heightened vigilance regarding data confidentiality and respect for professional secrecy. French-language analyses emphasize the need for a clear framework to avoid any drift toward technosolutionism (Clavier & Botbol, 2023).

Conclusion: For an Enhanced, Not Automated, Clinical Practice

The resistances to AI in psychotherapy serve as a valuable clinical signal. They remind us that psychotherapy rests on a singular human encounter. When integrated with discernment, AI can become a discreet support, allowing clinicians to preserve the essentials: presence, listening, and clinical thought.

References

  • Clavier, B., & Botbol, M. (2023). Repenser la prise en charge de la santé mentale à l’ère de l’intelligence artificielle. L’Information psychiatrique, 99(4), 291–298.
  • Li, H., Zhang, R., Lee, Y. C., et al. (2023). Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. npj Digital Medicine, 6, 227.
  • Mandal, A., Chakraborty, T., & Gurevych, I. (2025). Towards privacy-aware mental health AI models: Advances, challenges, and opportunities.
  • Sharma, A., Lin, I. W., et al. (2022). Human–AI collaboration enables more empathic conversations in text-based peer-to-peer mental health support.
