Why Artificial Intelligence Disrupts Psychotherapeutic Practice
Artificial Intelligence (AI) is gradually making its way into the field of mental health. Writing-assistance tools, augmented clinical platforms, conversational agents, data analysis: uses are multiplying, often faster than the frameworks of thought needed to integrate them.
For many psychotherapists, this emergence elicits contrasting reactions: curiosity, caution, skepticism, sometimes rejection. These positions are often caricatured as a divide between modernity and conservatism. In reality, they reflect profound clinical, ethical, and psychological issues that must be considered rather than dismissed.
This article offers a nuanced reading: understanding the fantasies and resistances related to AI, identifying what it cannot do, and outlining the conditions for responsible clinical use, compatible with a demanding psychotherapeutic practice.
AI as a Figure of Total Knowledge
One of the most common fantasies is that of an omniscient AI, capable of understanding, predicting, and interpreting better than humans. This representation fuels the fear of a loss of clinical freedom, or even of a normalization of practices.
However, AI systems remain fundamentally dependent on data, frameworks, and goals defined by humans. They possess neither clinical intuition, nor a lived understanding of subjectivity, nor access to the symbolic dimension of psychic experience (Mandal et al., 2025).
AI as a Threat of Dehumanization
Another major fantasy associates AI with the disappearance of human relationships. From this perspective, any technological mediation would be experienced as a depletion of the therapeutic bond.
This fantasy deserves to be examined: clinical practice has never been “purely” relational. It has always relied on mediations—writing, institutional frameworks, diagnostic tools—without this negating the encounter. Thus, the question is not the presence of a tool, but the place assigned to it.
AI as an Unsettling Double of the Therapist
Finally, AI can be perceived as a figure of replacement: faster, more available, potentially less costly. This representation directly touches upon professional identity and the subjective value of clinical work.
Current research shows, however, that AI cannot support a complex therapeutic process, nor work with the ambivalence, temporality, and uncertainty inherent in psychotherapy (Sharma et al., 2022).
Resistances to AI do not stem from a rejection of progress. They can be understood as clinical signals.
Defending the Framework
The therapeutic framework relies on stability, confidentiality, and reliability. Any new technology legitimately raises questions about each of these pillars.
These concerns are elaborated in a more operational manner in the article “Confidentiality, GDPR, and Digital Tools for Therapists: What You Really Need to Know.”
Protecting the Space for Clinical Thought
Another common resistance concerns the fear of a standardization of clinical reasoning. Many therapists fear that digital tools may impose categories, grids, or logics that are foreign to their way of understanding the patient.
This vigilance is well-founded: poorly designed technology can impoverish clinical practice. However, technology conceived as a reflective support can, conversely, sustain the continuity and depth of the work.
Organizational and Cognitive Support
The most significant contributions of AI today pertain to organization, clinical memory, and the administrative dimension of practice.
These indirect uses have a real clinical impact: they free up psychic space for the session itself.
Enhancing Clinical Continuity
Some studies show that well-integrated digital tools can improve the continuity of follow-up and the coherence of care without substituting for the therapeutic relationship (Li et al., 2023).
AI then becomes a discreet assistant, serving the clinician, rather than an actor in the therapeutic process.
It is essential to set clear limits.
AI cannot offer genuine presence or listening, sustain a complex therapeutic process, or work with the ambivalence, temporality, and uncertainty inherent in psychotherapy.
These dimensions constitute the core of psychotherapeutic work and remain irreducibly human.
A responsible use of AI in psychotherapy rests on several principles: clearly defined limits, strict confidentiality, and a place assigned to the tool that keeps it in the service of the clinician rather than making it an actor in the therapeutic process.
In this perspective, some professional platforms have chosen AI designed as a tool for reflective support, integrated within a strict ethical framework that respects confidentiality and clinical work.
One often-overlooked point concerns the sustainability of therapeutic practice. By supporting organization, clinical memory, and administrative burden, AI can indirectly contribute to preventing professional burnout.
These issues are discussed in “Preventing Emotional Burnout in Therapists: Recognizing, Understanding, and Acting.”
AI acts as a revealer: it brings to light our representations of care, knowledge, and the therapeutic relationship. The fantasies and resistances it evokes are not obstacles to be eliminated but clinical materials to consider.
When integrated with discernment, AI can become a valuable support, allowing psychotherapists to preserve what lies at the heart of their profession: presence, listening, and clinical thought.
References
Clavier, B., & Botbol, M. (2023). Repenser la prise en charge de la santé mentale à l’ère de l’intelligence artificielle. L’Information psychiatrique, 99(4), 291–298.
Li, H., Zhang, R., Lee, Y. C., et al. (2023). Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. npj Digital Medicine, 6, 227.
Mandal, A., Chakraborty, T., & Gurevych, I. (2025). Towards privacy-aware mental health AI models: Advances, challenges, and opportunities. ACM Computing Surveys.
Sharma, A., Lin, I. W., Miner, A. S., Atkins, D. C., & Althoff, T. (2022). Human–AI collaboration enables more empathic conversations in text-based peer-to-peer mental health support. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2).