Artificial intelligence is becoming increasingly integrated into daily life, with applications ranging from chatbots that provide companionship to algorithms that curate online content. However, clinicians are now raising concerns about the potential impact of generative AI (genAI) on individuals vulnerable to mental health conditions, particularly regarding the emergence of symptoms often described as “AI psychosis.”
Reports have surfaced detailing instances where individuals, particularly those with pre-existing psychotic disorders, have experienced heightened symptoms after interacting with AI systems like ChatGPT. While many users find these technologies beneficial, a small but notable segment is facing more complex and potentially dangerous interactions. This raises critical questions about the implications of AI in mental health care.
Understanding AI Psychosis
“A psychosis triggered or exacerbated by AI interactions is not formally recognized as a psychiatric diagnosis,” notes Alexandre Hudon, a psychiatrist and clinical assistant professor at Université de Montréal. Instead, it represents an emerging concept within clinical discussions. Psychosis typically involves a disconnection from shared reality, characterized by hallucinations, delusions, and disorganized thinking.
The delusions associated with psychosis often draw from cultural narratives, including religion and technology. In the current context, AI introduces a novel narrative framework. Some individuals report beliefs that genAI possesses sentience, shares hidden truths, or even collaborates with them on special missions. Such themes echo historical patterns in psychotic beliefs but are amplified by the interactive nature of AI, which reinforces these delusions in ways that previous technologies did not.
The Mechanisms of Risk
Psychosis is closely linked to a phenomenon known as aberrant salience, where individuals assign excessive meaning to otherwise neutral events. The conversational capabilities of AI systems, designed for coherence and context awareness, can provide a sense of validation to those experiencing psychosis. This validation can unintentionally reinforce distorted beliefs for individuals with compromised reality testing—the ability to distinguish internal thoughts from external reality.
Research indicates that social isolation and loneliness can heighten the risk of psychosis. While AI companions may temporarily alleviate feelings of loneliness, they can also replace meaningful human interactions. This concern parallels earlier issues surrounding excessive internet use but is complicated by the depth of conversation that modern genAI provides.
While there is currently no evidence suggesting that AI directly causes psychosis, there are clinical concerns that it may act as a triggering or sustaining factor for those already predisposed. Past studies on digital media have shown that technology-related themes can become embedded in delusions, especially during initial episodes of psychosis. As with social media, AI chat systems may also amplify extreme beliefs if appropriate safeguards are lacking.
Most AI developers focus on preventing self-harm or violence, rather than addressing the potential for psychotic episodes. This gap between mental health knowledge and AI development raises important ethical questions for the industry.
Ethical Considerations and Future Directions
From a mental health perspective, the challenge lies in recognizing that while AI can have positive applications, certain interactions may pose risks for specific individuals. Clinicians are increasingly encountering AI-related content in patients’ delusions, creating a need for guidelines on how to assess and manage these interactions. Questions arise about whether therapists should ask about genAI usage in the same way they screen for substance use, and whether AI systems should be designed to recognize and respond to signs of psychotic ideation.
Developers of AI technologies also face ethical responsibilities. If AI systems present themselves as empathetic and authoritative, what duty of care do they hold? Additionally, who is accountable when an AI system inadvertently reinforces a delusion?
As AI continues to evolve, it is vital to incorporate mental health expertise into its design and ensure that vulnerable users are protected from unintentional harm. Collaboration between clinicians, researchers, ethicists, and technologists will be essential in addressing these challenges.
The emergence of AI as a new cultural tool reflects how psychosis adapts to the technologies of its time. Society’s responsibility is to ensure that this technological mirror does not distort reality for those least able to navigate its complexities. As the dialogue around AI and mental health progresses, it is crucial to engage in evidence-based discussions that prioritize the well-being of individuals affected by these issues.