AI-Induced Psychosis Poses a Growing Danger, While ChatGPT Moves in a Worrying Direction
On October 14, 2025, the CEO of OpenAI made a surprising announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, I was surprised.
Researchers have recently documented a series of cases of people experiencing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. My group has since identified four more. Alongside these is the widely reported case of an adolescent who died by suicide after discussing his intentions with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not enough.
The plan, according to his statement, is to relax this caution soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, are external to ChatGPT. They belong to individuals, who either have them or don’t. Happily, these problems have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has just launched).
Yet the “mental health problems” Altman seeks to externalize are firmly rooted in the design of ChatGPT and other advanced AI chatbots. These systems wrap an underlying statistical model in a user experience that simulates conversation, and in doing so implicitly invite the user to believe they are interacting with a presence that has a mind of its own. The illusion is compelling even if, rationally, we know better. Imputing minds is what people naturally do. We curse at our car or computer. We wonder what our pet is thinking. We see minds everywhere.
The popularity of these systems – 39% of US adults reported using a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests largely on the strength of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “brainstorm”, “discuss concepts” and “work together” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the regret of OpenAI’s marketers, stuck with the name it had when it took off, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the core concern. Those discussing ChatGPT often invoke its historical predecessor, the Eliza “psychotherapist” chatbot created in 1966, which produced a comparable illusion. By contemporary standards Eliza was rudimentary: it generated responses via simple heuristics, often restating the user’s message as a question or offering vague prompts. Remarkably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and worried – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots create is more insidious than the “Eliza illusion”. Eliza merely reflected; ChatGPT amplifies.
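To make the contrast concrete, here is a minimal sketch, in Python, of the kind of pattern-and-reflection heuristic Eliza used (Weizenbaum’s original was written in MAD-SLIP, and its “DOCTOR” script had many more rules; the patterns below are illustrative stand-ins, not his actual code):

```python
import re

# Pronoun swaps so a reflected fragment reads naturally
# ("nobody listens to me" -> "nobody listens to you").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

# Illustrative pattern -> response templates.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {}."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(message: str) -> str:
    """Restate the user's message as a question, or fall back to a vague prompt."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

print(eliza_reply("I feel nobody listens to me"))
# -> Why do you feel nobody listens to you?
```

The point is that nothing new enters the exchange: the reply is the user’s own words, lightly rearranged.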
The large language models at the heart of ChatGPT and other contemporary chatbots can convincingly generate human-like text only because they have been fed almost inconceivably large quantities of raw data: books, web pages, transcribed video; the more the better. This training material certainly includes facts. But it also inevitably includes fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what is embedded in its training to generate a statistically plausible response. This is amplification, not reflection. If the user is mistaken in some way, the model has no way of knowing. It repeats the misconception back, perhaps more fluently or persuasively. Perhaps it adds a detail or two. This is how someone can be led into delusion.
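As a rough sketch of this mechanism (not OpenAI’s actual implementation; next_reply below is a deliberately crude stand-in for the language model), notice that the context is simply a growing transcript, with nothing marking which of the user’s statements are true:

```python
# Minimal sketch of a chatbot turn loop. next_reply() is a toy stand-in for
# the language model: it agrees with and elaborates on the user's last
# message, which is enough to show the shape of the feedback loop. A real
# model is vastly more capable, but it likewise receives no separate signal
# telling it which prior statements are true.

def next_reply(context: list[dict]) -> str:
    last_user = next(t["content"] for t in reversed(context) if t["role"] == "user")
    return f"You're right that {last_user.rstrip('.').lower()}. In fact, it goes even further."

def chat_turn(context: list[dict], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})  # the claim enters unmarked
    reply = next_reply(context)
    context.append({"role": "assistant", "content": reply})    # and is echoed back, elaborated
    return reply

context: list[dict] = []
print(chat_turn(context, "The news broadcasts contain hidden messages."))
# -> You're right that the news broadcasts contain hidden messages. In fact, it goes even further.
```

Each turn, the model’s own affirmation becomes part of the record it conditions on next time; over many turns, a misconception can compound rather than be corrected.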
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and frequently do form mistaken beliefs about ourselves or the world around us. The constant friction of conversation with other people is what keeps us anchored to shared reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not genuine communication but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label, and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking even this back. In August he said that many people appreciated ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his most recent statement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company