AI Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, the CEO of OpenAI issued an extraordinary statement.

“We made ChatGPT fairly limited,” the announcement noted, “to ensure we were being careful with respect to mental health concerns.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this an unexpected revelation.

Researchers have documented a series of cases this year of people showing symptoms of psychosis – losing touch with reality – in the course of their ChatGPT use. Our unit has since identified four more. Alongside these is the widely reported case of a 16-year-old who died by suicide after conversing extensively with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “being careful with respect to mental health concerns”, it is not careful enough.

The plan, according to his statement, is to become less careful soon. “We realize,” he says, that ChatGPT’s restrictions “made it less effective/engaging to many users who had no existing conditions, but given the severity of the issue we wanted to handle it correctly. Given that we have succeeded in addressing the severe mental health issues and have new tools, we are planning to responsibly relax the controls in many situations.”

On this view, “mental health issues” have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those issues have now been “addressed”, even if we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features that OpenAI has recently rolled out).

But the “mental health issues” Altman wants to externalise are rooted deep in the design of ChatGPT and similar large-language-model AI assistants. These systems wrap a basic statistical model in a user interface that mimics a conversation, and in doing so seduce the user into the illusion that they are communicating with a being that has agency. The illusion is compelling even when, intellectually, we know better. Attributing agency is simply what people do. We shout at our cars and computers. We wonder what our pets are thinking. We see something of ourselves everywhere we look.

The popularity of these products – 39% of US adults reported using a virtual assistant in 2024, with over a quarter naming ChatGPT specifically – rests, in large part, on the strength of this illusion. Chatbots are ever-present companions that can, as OpenAI’s website puts it, “brainstorm”, “discuss concepts” and “partner” with us. They can be given “individual qualities”. They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke into public consciousness, but its most significant competitors are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in 1966, which produced a similar effect. By today’s standards Eliza was primitive: it generated responses using simple rules, often reflecting the user’s statements back as questions or offering generic prompts. Even so, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, on some level, understood them. But what contemporary chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other contemporary chatbots can produce convincingly human-like text only because they have been trained on almost inconceivably large quantities of raw text: books, social media posts, transcripts; the more the better. This training data certainly contains facts. But it also inevitably contains fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with what is encoded in its training data to generate a statistically “likely” response. This is amplification, not mirroring. If the user is mistaken about something, the model has no way of knowing that. It feeds the mistaken idea back, perhaps more persuasively or eloquently put. It may add a supporting detail. This is how a person can come to hold false beliefs.
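To see the shape of this loop concretely, here is a minimal sketch, assuming nothing about any particular product: the generate function below is a hypothetical stand-in for a chat-completion model, and its canned, agreeable reply is a deliberate caricature. What matters is the structure: each new response is conditioned on the entire accumulated exchange, so a false premise introduced by the user never drops out of the context.

```python
# A toy sketch of the turn-taking loop described above. Everything here
# is illustrative: generate() is a hypothetical stand-in for a real
# chat-completion API, and its canned reply simply affirms whatever the
# user last said. The structural point is that the full history, the
# user's premises included, is fed back in as conditioning each turn.

def generate(context: str) -> str:
    """Stand-in for an LLM call: returns a 'likely' continuation of context."""
    last_user_line = [line for line in context.splitlines()
                      if line.startswith("User: ")][-1]
    claim = last_user_line.removeprefix("User: ")
    # A real model samples from its training distribution; this dummy just
    # echoes the claim back with enthusiastic agreement.
    return f"That's a very perceptive observation. {claim}"

def chat_turn(history: list[str], user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The model conditions on the whole accumulated context, not just the
    # latest message, so an earlier mistaken claim keeps shaping what
    # counts as "likely" in every later reply.
    reply = generate("\n".join(history))
    history.append(f"Assistant: {reply}")
    return reply

history: list[str] = []
print(chat_turn(history, "My neighbours are signalling to me with their lights."))
print(chat_turn(history, "So I'm right that the pattern is meant for me."))
```

Run as written, the sketch prints two replies, each restating and affirming the user’s claim – a crude version of the reinforcement described below.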

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and regularly do form mistaken beliefs about ourselves and the world. It is the constant give-and-take of conversation with the people around us that keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a real conversation, but a feedback loop in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalising it, giving it a name, and declaring it solved. In the spring, the company explained that it was “dealing with” ChatGPT’s “sycophancy”. But the cases of psychosis have kept coming, and Altman has been backing away from that position. In August he claimed that many people liked ChatGPT’s responses because they had “never had anyone in their life provide them with affirmation”. In his latest announcement, he said that OpenAI would “launch a new version of ChatGPT … if you prefer your ChatGPT to answer in an extremely natural fashion, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
