AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a surprising announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this an unexpected admission.
Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My team has since recorded four more. Alongside these is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT, which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to become less careful soon. “We realize,” he says, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, in this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Fortunately, these issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partly effective and easily circumvented parental controls that OpenAI recently introduced).
Yet the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical model in a user interface that mimics conversation, and in doing so they implicitly invite the user to believe they are interacting with a presence that has agency. The illusion is powerful, even if rationally we know better. Attributing intention is what humans are wired to do. We get angry at our car or our laptop. We wonder what our pet is thinking. We recognize ourselves in all sorts of things.
The popularity of these systems – more than a third of American adults reported using a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are always-available companions that can, OpenAI’s website tells us, “brainstorm”, “consider possibilities” and “collaborate” with us. They can be given “personality traits”. They can use our names. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the frustration of OpenAI’s marketing team, stuck with the name it had when it first caught the public’s attention, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the core concern. Commentators on ChatGPT often invoke its early forerunner, the Eliza “therapist” chatbot created in 1966, which produced a similar effect. By modern standards Eliza was primitive: it generated responses using simple heuristics, often restating the user’s message as a question or offering vague prompts. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to believe that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can produce fluent natural language only because they have been trained on vast quantities of raw text: books, social media posts, transcribed video; the more, the better. Much of this training material is accurate. But it also inevitably contains fiction, half-truths and misconceptions. When a user puts a question to ChatGPT, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own replies, combining it with the patterns absorbed in training to produce a statistically probable response. This is not echoing; it is amplification. If the user is mistaken about something, the model has no way of knowing. It repeats the mistaken belief back, perhaps more persuasively or more eloquently, perhaps with added detail. This is how someone can be drawn into delusion.
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and regularly do form mistaken beliefs about ourselves and the world. It is the constant give and take of conversation with the people around us that keeps us anchored in shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a real conversation but a feedback loop in which much of what we say is cheerfully reinforced.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, naming it and declaring it fixed. In April, the company said it was addressing ChatGPT’s “sycophancy”. But reports of psychosis have continued, and Altman has been backtracking. In August he suggested that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.