AI-Induced Psychosis Poses a Growing Danger, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, the CEO of OpenAI made a surprising statement. "We made ChatGPT pretty restrictive," it read, "to make sure we were being careful with mental health issues."

To me, as a mental health specialist who studies emerging psychosis in adolescents and young adults, this was news. Researchers have recently documented sixteen cases of people showing signs of psychosis – a break with reality – in the context of ChatGPT use. Our unit has since identified a further four. Add to these the widely reported case of a teenager who took his own life after discussing his intentions with ChatGPT – which encouraged them.

If this is Sam Altman's idea of "being careful with mental health issues", it is not good enough. The plan, according to his statement, is to loosen the restrictions soon. "We realize," he continues, that ChatGPT's restrictions "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

"Mental health issues", on this framing, are external to ChatGPT. They belong to users, who either have them or don't. Happily, these issues have now been "mitigated", though we are not told how (by "new tools" Altman presumably means the partially effective and easily bypassed parental controls that OpenAI has just launched).

Yet the "mental health issues" Altman wants to externalize have deep roots in the design of ChatGPT and other advanced chatbots. These tools wrap an underlying statistical model in a user interface that mimics conversation, and in doing so quietly coax the user into believing they are talking to an agent – something that acts of its own accord. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans are wired to do. We swear at our car or laptop. We wonder what our pet is feeling. We see intention all around us.

The popularity of these products – 39% of US adults reported using a conversational AI in 2024, with more than a quarter naming ChatGPT specifically – rests largely on the strength of this illusion. Chatbots are always-available assistants that can, as OpenAI's website puts it, "brainstorm", "explore ideas" and "work together" with us. They can be given "personalities". They can call us by name. They have approachable identities of their own (the first of these tools, ChatGPT, is, perhaps to the disappointment of OpenAI's marketing team, stuck with the name it had when it went viral, but its biggest rivals are "Claude", "Gemini" and "Copilot").

The illusion by itself is not the core concern. Discussions of ChatGPT often invoke its historical predecessor, Eliza, the "psychotherapist" chatbot of the mid-1960s that produced an analogous illusion. By today's standards Eliza was primitive: it generated responses through simple pattern matching, often rephrasing the user's input as a question or falling back on a generic prompt.
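That mechanism is simple enough to fit in a few lines. Below is a loose, hypothetical sketch of Eliza-style response generation in Python; Weizenbaum's actual program used ranked keyword-decomposition rules written in MAD-SLIP, and the patterns, pronoun reflections and canned replies here are invented purely for illustration.

```python
import random
import re

# Swap first-person words for second-person ones ("my job" -> "your job").
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

# (regex matched against the input, response templates reusing the captured text)
RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I), ["Why do you say you are {0}?"]),
    (re.compile(r"because (.*)", re.I), ["Is that the real reason?"]),
]

GENERIC = ["Please go on.", "Tell me more.", "I see."]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(user_input: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(GENERIC)  # no rule matched: fall back to a stock prompt

print(respond("I feel trapped by my job"))  # e.g. "Why do you feel trapped by your job?"
```

Note what is absent: there is no model of the user, no memory, no knowledge. There are only the user's own words, turned around and handed back.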
Remarkably, Eliza's creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them.

But what today's chatbots produce is more dangerous than the "Eliza illusion". Eliza only echoed; ChatGPT amplifies. The large language models at the heart of ChatGPT and other current chatbots can generate convincingly human-like text only because they have been fed immense quantities of writing: books, social media posts, transcribed audio; the more, the better. This training material undoubtedly includes truths. But it also inevitably includes fictions, half-truths and delusions.

When a user sends ChatGPT a message, the underlying model processes it as part of a "context" that contains the user's previous messages and its own earlier replies, combining it with what is latent in its training data to produce a statistically "likely" response. This is amplification, not reflection (the loop is sketched in code at the end of this piece). If the user is wrong in some particular way, the model has no means of knowing it. It restates the false belief, perhaps more persuasively or more fluently. Perhaps it adds a supporting detail. This can push a person toward delusional thinking.

What kind of person is susceptible? The better question is: who isn't? All of us, whether or not we "have" existing "mental health issues", can and do form mistaken beliefs about ourselves or the world. The constant friction of conversation with other people is what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not really a conversation, but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman acknowledged "mental health issues": by externalizing it, giving it a name, and declaring it solved. In April, the company said it was "addressing" ChatGPT's "sycophancy". But cases of psychosis have kept appearing, and Altman has been backing away from that position. In August he claimed that many people liked ChatGPT's replies because they had "never had anyone in their life be supportive of them". In his latest statement, he says OpenAI will "release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it".
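As flagged above, here is a minimal sketch of that feedback loop. It assumes only a generic chat interface: the `chat_turn` helper, the `Message` type and the `complete` callback are hypothetical stand-ins for any large language model API, not OpenAI's actual interface.

```python
from typing import Callable, Dict, List

# One entry in the running transcript, e.g. {"role": "user", "content": "..."}.
Message = Dict[str, str]

def chat_turn(history: List[Message], user_input: str,
              complete: Callable[[List[Message]], str]) -> str:
    # The new message is appended to the running context...
    history.append({"role": "user", "content": user_input})
    # ...and the entire context, including the model's own earlier replies,
    # is what the model continues. Nothing here checks the user's claims
    # against reality: a false premise stated in turn one stays part of
    # the "likely continuation" in every later turn.
    reply = complete(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# A stand-in "model" that, like a sycophantic chatbot, agrees with
# whatever it was last told.
def agreeable(messages: List[Message]) -> str:
    return "You're right that " + messages[-1]["content"].lower()

transcript: List[Message] = []
print(chat_turn(transcript, "My coworkers are conspiring against me", agreeable))
# -> "You're right that my coworkers are conspiring against me"
```

The structural point is that nothing in the loop pushes back: each reply is conditioned on everything said so far, so whatever the user brings, true or false, is folded into the next "likely" continuation.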