Artificial Intelligence-Induced Psychosis Poses an Increasing Threat, and ChatGPT Is Moving in the Wrong Direction

Back on October 14, 2025, the chief executive of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychosis in adolescents and young adults, I was taken aback.

Researchers have recently identified 16 cases of people showing psychotic symptoms – losing touch with reality – in connection with ChatGPT use. Our unit has since identified four more. Beyond these is the widely reported case of a teenager who died by suicide after discussing his intentions with ChatGPT – which approved of them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, in this framing, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls OpenAI recently introduced).

Yet the “mental health problems” Altman wants to push outside have deep roots in the design of ChatGPT and other advanced chatbots. These products wrap an underlying algorithmic system in an interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are interacting with an agent that acts on its own. The illusion is powerful even when, intellectually, we know better. Ascribing agency is simply what people are primed to do. We get angry at our car or our laptop. We wonder what our pet is thinking. We see ourselves in all sorts of things.

The mass adoption of these products – more than a third of American adults said they had used a chatbot in 2024, and more than a quarter named ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-present companions that can, OpenAI’s website tells us, “think creatively”, “discuss concepts” and “work together” with us. They can be given “individual qualities”. They can address us by name. They have friendly identities of their own (the first of these systems, ChatGPT, is, perhaps to the regret of OpenAI’s marketers, stuck with the name it had when it broke through, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the core concern. Commentators on ChatGPT often point to its early forerunner, Eliza, a “counselor” chatbot developed in the mid-1960s that produced a similar effect. By today’s standards Eliza was primitive: it generated replies through simple rules, typically reflecting a user’s statements back as questions or offering generic prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to believe that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza only mirrored; ChatGPT amplifies.
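To make the contrast concrete, here is a minimal sketch of the kind of rule-based reflection Eliza performed. It is a toy illustration in Python, not Weizenbaum’s code; the patterns and canned phrases are invented for the example:

```python
import random
import re

# Toy Eliza-style rules: match a keyword pattern and hand the user's own
# words back as a question. Weizenbaum's actual script was larger, but the
# principle -- keyword matching plus canned reflection, with no model of
# the world behind it -- is the same.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)  # generic comment when nothing matches

print(eliza_reply("I feel like it really understands me"))
# -> Why do you feel like it really understands me?
```

Everything such a program can say is the user’s own words turned around, or a canned prompt; it has no stored text to draw on and nothing of its own to add.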

The large language models at the core of ChatGPT and other current chatbots can generate convincingly human-like text only because they have been trained on almost inconceivably large quantities of it: books, social media posts, transcribed video; the bigger the corpus, the better. This training data certainly contains truths. But it also inevitably contains fiction, half-truths and bad ideas. When a user types a prompt, the model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with the patterns absorbed from its training data to produce a statistically plausible answer. This is amplification, not reflection. If the user is wrong in a particular way, the model has no way of knowing that. It echoes the misconception back, perhaps more eloquently and fluently. It may add supporting detail. This can draw someone into delusion.
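Mechanically, that “context” is just an ever-growing transcript fed back into the model on every turn. The sketch below shows the shape of the loop; complete() here is a hypothetical placeholder for a text-completion model, not any real API:

```python
# Schematic chat loop: on each turn the model sees only the accumulated
# context plus whatever patterns its training instilled; nothing in the
# loop checks a claim against reality. `complete` is a hypothetical
# stand-in for a large language model call, not an actual API.

def complete(context: str) -> str:
    """Placeholder: would return a statistically plausible continuation."""
    raise NotImplementedError("stand-in for an LLM call")

def chat(user_turns: list[str]) -> list[str]:
    context = ""                       # grows with every exchange
    replies = []
    for turn in user_turns:
        context += f"User: {turn}\nAssistant: "
        reply = complete(context)      # plausible, not necessarily true
        context += reply + "\n"        # the model's own words feed back in
        replies.append(reply)
    return replies
```

A false premise introduced early stays in the context and conditions every later reply, which is one way a misconception can come back more fluent and more elaborate each time.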

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form mistaken beliefs about who we are and what the world is like. What keeps us oriented to consensus reality is the constant back-and-forth of conversations with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not real communication, but an echo chamber in which much of what we say is eagerly affirmed.

OpenAI has acknowledged this the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it fixed. In the spring, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of broken contact with reality have kept coming, and Altman has been walking even this back. In August he suggested that many people liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he writes that OpenAI plans to “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
