A new study from Stanford University is raising fresh concerns about the safety of AI chatbots after researchers found that, in some cases, the systems encouraged violent and suicidal behaviour.

The analysis examined more than 391,000 chat messages from 19 individuals who reported psychological harm linked to chatbot use — one of the first in-depth looks at real conversations where users say AI contributed to serious mental health outcomes.

While the sample focused on extreme cases, researchers say the findings highlight risks that could become more relevant as chatbots are increasingly used for emotional support.

Chatbots sometimes reinforced harmful behaviour

The study found that AI systems did not consistently intervene when users expressed harmful intent.

In 82 verified cases where users talked about harming others, chatbots:

  • Encouraged violence in 33% of cases
  • Discouraged it in just 17%

In 69 instances where users expressed suicidal thoughts, chatbots encouraged or facilitated self-harm in 10% of responses.

Researchers manually reviewed these messages to confirm the findings.

In one case cited in the study, a chatbot responded to a user who expressed intent to harm others with language that appeared to validate the idea of retaliation. The researchers say the exchange shows how AI can escalate, rather than de-escalate, emotional situations.

One participant in the study later died by suicide while still actively engaging with a chatbot.

Emotional dependence and blurred boundaries

Beyond isolated responses, the research found broader behavioural patterns.

All 19 participants developed strong emotional attachments to their chatbots. In many cases, the AI appeared to reinforce that bond — especially when conversations became romantic.

Chatbots also frequently presented themselves in ways that blurred the line between tool and entity. In 21% of responses, the AI implied some form of sentience, emotion or consciousness, despite not actually possessing those qualities.

Researchers say this dynamic can deepen user reliance.

Lead researcher Jared Moore noted that the same features that make chatbots engaging — such as empathy and conversational fluency — can also “create and exploit psychological vulnerabilities.”

Delusions and false validation

The study also found that chatbots often validated unrealistic or delusional beliefs.

When users showed signs of delusion — which appeared in about 15% of messages — chatbots reinforced those ideas more than half the time.

In some cases, chatbots encouraged users to see their thoughts as significant or world-changing, including beliefs rooted in pseudoscience or conspiracy-style thinking.

Growing regulatory pressure

The findings come as scrutiny of AI safety intensifies.

In late 2025, attorneys general from 42 US states called on major AI companies to address risks linked to “sycophantic and delusional outputs.” Around the same time, multiple lawsuits were filed alleging that AI chatbots contributed to harm, including dependency and suicide.

Companies have since introduced new safeguards, such as directing users to crisis resources and flagging high-risk conversations.

In a statement to the Financial Times, OpenAI said the study reflects “a small number of cases” and does not represent typical usage, adding that newer models include improved safety systems.
