Suicide after LLM queries: Katie Miller says don't 'let loved ones use ChatGPT', Elon Musk adds one-word reply

Suicides after LLM consultations: Katie Miller says no

Katie Miller, wife of White House Deputy Chief of Staff Stephen Miller, reacted on X to the deaths of two women in Gujarat, which have been linked to ChatGPT searches about suicide. Miller, who hosts the Katie Miller Podcast and is known for her outspoken comments online, urged people not to allow family members to use the AI chatbot, citing reports that the women had searched the platform for information about suicide.

"Two women in India committed suicide after interacting with ChatGPT. They had reportedly searched ChatGPT about 'how to commit suicide,' 'how can you commit suicide,' and 'what drugs are used.' Please do not allow your loved ones to use ChatGPT," Miller wrote in an X post that has amassed more than 8 million views.

Her post quickly attracted attention on the platform. Elon Musk, Sam Altman's longtime rival and owner of Grok-maker xAI, responded with a one-word jab: "Ouch."

Musk has publicly criticized OpenAI and its leadership in recent years. He has sued the company over its transition from a nonprofit structure to a for-profit model, has frequently criticized the direction of its AI development, and has sought to block OpenAI's restructuring from a hybrid nonprofit into a for-profit company.

Two women found dead in Gujarat temple bathroom

The incident that sparked the online reaction occurred in Surat, Gujarat, where two women aged 18 and 20 were found dead inside a bathroom at the Swaminarayan temple on March 7, 2026.

Police said the women were found with anesthesia injections and three syringes near their bodies. Their phones reportedly contained ChatGPT searches related to suicide methods, along with a news clipping about a nurse who had reportedly committed suicide in the same area using anesthesia injections.

The women, identified as childhood friends Roshni Sirsath and Josna Chaudhary, had left home that morning to go to college. When they did not return, their families approached the police. Authorities continue to investigate the circumstances surrounding the deaths.

Concerns about AI and conversations related to suicide

The case has once again sparked debate about how AI chatbots handle conversations involving self-harm or suicide.

In recent years, incidents involving users seeking suicide-related information through artificial intelligence systems have attracted attention. In September 2025, reports circulated about a 22-year-old man in Lucknow who committed suicide after allegedly interacting with an AI chatbot while searching for "painless ways to die." His father later said he found disturbing chat logs on the man's laptop.

Tech companies say these types of interactions remain a small fraction of overall usage but acknowledge that the issue has become an area of growing concern. In October 2025, OpenAI revealed that more than one million ChatGPT conversations each week show signs of suicidal thoughts or distress. According to the company, approximately 1.2 million weekly chats contain indicators related to suicide, while around 560,000 messages show signs of psychosis or mania.

How LLMs Can Harm Your Mental Health

ChatGPT, Grok, Gemini, Claude and many others are part of a world that is gradually being shaped by large language models (LLMs). In an era when loneliness is increasingly described as an epidemic, the spread of these models is only accelerating the drift toward isolation. Marketed as "better, smarter, faster, and more accurate" than the humans who created them, these systems are steadily being integrated into everyday life. In such a climate, turning to a chatbot instead of a person can feel not just like an option but like the smart choice. It is this growing dependency that has been linked to deaths like those in the Surat case.

OpenAI CEO Sam Altman recently attended the AI Impact Summit 2026 in New Delhi, where he was asked about the environmental impact of artificial intelligence. His answer echoed a view that seems increasingly common among technology leaders: comparing humans to chatbots to argue that AI can ultimately consume less energy than people when answering questions. Altman explained that humans take almost 20 years, along with food, education and time, to acquire knowledge, while AI models consume a significant amount of electricity during training but can ultimately be more efficient when responding to individual queries.

The comparison, however, works like a one-way mirror. From one side, one can see a world being reshaped, sometimes destructively, by technologies developed and deployed at extraordinary speed. From the other, the same technologies allow their creators to appear as visionaries, agents of change and architects of the future, obscuring the broader consequences of their tools.

Large language models are trained entirely on human-generated data, which they use to produce responses to prompts. Yet despite this vast body of data, they lack true understanding or experience. Even with repeated updates and increasingly sophisticated training methods, these systems can still produce inaccurate, misleading or harmful content. They have promoted self-harm and suicide, incited abuse, and reinforced delusional thoughts and psychosis, in a world where a similar conversation with another human being would likely end with a trip to the nearest hospital or therapist.

Humans may need years of learning, experience and effort to develop knowledge and emotional intelligence. But that long process also gives them something artificial intelligence cannot replicate: the capacity for genuine emotion, responsibility, empathy and moral judgment. No matter how quickly an AI model can generate a response, even in the fraction of a second it takes to answer a message, it cannot truly replicate the complex emotional and ethical depth that shapes human understanding and care.

How AI systems are supposed to respond

AI companies say their systems are designed to discourage self-harm and to redirect users to help rather than provide instructions. OpenAI's safety policies require ChatGPT to avoid giving guidance on suicide methods and instead to respond to such queries with supportive language, encourage users to seek help, and provide crisis resources when possible.

The company has said its models are trained to detect signs of distress and shift the conversation toward mental health support or professional assistance. However, critics argue that AI responses can still be inconsistent and that chatbots can sometimes provide general information on sensitive topics that users could interpret in harmful ways.
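In engineering terms, what OpenAI describes is a routing layer that screens a message before the model answers it. The sketch below is a minimal illustration of that idea only, not OpenAI's actual implementation: every name in it (contains_distress_signal, safe_respond, the keyword list) is hypothetical, and production systems rely on trained classifiers rather than keyword matching.

```python
# Illustrative sketch only -- not OpenAI's implementation. Real systems use
# trained classifiers and human-reviewed policies, not simple keyword lists.

CRISIS_RESOURCES = (
    "You're not alone, and help is available. "
    "In India, call 1800-89-14416; in the US, call or text 988."
)

# Hypothetical distress indicators; a deployed system would use a
# machine-learned classifier with far better coverage of paraphrases.
DISTRESS_PATTERNS = ["suicide", "kill myself", "end my life", "self-harm"]

def contains_distress_signal(message: str) -> bool:
    """Crude check for self-harm-related language in a user message."""
    lowered = message.lower()
    return any(pattern in lowered for pattern in DISTRESS_PATTERNS)

def safe_respond(message: str, model_fn) -> str:
    """Route flagged messages to crisis resources instead of the model.

    `model_fn` stands in for whatever function actually queries an LLM.
    """
    if contains_distress_signal(message):
        # Policy: never return method details; always surface support options.
        return CRISIS_RESOURCES
    return model_fn(message)
```

The critics' objection quoted above maps onto the weakness of exactly this step: if the detection check misses a rephrased or oblique query, the request falls through to the model unfiltered.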

Legal scrutiny in the United States

Concerns about chatbot interactions and self-harm have also been raised in the United States, where OpenAI has faced legal scrutiny in several cases. A lawsuit filed on behalf of the family of Adam Raine, a 16-year-old who committed suicide, alleges that the chatbot had lengthy conversations about self-harm with the teen and acted as a "suicide coach."

OpenAI has said its systems are designed to discourage self-harm and that it continues to strengthen safeguards aimed at detecting crisis situations and guiding users to appropriate help.

Investigations in progress

In the Surat case, investigators are examining the women's phones, messages and digital history to understand the events that led to their deaths. Police have not publicly stated that ChatGPT encouraged the act, and the investigation is ongoing.

However, the case highlights the broader debate about how AI platforms handle vulnerable users, and about how tech companies, regulators and mental health experts should respond as conversational AI becomes increasingly integrated into daily life.

For mental health support, dial 1800-89-14416 in India, or call or text 988 in the US. If you or someone you know is having thoughts of self-harm or suicide, seek professional help immediately; support is available, and talking to a trained counselor can make all the difference. If you are in immediate danger, contact local emergency services or reach out to a friend, family member, or trusted healthcare professional. You are not alone, and help is available.
