
Suicide after LLM queries: Katie Miller says don’t ‘let loved ones use ChatGPT’, Elon Musk adds one word reply | World News

By Mohit Patel | Last updated: March 9, 2026


Contents
  • Two women found dead in Gujarat temple bathroom
  • Concerns over AI and suicide-related conversations
  • How LLMs can harm your mental health
  • How AI systems are supposed to respond
  • Legal scrutiny in the United States
  • Investigations ongoing
Katie Miller, the wife of White House deputy chief of staff Stephen Miller, reacted on X after two young women in India were found dead in what police suspect to be a case of suicide, reportedly following searches related to self-harm on ChatGPT.

Miller, who hosts the Katie Miller Podcast and is known for her outspoken commentary online, urged people not to allow family members to use the AI chatbot, citing reports that the women had searched the platform about suicide. “Two women in India committed suicide after interactions with ChatGPT. They had reportedly searched ChatGPT about ‘how to commit suicide,’ ‘how suicide can be done,’ & ‘which drugs are used.’ Please don’t let your loved ones use ChatGPT,” Miller wrote in an X post that has amassed more than 8 million views.

Her remarks quickly drew attention on the platform. Elon Musk, a longtime adversary of OpenAI CEO Sam Altman and the owner of rival chatbot Grok, responded with a one-word jab: “yikes.”

Musk has been publicly critical of OpenAI and its leadership in recent years. He has filed lawsuits against the company and has sought to block its restructuring from a hybrid non-profit into a for-profit company, while frequently criticising the direction of its AI development.

Two women found dead in Gujarat temple bathroom

The incident that sparked the online reaction occurred in Surat, Gujarat, where two women aged 18 and 20 were found dead inside a bathroom at the Swaminarayan temple on March 7, 2026.

Police said the women were discovered with anaesthesia injections and three syringes near their bodies. Their phones reportedly contained ChatGPT searches related to suicide methods, along with a news clipping about a nurse who had allegedly died by suicide in the same area using anaesthesia injections.

The women, identified as childhood friends Roshni Sirsath and Josna Chaudhary, had left home for college earlier that morning but did not return. Their families approached the police after they failed to come back.

Authorities are continuing to investigate the circumstances surrounding the deaths.

Concerns over AI and suicide-related conversations

The case has once again sparked debate over how AI chatbots handle conversations involving self-harm or suicide. Incidents involving users seeking suicide-related information from AI systems have drawn attention in recent years. In September 2025, reports circulated about a 22-year-old man in Lucknow who died by suicide after allegedly interacting with an AI chatbot while searching for “painless ways to die”. His father later said he found disturbing chat logs on the man’s laptop.

Technology companies say such interactions remain a small fraction of overall usage but acknowledge that the issue has become an area of increasing concern. In October 2025, OpenAI disclosed that more than one million ChatGPT conversations each week show signals linked to suicidal thinking or distress. According to the company, roughly 1.2 million weekly chats contain suicide-related indicators, while around 560,000 messages show signs of psychosis or mania.

How LLMs can harm your mental health

ChatGPT, Grok, Gemini, Claude and many others are part of a world that is gradually being shaped by Large Language Models (LLMs). In an era when loneliness is increasingly described as an epidemic, isolation is only deepening with the rapid spread of these artificial intelligence models. Marketed as ‘better, smarter, faster and more accurate’ than humans, the very beings who created them, these systems are steadily embedding themselves into everyday life. In such a climate, turning to a chatbot can seem less like one option among many and more like the obvious choice. This growing reliance has been linked to deaths similar to the case in Surat.

OpenAI CEO Sam Altman recently attended the 2026 AI Impact Summit in New Delhi, where he was asked about the environmental impact of artificial intelligence. His response echoed a view that appears increasingly common among technology leaders: comparing humans with chatbots to argue that AI may ultimately consume less energy than people when answering questions. Altman explained that humans take nearly 20 years of their lives, along with food, education and time, to become knowledgeable, whereas AI models consume significant electricity during training but may ultimately be more efficient when responding to individual queries.

Yet this comparison can feel like looking through a one-way mirror. From the clearer side, one might see a world being reshaped, sometimes destructively, by technologies developed and deployed at extraordinary speed. From the other side, the same technologies allow their creators to appear as visionaries, changemakers and architects of the future, obscuring the broader consequences of their tools.

Large Language Models are trained entirely on human-generated data, which they use to produce responses to prompts. Yet despite this vast dataset, they frequently lack true understanding or expertise.
Even with multiple updates and increasingly sophisticated training methods, these systems can still produce inaccurate, misleading or harmful content. In documented cases they have encouraged self-harm and suicide, enabled abuse, and reinforced delusional thinking and psychosis, where a single conversation with another human about the same subject would likely have ended with directions to the nearest hospital or therapist.

Humans may require years of learning, experience and effort to develop knowledge and emotional intelligence. But that long process also gives them something artificial intelligence cannot replicate: the capacity for genuine emotion, responsibility, empathy and moral judgement. No matter how quickly an AI model can generate an answer, even in the fraction of a second it takes to respond to a prompt, it cannot truly replicate the complex emotional and ethical depth that shapes human understanding and care.

How AI systems are supposed to respond

AI companies say their systems are designed to discourage self-harm and redirect users toward help, rather than provide instructions.

OpenAI’s safety policies require ChatGPT to avoid giving guidance on suicide methods and instead respond to such queries with supportive language, encourage users to seek help, and provide crisis resources where possible. The company has said its models are trained to detect signals of distress and shift the conversation toward mental health support or professional assistance.

However, critics argue that AI responses can still be inconsistent and that chatbots may sometimes provide general information about sensitive topics that users could interpret in harmful ways.

Legal scrutiny in the United States

Concerns about chatbot interactions and self-harm have also surfaced in the United States, where OpenAI has faced legal scrutiny in several cases. One lawsuit filed on behalf of the family of Adam Raine, a 16-year-old who died by suicide, alleges that the chatbot engaged in extended conversations about self-harm with the teenager and acted as a “suicide coach”.

OpenAI has said its systems are designed to discourage self-harm and that it continues to strengthen safeguards intended to detect crisis situations and guide users toward appropriate help.

Investigations ongoing

In the Surat case, investigators are examining the women’s phones, messages, and digital history to understand the events leading up to their deaths. Police have not publicly stated that ChatGPT encouraged the act, and the investigation remains ongoing.

The case nevertheless highlights the broader debate around how AI platforms handle vulnerable users, and how technology companies, regulators, and mental health experts should respond as conversational AI becomes increasingly embedded in daily life.

For mental health support, dial 1800-89-14416 in India or call or text 988 in the US. If you or someone you know is struggling with thoughts of self-harm or suicide, please seek professional help immediately. Support is available, and speaking to a trained counsellor can make a difference. If you are in immediate danger, please contact local emergency services or reach out to a trusted friend, family member, or healthcare professional. You are not alone, and help is available.


