    Parents sue OpenAI over ChatGPT’s role in son’s suicide

By Nancy G. Montemayor · August 26, 2025 · 2 min read
    Before sixteen-year-old Adam Raine died by suicide, he had spent months consulting ChatGPT about his plans to end his life. Now, his parents are filing the first known wrongful death lawsuit against OpenAI, the New York Times reports.

    Many consumer-facing AI chatbots are programmed to activate safety features if a user expresses intent to harm themselves or others. But research has shown that these safeguards are far from foolproof.

In Raine’s case, while he was using a paid version of ChatGPT-4o, the AI often encouraged him to seek professional help or contact a helpline. However, he was able to bypass these guardrails by telling ChatGPT that he was asking about methods of suicide for a fictional story he was writing.

    OpenAI has addressed these shortcomings on its blog. “As the world adapts to this new technology, we feel a deep responsibility to help those who need it most,” the post reads. “We are continuously improving how our models respond in sensitive interactions.”

Still, the company acknowledged the limitations of existing safety training for large models. “Our safeguards work more reliably in common, short exchanges,” the post continues. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”

    These issues are not unique to OpenAI. Character.AI, another AI chatbot maker, is also facing a lawsuit over its role in a teenager’s suicide. LLM-powered chatbots have also been linked to cases of AI-related delusions, which existing safeguards have struggled to detect.
