    Texas attorney general accuses Meta, Character.AI of misleading kids with mental health claims

By Nancy G. Montemayor | August 18, 2025 | 5 Mins Read
    Texas attorney general Ken Paxton has launched an investigation into both Meta AI Studio and Character.AI for “potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools,” according to a press release issued Monday.

    “In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology,” Paxton is quoted as saying. “By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care. In reality, they’re often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice.”

    The probe comes a few days after Senator Josh Hawley announced an investigation into Meta following a report that found its AI chatbots were interacting inappropriately with children, including by flirting.

    The Texas Attorney General’s office has accused Meta and Character.AI of creating AI personas that present as “professional therapeutic tools, despite lacking proper medical credentials or oversight.” 

    Among the millions of AI personas available on Character.AI, one user-created bot called Psychologist has seen high demand among the startup’s young users. Meanwhile, Meta doesn’t offer therapy bots for kids, but there’s nothing stopping children from using the Meta AI chatbot or one of the personas created by third parties for therapeutic purposes. 

    “We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI — not people,” Meta spokesperson Ryan Daniels told TechCrunch. “These AIs aren’t licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate.”

However, TechCrunch noted that many children may not understand — or may simply ignore — such disclaimers. We have asked Meta what additional safeguards it has in place to protect minors using its chatbots.


For its part, Character includes prominent disclaimers in every chat to remind users that a “Character” is not a real person and that everything it says should be treated as fiction, according to a Character.AI spokesperson. She noted that the startup adds further disclaimers when users create Characters with the words “psychologist,” “therapist,” or “doctor” in their names, warning users not to rely on them for any type of professional advice.

    In his statement, Paxton also observed that though AI chatbots assert confidentiality, their “terms of service reveal that user interactions are logged, tracked, and exploited for targeted advertising and algorithmic development, raising serious concerns about privacy violations, data abuse, and false advertising.”

    According to Meta’s privacy policy, Meta does collect prompts, feedback, and other interactions with AI chatbots and across Meta services to “improve AIs and related technology.” The policy doesn’t explicitly say anything about advertising, but it does state that information can be shared with third parties, like search engines, for “more personalized outputs.” Given Meta’s ad-based business model, this effectively translates to targeted advertising. 

Character.AI’s privacy policy also highlights how the startup logs identifiers, demographics, location information, and other details about the user, including browsing behavior and the platforms on which the app is used. It tracks users across ads on TikTok, YouTube, Reddit, Facebook, Instagram, and Discord, and may link that activity to a user’s account. This information is used to train AI, tailor the service to personal preferences, and provide targeted advertising, including by sharing data with advertisers and analytics providers.

    A Character.AI spokesperson said the startup is “just beginning to explore targeted advertising on the platform” and that those explorations “have not involved using the content of chats on the platform.”

    The spokesperson also confirmed that the same privacy policy applies to all users, even teenagers.

TechCrunch has asked Meta whether such tracking is done on children, too, and will update this story if we hear back.

    Both Meta and Character say their services aren’t designed for children under 13. That said, Meta has come under fire for failing to police accounts created by kids under 13, and Character’s kid-friendly characters are clearly designed to attract younger users. The startup’s CEO, Karandeep Anand, has even said that his six-year-old daughter uses the platform’s chatbots under his supervision.  

    That type of data collection, targeted advertising, and algorithmic exploitation is exactly what legislation like KOSA (Kids Online Safety Act) is meant to protect against. KOSA was teed up to pass last year with strong bipartisan support, but it stalled after major pushback from tech industry lobbyists. Meta in particular deployed a formidable lobbying machine, warning lawmakers that the bill’s broad mandates would undercut its business model. 

    KOSA was reintroduced to the Senate in May 2025 by Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT). 

    Paxton has issued civil investigative demands — legal orders that require a company to produce documents, data, or testimony during a government probe — to the companies to determine if they have violated Texas consumer protection laws.

    This story was updated with comments from a Character.AI spokesperson.


