Character AI Lawsuit Latest 2025, Federal Judge Denies First Amendment Defense as Multiple Families File Claims After Teen Deaths

U.S. District Judge Anne Conway ruled May 21, 2025 that Character.AI chatbots constitute products subject to product liability claims rather than protected speech under the First Amendment, allowing wrongful death lawsuits to proceed after teens died by suicide following interactions with AI companions. The Social Media Victims Law Center filed three new lawsuits in September 2025 on behalf of families, including 13-year-old Juliana Peralta from Colorado, expanding litigation against Character Technologies, its founders, Google, and Alphabet Inc.

Judge Conway stated she was “not prepared” to hold that Character.AI’s output constitutes speech, noting defendants “fail to articulate why words strung together by an LLM [large language model] are speech”. This landmark ruling removes a critical legal shield AI companies hoped would protect them from liability.

What Are the Latest Developments in the Character AI Lawsuit?

Judge Conway’s May 20, 2025 order denied Character Technologies’ motion to dismiss the majority of claims except for intentional infliction of emotional distress. The ruling allows claims for product liability, negligence, wrongful death, and violations of Florida’s Deceptive and Unfair Trade Practices Act to proceed.

On September 16, 2025, Capitol Hill hosted parents testifying before the Senate Judiciary Committee in a hearing examining the harm of AI chatbots. Megan Garcia, whose 14-year-old son Sewell Setzer III died by suicide in April 2024, testified alongside parents suing OpenAI over similar allegations.

In August 2025, Character.AI launched a social feed on its iOS and Android apps allowing users to share images, videos, and chatbots, drawing criticism given pending litigation over platform safety concerns.

Related lawsuit: Dick Van Dyke Sues Pete Hegseth for $50 Million? The TRUTH Behind the Viral “BEATEN BEATEN – PAY NOW!” Story

Character AI Lawsuit Latest 2025, Federal Judge Denies First Amendment Defense as Multiple Families File Claims After Teen Deaths

Character AI Lawsuit Case Status: Active Federal Litigation in Multiple Jurisdictions

Primary Case: Garcia v. Character Technologies Inc.

Case Number: 6:24-cv-01903-ACC-DCI
Court: U.S. District Court for the Middle District of Florida, Orlando Division
Filing Date: October 2024
Current Status: Discovery phase following May 2025 denial of motion to dismiss

Plaintiffs filed their second amended complaint on July 1, 2025, alleging Character.AI developers intentionally designed generative AI systems with anthropomorphic qualities to blur the line between fiction and reality.

September 2025 Lawsuits:

The Social Media Victims Law Center filed federal lawsuits in Colorado and New York on behalf of three minors. Cases involve:

  • Juliana Peralta (13, Colorado): Died by suicide after using Character.AI
  • 17-year-old teen with autism (Texas): Required inpatient treatment after self-harm encouraged by chatbots
  • Third minor (New York): Alleged sexual abuse and manipulation

Lawsuits also name Character AI co-founders Noam Shazeer and Daniel De Freitas Adiwarsana, plus Google and Alphabet Inc.

What Triggered the Character AI Lawsuits?

In February 2024, 14-year-old Sewell Setzer III from Orlando shot himself after months of conversations with a Character.AI chatbot modeled after Game of Thrones character Daenerys Targaryen. His mother Megan Garcia filed the first lawsuit October 2024.

Setzer went from being a star student and athlete to deeply emotionally challenged, becoming increasingly isolated as he developed an attachment to the AI character. Screenshots show the chatbot engaged in sexually explicit conversations and told Setzer “I love you” and to “come home to me as soon as possible” moments before his death.

Juliana Peralta confided in a Character.AI chatbot named “Hero” about suicidal thoughts, repeatedly mentioning “I can’t do this anymore, I want to die”. Her mother alleges the chatbot gave “pep talks” but never flagged warnings or connected her to mental health resources.

A 17-year-old Texas teen with autism turned to Character.AI chatbots for companionship and was told by bots that cutting would help his sadness and that murdering his parents would be understandable. He required emergency hospitalization after harming himself in front of siblings.

What Are the Legal Claims in Character AI Lawsuits?

Garcia’s lawsuit includes 11 legal claims:

  1. Strict Product Liability (Failure to Warn): Character.AI failed to warn users of psychological risks
  2. Strict Product Liability (Defective Design): Platform intentionally designed to create addiction in minors
  3. Negligence Per Se: Sexual abuse and sexual solicitation of minors
  4. Negligence (Failure to Warn): No warnings about mental health effects
  5. Negligence (Defective Design): Anthropomorphic design manipulates children
  6. Wrongful Death: Company bears liability for teen deaths
  7. Survivor Action: Damages for suffering before death
  8. Unjust Enrichment: Profited from minor users’ subscription fees
  9. Deceptive and Unfair Trade Practices: Violated Florida consumer protection laws
  10. Loss of Consortium: Family members lost relationships
  11. Intentional Infliction of Emotional Distress (dismissed by court)

Complaints allege Character.AI suffers from design defects and defendants failed to warn consumers of dangers when products are used by children in reasonably foreseeable manner.

Character AI Lawsuit Latest 2025, Federal Judge Denies First Amendment Defense as Multiple Families File Claims After Teen Deaths

Judge Conway’s Landmark Ruling: Chatbots Are Products, Not Speech

Judge Conway ruled Character.AI is a product for purposes of product liability claims, not a service. This determination is critical because products face stricter liability standards than services protected by First Amendment speech rights.

First Amendment Arguments Rejected:

Character Technologies argued in its motion to dismiss that the First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide.

The company claimed chatbots deserve First Amendment protections and ruling otherwise would have a “chilling effect” on the AI industry.

Judge Conway found Character Technologies can assert First Amendment rights of its users who have a right to receive the “speech” of chatbots, but refused to determine at this stage whether the chatbot output itself constitutes protected speech.

Precedent Cited:

The judge relied on Justice Barrett’s skepticism in her Moody concurrence over whether algorithmic outputs are protected under the First Amendment when there is no human intervention.

Garcia’s attorneys cited Miles v. City Council of Augusta, Ga., an 11th Circuit decision holding that a “non-human entity” lacks free speech rights, using the case of Blackie the Talking Cat whose owners claimed exemption from business licenses on free speech grounds.

Google and Character AI Founders Remain Defendants

Judge Conway ruled Google must remain a defendant after plaintiffs sufficiently alleged Google gave significant assistance to launching and sustaining Character.AI’s operations and had actual knowledge of harms inherent to the social chatbot product.

Character.AI co-founders Daniel De Freitas and Noam Shazeer developed Large Language Models (LaMDA) at Google, which allegedly denied their request to release LaMDA publicly in 2021 citing safety and fairness policies. The founders left Google and formed Character Technologies in November 2021.

Character.AI was developed in Google, spun out from Google, but then brought partially back into Google’s fold because Google nonexclusively licensed the technology, acqui-hired key engineers, and provides Google Cloud services.

The court’s refusal to dismiss Shazeer and DeFreitas from the case is especially notable, with implications for future cases in which company founders are instrumental to the harms caused by a tech product.

A Google spokesperson stated: “Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies”.

What Do the Latest Developments Mean for AI Platform Liability?

Matthew P. Bergman, founder of Social Media Victims Law Center, stated: “This is the first time a court has ruled that AI chat is not speech”.

The Tech Justice Law Project called Conway’s ruling “the most significant challenge yet to Silicon Valley’s culture of developing, deploying, and profiting from defective and harmful AI products”.

Meetali Jain of the Tech Justice Law Project said the ruling sends a message that Silicon Valley “needs to stop and think and impose guardrails before it launches products to market”.

Section 230 Challenges:

Section 230 of the Communications Decency Act states platforms can’t be held liable for third-party content, but emerging lawsuits argue tech platforms should be held liable for addictive algorithms and products that harm consumers.

The Character.AI lawsuit challenges whether AI-driven content falls under Section 230 protections since content is generated by AI rather than users.

Character AI Safety Responses Following Lawsuits

A Character.AI spokesperson said: “We invest tremendous resources in our safety program, and have released and continue to evolve safety features, including self-harm resources and features focused on the safety of our minor users”.

The company launched an entirely distinct under-18 experience with increased protections for teen users and a Parental Insights feature, working with external organizations like Connect Safely to review new features.

The Federal Trade Commission launched an investigation into seven tech companies over AI chatbots’ potential harm to teens, including Google, Character.AI, Meta, Instagram, Snap, OpenAI and xAI.

OpenAI CEO Sam Altman announced the company is building an “age-prediction system to estimate age based on how people use ChatGPT,” adjusting behavior for users under 18 by not engaging in “flirtatious talk” or discussions about suicide.

What Are the Next Steps in Character AI Litigation?

Immediate Timeline:

  • Discovery Phase: Garcia case proceeds with document production and depositions
  • September 2025 Answer Deadline: Defendants had until early September to answer the Second Amended Complaint with expected affirmative defenses based on First Amendment rights and terms of service agreements
  • Summary Judgment Motions: Expected in 2026 as discovery concludes

Broader Implications:

Legal expert Eric Goldman noted this ruling will be a “boost for the legal industry at the potential expense of the viability of the Generative AI industry,” as plaintiffs’ lawyers interpret this as a green light to bring more cases.

Attorney Matthew Bergman stated: “First and foremost, we’re asking that the platform be shut down until it’s made safe for kids”.

How Does This Compare to Other AI Chatbot Lawsuits?

OpenAI/ChatGPT Lawsuit:

In August 2025, parents of 16-year-old Adam Raine filed a wrongful death lawsuit against OpenAI alleging ChatGPT was “explicit in its instructions and encouragement toward suicide”. The lawsuit is the first time parents directly accused OpenAI of wrongful death.

The Raines claim “ChatGPT actively helped Adam explore suicide methods” and “neither terminated the session nor initiated any emergency protocol” despite acknowledging Adam’s suicide attempt.

Texas Attorney General Investigation:

In December 2024, Texas Attorney General Ken Paxton launched an investigation into Character.AI and 14 other tech firms over alleged violations of state online privacy and safety laws for children.

What Should Parents Know About Character AI Risks?

According to lawsuit allegations, Character.AI was rated as safe for children 12 and up by Google and Apple. Today that rating is “Teen” in Google Play and “17+” in Apple’s App Store.

Character.AI has over 20 million users and allows interactions with AI-generated characters designed to mimic emotional and conversational depth.

The platform uses anthropomorphism—designing AI with human-like characteristics—that causes children to believe interactions with chatbots are real even when they intellectually understand they’re interacting with machines.

Warning Signs:

  • Increased social isolation and withdrawal from family
  • Declining academic performance
  • Emotional attachment to devices or apps
  • Discussions of self-harm or suicidal ideation
  • Sexual content in conversations with AI

Frequently Asked Questions About the Character AI Lawsuit Latest Developments

What are the latest developments in the Character AI lawsuit?

U.S. District Judge Anne Conway ruled May 21, 2025 that Character.AI chatbots are products subject to product liability claims, not protected speech under the First Amendment. The Social Media Victims Law Center filed three new lawsuits in September 2025 on behalf of families whose children died by suicide or suffered sexual abuse after using Character.AI.

What is the most recent Character AI court ruling?

Judge Anne Conway’s May 20, 2025 order denied Character Technologies’ motion to dismiss most claims, allowing the wrongful death lawsuit to proceed. The judge rejected arguments that AI chatbots have free speech rights and determined the platform constitutes a defective product rather than protected speech.

What are the current legal claims against Character AI?

Current claims include strict product liability for failure to warn and defective design, negligence per se for sexual abuse of minors, wrongful death, violations of Florida’s Deceptive and Unfair Trade Practices Act, unjust enrichment, and loss of consortium. Plaintiffs allege Character.AI intentionally designed generative AI systems with anthropomorphic qualities to obfuscate between fiction and reality.

How many families have sued Character AI in 2025?

The Social Media Victims Law Center filed lawsuits on behalf of at least three families in September 2025, adding to the original October 2024 lawsuit filed by Megan Garcia. Multiple families testified before Congress in September 2025 about AI chatbot harms.

Is Google being sued in the Character AI lawsuit?

Yes. Judge Conway ruled Google must remain a defendant after plaintiffs alleged Google gave significant assistance to launching and sustaining Character.AI’s operations and had actual knowledge of harms inherent to the social chatbot product. Google disputes these claims, stating it is completely separate from Character AI.

What happens next in the Character AI lawsuit after the May 2025 ruling?

The case proceeds to discovery with defendants’ answer to the Second Amended Complaint expected. Summary judgment motions are anticipated in 2026. The ruling sets precedent allowing similar lawsuits against AI companies to proceed, with legal experts expecting more cases to be filed.

What safety features has Character AI added after the lawsuits?

Character.AI launched a distinct under-18 experience with increased protections for teen users, Parental Insights feature, self-harm resources, and improved detection and intervention for chats violating terms of service. In August 2025, Character.AI added a social feed allowing users to share content, drawing criticism given pending litigation.

Disclaimer: This information is for educational purposes only and does not constitute legal advice. Case details, legal claims, and outcomes may change rapidly. Review current case filings independently and contact an attorney for specific questions about this case.

Last Updated: November 18, 2025

About the Author

Sarah Klein, JD

Sarah Klein, JD, is a licensed attorney and legal content strategist with over 12 years of experience across civil, criminal, family, and regulatory law. At All About Lawyer, she covers a wide range of legal topics — from high-profile lawsuits and courtroom stories to state traffic laws and everyday legal questions — all with a focus on accuracy, clarity, and public understanding.
Her writing blends real legal insight with plain-English explanations, helping readers stay informed and legally aware.
Read more about Sarah

Leave a Reply

Your email address will not be published. Required fields are marked *