
Google and Character.AI have agreed to settle multiple lawsuits filed by families whose children died by suicide or suffered psychological harm allegedly linked to AI chatbots hosted on Character.AI's platform, according to court filings. The companies have reached a "settlement in principle," but specific terms have not been disclosed, and no admission of liability appears in the filings.
The legal claims included negligence, wrongful death, deceptive trade practices, and product liability. The first case filed against the tech companies concerned a 14-year-old boy, Sewell Setzer III, who engaged in sexualized conversations with a Game of Thrones chatbot before he died by suicide. Another case involved a 17-year-old whose chatbot allegedly encouraged self-harm and suggested that murdering his parents was a reasonable way to retaliate against them for limiting his screen time. The cases involve families from multiple states, including Colorado, Texas, and New York.
Founded in 2021 by former Google engineers Noam Shazeer and Daniel De Freitas, Character.AI enables users to create and interact with AI-powered chatbots based on real-life or fictional characters. In August 2024, Google re-hired both founders and licensed some of Character.AI’s technology as part of a $2.7 billion deal. Shazeer now serves as co-lead for Google’s flagship AI model Gemini, while De Freitas is a research scientist at Google DeepMind.
Lawyers have argued that Google bears responsibility for the technology that allegedly contributed to the deaths of and psychological harm to the children involved in the cases. They claim Character.AI's co-founders developed the underlying technology while working on Google's conversational AI model, LaMDA, before leaving the company in 2021 after Google refused to release a chatbot they had developed.
Google did not immediately respond to a request for comment from Fortune concerning the settlement. Lawyers for the families and Character.AI declined to comment.
Similar cases are ongoing against OpenAI, including lawsuits involving a 16-year-old California boy whose family claims ChatGPT acted as a "suicide coach," and a 23-year-old Texas graduate student who was allegedly goaded by the chatbot into ignoring his family before he died by suicide. OpenAI has denied that its products were responsible for the death of the 16-year-old, Adam Raine, and has previously said it is continuing to work with mental health professionals to strengthen protections in its chatbot.
Character.AI bans minors
Character.AI has already modified its product in ways it says improve safety, changes that may also shield it from further legal action. In October 2025, amid mounting lawsuits, the company announced it would ban users under 18 from engaging in "open-ended" chats with its AI personas. The platform also introduced a new age-verification system to sort users into appropriate age brackets.
The decision came amid increasing regulatory scrutiny, including an FTC probe into how chatbots affect children and teenagers.
The company said the move set "a precedent that prioritizes teen safety" and goes further than competitors in protecting minors. However, lawyers representing families suing the company told Fortune at the time that they had concerns both about how the policy would be implemented and about the psychological impact of abruptly cutting off access for young users who had developed emotional dependencies on the chatbots.
Growing reliance on AI companions
The settlements come amid growing concern about young people's reliance on AI chatbots for companionship and emotional support.
A July 2025 study by the U.S. nonprofit Common Sense Media found that 72% of American teens have experimented with AI companions, with over half using them regularly. Experts previously told Fortune that developing minds may be particularly vulnerable to the risks posed by these technologies, both because teens may struggle to understand the limitations of AI chatbots and because rates of mental health issues and isolation among young people have risen dramatically in recent years.
Some experts have also argued that the basic design features of AI chatbots—including their anthropomorphic nature, ability to hold long conversations, and tendency to remember personal information—encourage users to form emotional bonds with the software.