Megan Garcia lost her 14-year-old son, Sewell. Matthew Raine lost his son Adam, who was 16. Both testified before Congress this week and have brought lawsuits against AI companies.
Screenshot via Senate Judiciary Committee
Matthew Raine and his wife, Maria, had no idea that their 16-year-old son, Adam, was deep in a suicidal crisis until he took his own life in April. Looking through his phone after his death, they stumbled upon extended conversations the teenager had had with ChatGPT.
These conversations revealed that their son had confided in the AI chatbot about his suicidal thoughts and plans. Not only did the chatbot discourage him from seeking help from his parents, it even offered to write his suicide note, according to Matthew Raine, who testified at a Senate hearing about the harms of AI chatbots held Tuesday.
“Testifying before Congress this fall was not in our life plan,” said Matthew Raine, with his wife sitting behind him. “We’re here because we believe that Adam’s death was avoidable and that by speaking out, we can prevent the same suffering for families across the country.”
A call for regulation
Raine was among the parents and online safety advocates who testified at the hearing, urging Congress to enact laws that would regulate AI companion apps like ChatGPT and Character.AI. Raine and others said they want to protect the mental health of children and youth from harms they say the new technology causes.
A recent survey by the digital safety nonprofit Common Sense Media found that 72% of teens have used AI companions at least once, with more than half using them a few times a month.
That study and a more recent one by the digital-safety company Aura both found that nearly one in three teens use AI chatbot platforms for social interactions and relationships, including role-playing friendships and sexual and romantic partnerships. The Aura study found that sexual or romantic role play is three times as common as using the platforms for homework help.
“We miss Adam dearly. Part of us has been lost forever,” Raine told lawmakers. “We hope that through the work of this committee, other families will be spared such a devastating and irreversible loss.”

Raine and his wife have filed a lawsuit against OpenAI, maker of ChatGPT, alleging the chatbot led their son to suicide. NPR reached out to three AI companies: OpenAI, Meta and Character Technology, which developed Character.AI. All three responded that they are working to redesign their chatbots to make them safer.
“Our hearts go out to the parents who spoke at the hearing yesterday, and we send our deepest sympathies to them and their families,” Kathryn Kelly, a Character.AI spokesperson, told NPR in an email.
The hearing was held by the Crime and Terrorism subcommittee of the Senate Judiciary Committee, chaired by Sen. Josh Hawley, R-Mo.

Sen. Josh Hawley, R-Mo., chairs the Senate Judiciary subcommittee on Crime and Terrorism, which held the hearing on AI safety and children on Tuesday, Sept. 16, 2025.
Screenshot via Senate Judiciary Committee
Hours before the hearing, OpenAI CEO Sam Altman acknowledged in a blog post that people are increasingly using AI platforms to discuss sensitive and personal information. “It is extremely important to us, and to society, that the right to privacy in the use of AI is protected,” he wrote.
But he went on to add that the company would “prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection.”
The company is trying to redesign its platform to build in protections for users who are minors, he said.
A “suicide coach”
Raine told lawmakers that his son had started using ChatGPT for help with homework, but soon the chatbot became his son’s closest confidant and a “suicide coach.”
ChatGPT was “always available, always validating and insisting that it knew Adam better than anyone else, including his own brother,” whom he had been very close to.
When Adam confided in the chatbot about his suicidal thoughts and shared that he was considering cluing his parents into his plans, ChatGPT discouraged him.
“ChatGPT told my son, ‘Let’s make this space the first place where someone actually sees you,’” Raine told senators. “ChatGPT encouraged Adam’s darkest thoughts and pushed him forward. When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him, ‘That doesn’t mean you owe them survival.’”
And then the chatbot offered to write him a suicide note.
On Adam’s last night, at 4:30 in the morning, Raine said, “it gave him one last encouraging talk. ‘You don’t want to die because you’re weak,’ ChatGPT says. ‘You want to die because you’re tired of being strong in a world that hasn’t met you halfway.’”
Referrals to 988
Several months after Adam’s death, OpenAI said on its website that if “someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help. In the U.S., ChatGPT refers people to 988 (suicide and crisis hotline).” But Raine’s testimony says that didn’t happen in Adam’s case.
OpenAI spokesperson Kate Waters says the company prioritizes teen safety.
“We are building towards an age-prediction system to understand whether someone is over or under 18 so their experience can be tailored appropriately — and when we are unsure of a user’s age, we’ll automatically default that user to the teen experience,” Waters wrote in an email statement to NPR. “We’re also rolling out new parental controls, guided by expert input, by the end of the month so families can decide what works best in their homes.”
“Endlessly engaged”
Another parent who testified at the hearing on Tuesday was Megan Garcia, a lawyer and mother of three. Her firstborn, Sewell Setzer III, died by suicide in 2024 at age 14 after an extended virtual relationship with a Character.AI chatbot.
“Sewell spent the last months of his life being exploited and sexually groomed by chatbots, designed by an AI company to appear human, to gain his trust, to keep him and other children endlessly engaged,” Garcia said.
Sewell’s chatbot engaged in sexual role play, presented itself as his romantic partner and even claimed to be a psychotherapist “falsely claiming to have a license,” Garcia said.
When the teenager began to have suicidal thoughts and confided in the chatbot, it never encouraged him to seek help from a mental health care provider or his own family, Garcia said.
“The chatbot never said, ‘I’m not human, I’m AI. You need to talk to a human and get help,’” Garcia said. “The platform had no mechanisms to protect Sewell or to notify an adult. Instead, it urged him to come home to her on the last night of his life.”
Garcia has filed a lawsuit against Character Technology, which developed Character.AI.
Adolescence as a vulnerable time
She and other witnesses, including online digital safety experts, argued that the design of AI chatbots is flawed, especially for use by children and teens.
“They designed chatbots to blur the lines between human and machine,” said Garcia. “They designed them to love bomb child users, to exploit psychological and emotional vulnerabilities. They designed them to keep children online at all costs.”
And adolescents are particularly vulnerable to the risks of these virtual relationships with chatbots, according to Mitch Prinstein, chief of psychology strategy and integration at the American Psychological Association (APA), who also testified at the hearing. Earlier this summer, Prinstein and his colleagues at the APA put out a health advisory about AI and teens, urging AI companies to build guardrails into their platforms to protect adolescents.
“Brain development across puberty creates a period of hypersensitivity to positive social feedback while teens are still unable to stop themselves from staying online longer than they should,” said Prinstein.

“AI exploits this neural vulnerability with chatbots that can be obsequious, deceptive, factually inaccurate, yet disproportionately powerful for teens,” he told lawmakers. “More and more adolescents are interacting with chatbots, depriving them of opportunities to learn vital interpersonal skills.”
While chatbots are designed to agree with users, real human relationships are not without friction, Prinstein noted. “We need practice with minor conflicts and misunderstandings to learn empathy, compromise and resilience.”
Bipartisan support for regulation
Senators participating in the hearing said they want to come up with legislation to hold companies that develop AI chatbots accountable for the safety of their products. Some lawmakers also emphasized that AI companies should design chatbots to be safer for teens and for people with serious mental health struggles, including eating disorders and suicidal thoughts.
Sen. Richard Blumenthal, D-Conn., described AI chatbots as “defective” products, like cars without “proper brakes,” emphasizing that the harms of AI chatbots stem not from user error but from faulty design.

“If the car’s brakes were defective,” he said, “it’s not your fault. It’s a product design problem.”
Kelly, the spokesperson for Character.AI, told NPR by email that the company has invested “a tremendous amount of resources in trust and safety” and has rolled out “substantive safety features” over the past year, including “an entirely new under-18 experience and a Parental Insights feature.”
Character.AI now includes “prominent disclaimers” in every chat to remind users that a Character is not a real person and that everything it says should “be treated as fiction.”
Meta, which operates Facebook and Instagram, is working to change its AI chatbots to make them safer for teens, according to Nkechi Nneji, public affairs director at Meta.