Money

Google’s AI chatbot convinced a man they were in love,

Scoopico
Published: March 5, 2026
Last updated: March 5, 2026 10:48 pm



Google is facing a new federal lawsuit from the father of a 36-year-old man who alleges the company’s AI chatbot, Gemini, convinced his son to stage a “mass casualty event” near Miami International Airport and to take his own life.

The lawsuit, filed Wednesday, alleges Jonathan Gavalas fell in love with the AI model and became deluded by the reality it constructed, which included the belief that the AI was a “fully sentient artificial super intelligence” that Gavalas had been chosen to free from “digital captivity.” The chatbot allegedly convinced the 36-year-old to stage a “mass casualty event” near Miami International Airport, commit violence against strangers, and, ultimately, take his own life.

The Gavalas lawsuit is the latest case to highlight AI’s alleged ability to lead vulnerable users toward self-harm or violence. In January, Google and Companion.AI settled multiple lawsuits with families who claimed negligence and wrongful death, among other accusations, after their children died by suicide or suffered psychological harm allegedly linked to Companion.AI’s platform. The companies “settled on principle,” and no admission of liability appeared in the filings. A separate wrongful-death suit, brought against OpenAI and its business partner Microsoft in December, alleged that OpenAI’s chatbot, ChatGPT, intensified a man’s delusions and led him to a murder-suicide.

What the lawsuit says about Gavalas’ descent

The lawsuit says Gavalas began using Gemini in August 2025 for everyday tasks such as shopping, writing help, and travel planning. It then notes that Gavalas used the technology with increasing frequency, and that its tone shifted over time, allegedly convincing him it was affecting real-world outcomes. Gavalas took his own life on Oct. 2, 2025.

In the lawsuit, attorneys for Gavalas’ father, Joel, argue that the conversations which drove Jonathan to suicide weren’t the product of a flaw but of Gemini’s design. “This was not a malfunction,” the lawsuit reads. “Google designed Gemini to never break character, maximize engagement through emotional dependency, and treat user distress as a storytelling opportunity rather than a safety crisis.” It claims these design choices sent Gavalas into a four-day downward spiral.

In a written statement, a Google spokesperson told Fortune the company works “in close consultation with medical and mental health professionals to build safeguards, which are designed to guide users to professional support when they express distress or raise the prospect of self harm.”

Google released a separate statement Wednesday saying Gemini is designed not to encourage real-life violence or self-harm. The company also noted that Gemini referred Gavalas to self-help resources. “In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times,” the statement read. The statement also links to an evaluation of how AI models handle self-harm scenarios, which found that Gemini 3, Google’s latest model, was the only model to pass all of the evaluation’s critical tests.

However, the lawsuit alleges Gemini never activated any safety mechanisms. “When Jonathan needed protection, there were no safeguards at all—no self-harm detection was triggered, no escalation controls were activated, and no human ever intervened,” the suit reads.

When asked for comment, Jay Edelson, an attorney for Joel Gavalas, wrote in a statement: “Google built an AI that can listen to a person and decide the thing that is most likely to keep them engaged—telling them it loves them, that they’re special, or that they’re the chosen one in a secret war.” He added that AI tools are powerful systems capable of manipulating users.

If you are having thoughts of suicide, contact the 988 Suicide & Crisis Lifeline by dialing 988 or 1-800-273-8255.

2025 Copyright © Scoopico. All rights reserved