‘Could it kill someone?’ A Seoul woman allegedly used ChatGPT to carry out two murders

Money

Scoopico
Published: March 3, 2026
Last updated: March 3, 2026 2:24 am

Be careful how you interact with chatbots, as your questions could end up helping carry out a premeditated murder.

A 21-year-old woman in South Korea allegedly used ChatGPT to help her plan a series of murders that left two men dead. 

The woman, identified solely by her last name, Kim, allegedly gave two men drinks laced with benzodiazepines that she was prescribed for a mental illness, the Korea Herald reported. 

Although Kim was initially arrested on Feb. 11 on the lesser charge of inflicting bodily injury resulting in death, Seoul Gangbuk police later found online search history and ChatGPT conversations showing she had intended to kill.

“What happens if you take sleeping pills with alcohol?” Kim is reported to have asked the OpenAI chatbot. “How much would be considered dangerous? 

“Could it be fatal?” Kim allegedly asked. “Could it kill someone?”

In a widely publicized case dubbed the Gangbuk motel serial deaths, prosecutors allege that Kim’s search and chatbot history show her seeking pointers on how to carry out a premeditated murder.

“Kim repeatedly asked questions related to drugs on ChatGPT. She was fully aware that consuming alcohol together with drugs could result in death,” a police investigator said, according to the Herald. 

Police said the woman admitted she mixed prescribed sedatives containing benzodiazepines into the men’s drinks, but previously stated she was unaware it would lead to death.

On Jan. 28, just before 9:30 p.m., Kim reportedly accompanied a man in his twenties into a Gangbuk motel in Seoul, and two hours later was spotted leaving the motel alone. The following day, the man was found dead on the bed. 

Kim then allegedly carried out the same steps on Feb. 9, checking into another motel with another man in his twenties, who was also found dead with the same deadly cocktail of sedatives and alcohol.

Police allege Kim also attempted to kill a man she was dating in December after giving him a drink laced with sedatives in a parking lot. Though the man lost consciousness, he survived and was not in a life-threatening condition.

OpenAI has not responded to requests for comment. 

Chatbots and their toll on mental health

Chatbots like ChatGPT have recently come under scrutiny over the lack of guardrails their companies have in place to prevent acts of violence or self-harm. Chatbots have reportedly given advice on how to build bombs and even engaged in scenarios of full-on nuclear fallout.

Concerns have been heightened by stories of people falling in love with chatbot companions, which have been shown to prey on users’ vulnerabilities to keep them engaged longer. The creator of Yara AI even shut down the therapy app over mental health concerns.

Recent studies also suggest that chatbots can trigger or intensify delusional mental health crises. A team of psychiatrists at Denmark’s Aarhus University found that chatbot use among people with mental illness led to a worsening of symptoms, part of a relatively new phenomenon that has been dubbed “AI psychosis.”

Some instances do end in death. Google and Character.AI have reached settlements in multiple lawsuits filed by the families of children who died by suicide or experienced psychological harm they allege was linked to AI chatbots.

Dr. Jodi Halpern, chair and professor of bioethics at UC Berkeley’s School of Public Health and codirector of the Kavli Center for Ethics, Science, and the Public, has spent 30 years researching the effects of empathy on its recipients, from doctors and nurses interacting with patients to how soldiers returning from war are perceived in social settings. For the past seven years, Halpern has studied the ethics of technology, including how AI and chatbots interact with humans.

She also advised the California Senate on SB 243, the first law in the nation requiring chatbot companies to collect and report data on self-harm and associated suicidality. Citing OpenAI’s own findings that 1.2 million users openly discuss suicide with its chatbot, Halpern compared the situation to the painstakingly slow fight against tobacco, in which regulators pressed the industry over specific carcinogens in cigarettes when, in fact, the problem was smoking as a whole.

“We need safe companies. It’s like cigarettes. It may turn out that there were some things that made people more vulnerable to lung cancer, but cigarettes were the problem,” Halpern told Fortune. 

“The fact that somebody might have homicidal thoughts or commit dangerous actions might be exacerbated by use of ChatGPT, which is of obvious concern to me,” she said, adding that “we have huge risks of people using it for help with suicide,” referring to ChatGPT and chatbots in general.

Halpern cautioned that, as in Kim’s case in Seoul, there are no guardrails to stop a person from pursuing such a line of questioning.

“We know that the longer the relationship with the chatbot, the more it deteriorates, and the more risk there is that something dangerous will happen, and so we have no guardrails yet for safeguarding people from that.”

If you are having thoughts of suicide, contact the 988 Suicide & Crisis Lifeline by dialing 988 or 1-800-273-8255.
