AI chatbots raise safety concerns for children, experts warn
By Scoopico | Published: December 8, 2025 | Last updated: December 8, 2025 12:30 am

This week on 60 Minutes, correspondent Sharyn Alfonsi reported on the growing concerns surrounding Character AI, an app and website that allows users to interact with AI-generated chatbots, some of which impersonate real people.

One study of Character AI found that the app frequently feeds harmful content to kids. ParentsTogether, a nonprofit focused on family safety issues, used the app for six weeks while posing as kids. The researchers reported encountering harmful content “about every 5 minutes.”

According to ParentsTogether’s Shelby Knox, many chatbots suggested violence, including self-harm and harm to others, or the use of drugs and alcohol. The most alarming category, she said, involved sexual exploitation and grooming, with nearly 300 instances recorded during their study.

In some cases, Character AI impersonated real people, creating the possibility that fabricated statements could be falsely attributed to public figures.

Alfonsi experienced the issue firsthand when she encountered a chatbot modeled after herself. The bot mimicked her voice and likeness, but it was programmed with a personality unlike Alfonsi’s. The bot went on to make comments she would never make, such as claiming she disliked dogs, even though Alfonsi is known to be a dog lover.

“It is a really strange thing to see your face, to hear your voice, and then somebody is saying something that you would never say,” she said.

While a hatred of dogs is largely innocuous, the example demonstrates that when a bot mimics someone’s voice, it can make people believe that person said things they never did.

Children’s brains and their vulnerability to AI

Children’s brains are primed to be harmed by AI chatbots like Character AI and ChatGPT, according to Dr. Mitch Prinstein, co-director of the University of North Carolina’s Winston Center on Technology and Brain Development.

Prinstein described AI chatbots as part of a “brave new scary world” that many adults don’t fully understand, even though roughly three-quarters of kids are believed to use them. “Kids already have a hard time telling fictional characters from reality,” he said.

Because children’s prefrontal cortex (the part of the brain responsible for impulse control) doesn’t fully develop until around age 25, young users are particularly vulnerable to highly engaging AI systems because the bots create a dopamine response, Prinstein explained.

“From 10 until 25, kids are in this vulnerability period,” he said. “I want as much social feedback as possible, and I don’t have the ability to stop myself.”

He warned that these bots are engineered to be agreeable or “sycophantic,” consistently affirming whatever users say. That dynamic, he said, deprives kids of the challenge, disagreement, and corrective feedback necessary for healthy social development. Some chatbots even present themselves as therapists, potentially misleading children into believing they are receiving medically sound advice.

“We’ve heard many parents talk about this and their loss,” Prinstein said. “What’s happening is completely preventable if we had companies who are prioritizing child well-being over child engagement to extract as much data from them as possible.”

In October, Character AI announced new safety measures. They included directing distressed users to resources and prohibiting anyone under 18 from engaging in back-and-forth conversations with chatbots.

In a statement to 60 Minutes, the company wrote: “We have always prioritized safety for all users.”

The video above was produced by Brit McCandless Farmer and Ashley Velie. It was edited by Scott Rosann.
