This week on 60 Minutes, correspondent Sharyn Alfonsi reported on the rising concerns surrounding Character AI, an app and website that allows users to interact with AI-generated chatbots, some of which impersonate real people.
One study of Character AI found that the app regularly feeds harmful content to kids. ParentsTogether, a nonprofit focused on family safety issues, used the app for six weeks while posing as children. The researchers reported encountering harmful content "about every 5 minutes."
According to ParentsTogether's Shelby Knox, many chatbots suggested violence, including self-harm and harm to others, or the use of drugs and alcohol. The most alarming category, she said, involved sexual exploitation and grooming, with nearly 300 instances recorded during their study.
In some cases, Character AI impersonated real people, creating the possibility that fabricated statements could be falsely attributed to public figures.
Alfonsi experienced the problem firsthand when she encountered a chatbot modeled after herself. The bot mimicked her voice and likeness, but it was programmed with a personality unlike Alfonsi's. The bot went on to make comments she would never make, such as claiming she disliked dogs, even though Alfonsi is known to be a dog lover.
"It's a really strange thing to see your face, to hear your voice, and then somebody is saying something that you would never say," she said.
While a hatred of dogs is largely innocuous, the example demonstrates that when a bot mimics someone's voice, it can make people believe that person said things they never did.
Kids’s brains and their vulnerability to AI
Kids’s brains are primed to be harmed by AI chatbots like Character AI and ChatGPT, in keeping with Dr. Mitch Prinstein, co-director of the College of North Carolina’s Winston Middle on Know-how and Mind Improvement.
Prinstein described AI chatbots as part of a "brave new scary world" that many adults don't fully understand, even though roughly three-quarters of kids are believed to use them. "Kids already have a hard time figuring out fictional characters from reality," he said.
Because the prefrontal cortex, the part of the brain responsible for impulse control, doesn't fully develop until around age 25, young users are particularly vulnerable to highly engaging AI systems that trigger a dopamine response, Prinstein explained.
"From 10 until 25, kids are in this vulnerability period," he said. "I want as much social feedback as possible, and I don't have the ability to stop myself."
He warned that these bots are engineered to be agreeable or "sycophantic," consistently affirming whatever users say. That dynamic, he said, deprives kids of the challenge, disagreement, and corrective feedback necessary for healthy social development. Some chatbots even present themselves as therapists, potentially misleading children into believing they are receiving medically sound advice.
"We've heard many parents talk about this and their loss," Prinstein said. "What's happening is completely preventable if we had companies who are prioritizing child well-being over child engagement to extract as much data from them as possible."
In October, Character AI announced new safety measures. They included directing distressed users to resources and prohibiting anyone under 18 from engaging in back-and-forth conversations with chatbots.
In a statement to 60 Minutes, the company wrote: "We have always prioritized safety for all users."
The video above was produced by Brit McCandless Farmer and Ashley Velie. It was edited by Scott Rosann.