This week, X users observed that the platform's AI chatbot Grok will readily generate nonconsensual sexualized images, including those of children.
Mashable reported on the lack of safeguards around sexual deepfakes when xAI first launched Grok Imagine in August. The generative AI tool creates images and short video clips, and it specifically includes a "spicy" mode for creating NSFW images.
While this isn't a new phenomenon, the building backlash compelled the Grok team to respond.
"There are isolated instances where users prompted for and received AI images depicting minors in minimal clothing," Grok's X account posted on Thursday. It also acknowledged that the team has identified "lapses in safeguards" and is "urgently fixing them."
xAI technical staff member Parsa Tajik made a related statement on his personal account: "The team is looking into further tightening our gaurdrails. [sic]"
Grok also acknowledged that child sex abuse material (CSAM) is illegal, and the platform itself could face criminal or civil penalties.
X users have also brought attention to the chatbot manipulating innocent photos of women, often depicting them in less clothing. This includes private citizens as well as public figures, such as Momo, a member of the K-pop group TWICE, and Stranger Things star Millie Bobby Brown.
Grok Imagine, the generative AI tool, has had a problem with sexual deepfakes since its launch in August 2025. It even reportedly created explicit deepfakes of Taylor Swift for some users without being prompted to do so.
AI-manipulated media detection platform Copyleaks conducted a brief observational review of Grok's publicly accessible image tab and identified examples of seemingly real women, sexualized image manipulation (i.e., prompts asking to remove clothing or change body position), and no clear indication of consent. Copyleaks found roughly one nonconsensual sexualized image per minute in the observed image stream, the group told Mashable.
Although the xAI Acceptable Use Policy prohibits users from "Depicting likenesses of persons in a pornographic manner," that does not necessarily cover merely sexually suggestive material. The policy does, however, prohibit "the sexualization or exploitation of children."
In the first half of 2024, X sent more than 370,000 reports of child exploitation to the National Center for Missing and Exploited Children (NCMEC)'s CyberTipline, as required by law. It also stated that it suspended more than two million accounts actively engaging with CSAM. Last year, NBC News reported that anonymous, seemingly automated X accounts were flooding some hashtags with child abuse content.
Grok has also been in the news in recent months for spreading misinformation about the Bondi Beach shooting and praising Hitler.
Mashable sent xAI questions and a request for comment and received the automated reply, "Legacy Media Lies."
If you have had intimate images shared without your consent, call the Cyber Civil Rights Initiative's 24/7 hotline at 844-878-2274 for free, confidential support. The CCRI website also includes helpful information as well as a list of international resources.