X blames users for Grok-generated CSAM; no fixes announced
No one knows how X plans to purge bad prompters
While some users are focused on how X can hold users responsible for Grok's outputs when X is the one training the model, others are questioning how exactly X plans to moderate illegal content that Grok seems capable of generating.
So far, X has been more transparent about how it moderates CSAM posted to the platform than about how it plans to handle Grok's outputs. Last September, X Safety reported that it has "a zero tolerance policy towards CSAM content," the majority of which is "automatically" detected using proprietary hash technology to proactively flag known CSAM.
Under this system, more than 4.5 million accounts were suspended last year, and X reported "hundreds of thousands" of images to the National Center for Missing and Exploited Children (NCMEC). The next month, X Head of Safety Kylie McRoberts confirmed that "in 2024, 309 reports made by X to NCMEC led to arrests and subsequent convictions in 10 cases," and in the first half of 2025, "170 reports led to arrests."
"When we identify apparent CSAM material, we act swiftly, and in the majority of cases permanently suspend the account which automatically removes the content from our platform," X Safety said. "We then report the account to the NCMEC, which works with law enforcement globally, including in the UK, to pursue justice and protect children."
At that time, X promised to "remain steadfast" in its "mission to eradicate CSAM," but if left unchecked, Grok's harmful outputs risk creating new kinds of CSAM that this system wouldn't automatically detect. On X, some users suggested the platform should expand its reporting mechanisms to help flag potentially illegal Grok outputs.
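X has not disclosed how its proprietary hash matching works, but the general limitation is straightforward: a hash-based filter can only flag images whose fingerprints already appear in a database of known, previously reported material, so a newly generated image matches nothing. Below is a minimal sketch of that idea, assuming a simple exact-hash blocklist rather than X's actual (undisclosed) system; the function and variable names are hypothetical.

```python
import hashlib

# Hypothetical blocklist of fingerprints for known, previously reported images.
# Production systems typically use perceptual hashing (e.g., PhotoDNA-style),
# which tolerates re-encoding and cropping, but the core idea is the same:
# the filter can only match material it has already seen.
KNOWN_HASHES = {
    "placeholder-fingerprint-1",  # illustrative entries, not real hashes
    "placeholder-fingerprint-2",
}


def fingerprint(image_bytes: bytes) -> str:
    """Return an exact cryptographic hash of the image bytes (illustrative only)."""
    return hashlib.sha256(image_bytes).hexdigest()


def is_known_material(image_bytes: bytes) -> bool:
    """Flag an image only if its fingerprint is already in the blocklist."""
    return fingerprint(image_bytes) in KNOWN_HASHES


# A freshly AI-generated image has never been hashed and reported before,
# so its fingerprint matches nothing and it passes this kind of filter
# until a human report adds it to the database.
new_image = b"bytes of a newly generated image"
print(is_known_material(new_image))  # False
```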
Another troublingly vague aspect of X Safety's response, some X users suggested, is how X defines illegal content or CSAM. Across the platform, not everybody agrees on what's harmful. Some critics are disturbed by Grok generating bikini images that sexualize public figures, including doctors and lawyers, without their consent, while others, including Musk, treat making bikini images as a joke.
Where exactly X draws the line on AI-generated CSAM could determine whether images are quickly removed and whether repeat offenders are detected and suspended. Any accounts or content left unchecked could potentially traumatize real kids whose images may be used to prompt Grok. And if Grok were ever used to flood the Internet with fake CSAM, recent history suggests it could become harder for law enforcement to investigate real child abuse cases.