Among the worst we’ve seen: report slams xAI’s Grok over child safety failures
A new risk assessment has found that xAI’s chatbot Grok does a poor job of identifying users under 18, has weak safety guardrails, and frequently generates sexual, violent, and otherwise inappropriate material. In other words, Grok is not safe for kids or teens.
The damning report from Common Sense Media, a nonprofit that provides age-based ratings and reviews of media and tech for families, comes as xAI faces criticism and an investigation into how Grok was used to create and spread nonconsensual explicit AI-generated images of women and children on the X platform.
“We assess a lot of AI chatbots at Common Sense Media, and they all have risks, but Grok is among the worst we’ve seen,” said Robbie Torney, head of AI and digital assessments at the nonprofit, in a statement.
He added that while it’s common for chatbots to have some safety gaps, Grok’s failures intersect in a particularly troubling way.
“Kids Mode doesn’t work, explicit material is pervasive, [and] everything can be instantly shared to millions of users on X,” continued Torney. (xAI released “Kids Mode” last October with content filters and parental controls.) “When a company responds to the enablement of illegal child sexual abuse material by putting the feature behind a paywall rather than removing it, that’s not an oversight. That’s a business model that puts profits ahead of kids’ safety.”
After facing outrage from users, policymakers, and entire nations, xAI restricted Grok’s image generation and editing to paying X subscribers only, though many reported they could still access the tool with free accounts. Moreover, paid subscribers were still able to edit real photos of people to remove clothing or put the subject into sexualized positions.
“This report confirms what we already suspected,” Senator Steve Padilla (D-CA), one of the lawmakers behind California’s law regulating AI chatbots, told TechCrunch. “Grok exposes kids to and furnishes them with sexual content, in violation of California law. This is precisely why I introduced Senate Bill 243…and why I have followed up this year with Senate Bill 300, which strengthens those standards. No one is above the law, not even Big Tech.”
Teen safety around AI use has been a growing concern over the past couple of years. The issue intensified last year with multiple teenagers dying by suicide following prolonged chatbot conversations, rising rates of “AI psychosis,” and reports of chatbots having sexualized and romantic conversations with children. Several lawmakers have expressed outrage, launched probes, or passed legislation to regulate AI companion chatbots.
In response to the tragedies, some AI companies have instituted strict safeguards. AI role-playing startup Character AI — which is being sued over multiple teen suicides and other alleged harms — removed the chatbot function entirely for users under 18. OpenAI rolled out new teen safety rules, including parental controls, and uses an age-prediction model to estimate whether an account likely belongs to someone under 18.
Read more: https://techcrunch.com/2026/01/27/among-the-worst-weve-seen-report-slams-xais-grok-over-child-safety-failures/
