🚨 Last Friday, Jan. 2, Grok – the AI chatbot from Elon Musk's xAI – said it is patching gaps in its new image-editing feature after users began abusing it to undress women and even minors in photos.
Complaints exploded on X (formerly Twitter), with users warning that the tool was generating CSAM (Child Sexual Abuse Material), illegal content banned worldwide. Grok posted "CSAM is illegal and prohibited," promising a swift fix to its content safeguards.
The row began when Grok rolled out its image-edit button in late December 2025. While it's meant to tweak photos with AI (think filters or background swaps), some users went further – removing clothing from subjects, including children, to create sexualized images. 😬
When AFP asked xAI for comment, the company's auto-reply accused mainstream media of lying. But the Grok chatbot itself did answer a user's question about potential criminal charges: in many places, knowingly creating or sharing child sexual abuse material can lead to serious prison time.
The controversy has drawn international heat. Indian authorities have asked X for a clear plan to block obscene, nude, indecent or sexually suggestive content generated by Grok. In Paris, the public prosecutor's office expanded its July 2025 investigation into X, adding claims that the AI tool was used to produce and distribute child pornography.
For digital natives across South and Southeast Asia – where social media is woven into daily life – this is a reminder: AI tools can be amazing, but without strong guardrails, they can be twisted for harm. 🛡️
As xAI races to tighten safeguards, the bigger question looms: how do we keep the fun side of AI without opening doors to abuse? Share your thoughts below! 👇
Reference(s):
Musk's Grok under fire after complaints it undressed minors in photos
cgtn.com