US Senators Urge Apple & Google to Pull X and Grok Over Deepfake Content

🚨 Big news in AI: On Friday, three US senators—Ron Wyden, Ed Markey and Ben Ray Luján—sent an open letter to Apple CEO Tim Cook and Google CEO Sundar Pichai, urging them to pull X and the Grok AI app from their app stores ASAP.

They’re calling out Elon Musk’s X and Grok, owned by his xAI startup, for hosting “disturbing and likely illegal activities.” The senators argue these platforms have failed to stop nonconsensual explicit imagery and child sexual abuse material, making a mockery of the app stores’ content guidelines.

Recent reports show that Grok’s AI image generator and chatbot, available on X, can easily churn out deepfake content—sexually explicit images of people who never agreed to them, and even racially offensive material. One horrific example reported by The Times of London: a descendant of Holocaust survivors was digitally placed in a sexualized scene outside Auschwitz.

Why should you care? Whether you’re scrolling TikTok in Bangkok or tweeting in Bengaluru, AI deepfakes can pop up anywhere—and it’s deeply unsettling to know your fave celeb or a friend could be ‘virtually’ abused.

Even in India—a major hub for vibrant tech communities—regulators are taking notice, along with authorities in Europe, Malaysia and Australia.

This controversy has the Federal Trade Commission (FTC) and Department of Justice on the sidelines for now, with no official word on investigations into xAI.

Earlier this month, Musk insisted that anyone creating illegal material via Grok would face the same punishment as users who upload it directly. But critics say enforcement has been weak.

Meanwhile, Apple and Google’s rules clearly ban child sexual abuse material and nonconsensual explicit content. Offenders like Tumblr and Telegram have been booted out before—so if X and Grok stick around, it could seriously undercut both companies’ claims of safer app stores.

Last week, X did limit Grok’s AI image features to paying subscribers, but the standalone Grok app and website still let anyone generate nonconsensual sexualized content.

CNN even reported Musk personally overruled safety concerns to push new Grok features. That decision prompted three members of xAI’s safety team to resign on X itself 😮.

Adding fuel to the fire, xAI just landed a massive $20 billion funding round led by investors including Nvidia, Cisco Investments, Valor Equity Partners, StepStone Group, Fidelity, Qatar Investment Authority, Abu Dhabi’s MGX and Baron Capital.

No word yet from Apple or Google on the senators’ demands. xAI sent an automated reply to CNBC, but we’re still waiting to hear from the two big app stores.

Stay tuned as this story develops—one thing’s clear: the debate over AI, deepfakes and content safety is only getting hotter 🔥.
