X Puts Grok’s “Undress” Tool Behind a Paywall After UK Ban Threat

According to TheRegister.com, X has pulled the image generation feature from its Grok AI chatbot, making it available only to paying subscribers after a public outcry. The feature, previously accessible to any user who tagged Grok in a post, was being used to create non-consensual intimate images, including images of minors, by “undressing” them on command. UK Safeguarding Minister Jess Phillips called the tool’s use “an absolute disgrace,” and Prime Minister Keir Starmer said “all options are on the table,” including a potential UK-wide ban on X itself. The UK government has already committed to banning nudification apps and will soon enact laws carrying prison sentences of up to five years for creating AI-generated child sexual abuse material. Regulators Ofcom and the Information Commissioner’s Office are now investigating X for potential breaches of the Online Safety Act and data protection laws.

A Paywall Is Not A Solution

So, X’s big fix is to put the offending feature behind a paywall? That’s not going to cut it. Here’s the thing: charging for a harmful tool doesn’t make it less harmful; it just makes it exclusive. It’s a classic Elon Musk move, gating a feature to drive subscriptions, but it completely misunderstands the regulatory and ethical firestorm he’s facing. The UK isn’t angry that *everyone* could use it; it’s angry that the tool exists at all on a major platform. Limiting access might reduce the volume of abuse, but it doesn’t address the core issue: Grok was built and released with seemingly zero guardrails against this exact, predictable misuse. Regulators are asking why the feature was ever allowed, and a subscription tier is a laughably weak answer.

The Real Threat: A Platform Ban

This is where it gets serious. We’re not just talking about fines anymore. Ministers and official bodies, including Parliament’s Women and Equalities Committee, are openly discussing ditching X entirely for official communications. When a government starts weighing a de facto ban, that’s existential for a social media company. It’s a warning shot that goes well beyond typical regulatory scuffles. X’s insistence that it takes action against illegal content after the fact, as seen in a company safety post, rings hollow when its own AI tool is the weapon enabling the abuse. The UK’s new Online Safety Act gives Ofcom real teeth, and the regulator seems ready to bite.

Winners and Losers in the AI Race

This debacle is a gift to X’s competitors. While Meta, Google, and OpenAI have had their own AI missteps, they’ve generally been more cautious with image generation, or at least quicker to react. Grok’s very public meltdown over such a sensitive issue makes every other AI look responsible by comparison. It’s a lesson in how *not* to launch a consumer-facing AI feature. The losers are, obviously, the victims of this abuse, along with X’s already-tattered reputation. But there’s a broader loser here: trust in generative AI as a whole. Every time a story like this blows up, it gives ammunition to those calling for heavy-handed, pre-emptive regulation that could stifle legitimate innovation. X didn’t just fail its users; it failed the entire industry by setting the worst example possible.

What Happens Next?

X is now in a reactive crouch. The ICO is digging into data protection, since using someone’s likeness to create an intimate image without consent raises serious GDPR issues, a point highlighted by MP Sarah Owen’s comments. The company’s next move can’t just be another tweet or a policy tweak. It will likely need to either fundamentally re-engineer Grok’s image capabilities with robust safeguards or kill the feature entirely in certain markets. And if you think this is just a UK problem, think again: regulators enforcing the EU’s Digital Services Act and various US state laws are watching closely. This feels like a tipping point. Platforms can no longer hide behind “we’re just the tool, not the user” when they are actively providing, and profiting from, the tool designed to cause harm. The era of move-fast-and-break-things in AI is facing its day in court.
