According to The Verge, YouTube has terminated two major channels, Screen Culture and KH Studio, for repeatedly violating its spam and misleading content policies with AI-generated fake movie trailers. Both channels had previously been suspended from the YouTube Partner Program, readmitted, and then caught reverting to their old tricks. A Deadline investigation earlier this year revealed how these operations worked, blending official movie clips with AI-generated imagery to create convincing but completely fabricated trailers. Some movie studios were even profiting from the slop by claiming the ad revenue the videos generated. YouTube spokesperson Jack Malon stated that after being allowed to monetize again, the channels immediately returned to clear policy violations, leading to their final termination. The channels’ pages have now vanished from the platform, taking their vast libraries of fake content with them.
The Slap on the Wrist That Failed
Here’s the thing that really gets me about this story. This wasn’t a first strike. YouTube had already suspended these channels once. It let them back into the monetization program, and of course they went right back to pumping out AI slop. It’s a perfect example of platform enforcement as a slow, confusing game of whack-a-mole. The channels clearly treated the initial suspension as just the cost of doing business, a temporary setback. And why wouldn’t they? The Deadline report from March showed there was real money in this. Studios were claiming the ad revenue! The financial incentive to cheat was baked right into the system.
A Symptom of a Bigger Problem
Look, banning these two channels is a good step. But let’s not pretend the problem is solved. This is just the visible tip of the spam iceberg. For every Screen Culture or KH Studio that gets big enough to be noticed, how many smaller ones are still churning away? The entire model is built on volume and algorithm gaming: pumping out content that’s just good enough to get clicks and watch time before anyone notices it’s fake. And with AI tools getting better and cheaper by the day, the barrier to entry for this kind of spam is practically zero. YouTube’s enforcement feels perpetually one step behind the spammers.
Who Really Loses Here?
So who gets hurt? Everyone, basically. Viewers get misled. Legitimate creators have to compete with low-effort, high-volume garbage that clogs up recommendations. The movie studios involved look greedy and foolish for trying to profit from it. And YouTube’s overall credibility takes a hit every time a story like this one from Deadline breaks. It creates a pervasive sense that the platform is a messy, unregulated Wild West. I think the bigger question is whether this termination will actually deter the next guy, or whether it just proves there’s a gold rush to be had until you get caught.
