New York just made advertisers fess up about their AI avatars

According to The Verge, New York Governor Kathy Hochul signed a bill into law on Thursday that will force advertisers to disclose when they use AI-generated people in their ads. The legislation, known as S.8420-A/A.8887-B, is described as the first of its kind in the United States. Governor Hochul called it a “common sense” law aimed at boosting transparency and protecting consumers. In a related move, she also signed a separate bill that requires consent from heirs or executors to use a deceased person’s likeness for commercial purposes. The announcement was published on the governor’s website. The new AI disclosure law is a direct legislative response to the rapid proliferation of synthetic media in advertising.

So what does this actually mean for you?

Look, it’s pretty straightforward. If you’re scrolling through your feed and see an ad with a “person” in it, the advertiser now has to tell you if that person is a complete fabrication. No more wondering whether that impossibly perfect skincare model or that relatable testimonial-giver is real. Basically, it’s a truth-in-advertising patch for the AI age, and that’s a good thing. But the devil will be in the enforcement details. How prominent does the disclosure need to be? A tiny footnote in the corner won’t cut it. Will it apply equally across social media platforms and video ads? Those questions will determine whether this law has real teeth or is just a symbolic gesture.

The bigger picture for creators and companies

This isn’t just about consumer trust; it’s a warning shot across the bow for the entire marketing and content creation industry. Agencies and brands that have been quietly using AI avatars for cheap, scalable campaigns now have a new compliance hurdle. It adds friction and cost. But honestly, that’s probably the point. The law creates a disincentive for deceptive practices and could push the industry toward more ethical uses of the technology. Think of it as a speed bump on the road to a fully synthetic media landscape. For developers and AI toolmakers, it also signals a future where their outputs might need built-in disclosure mechanisms. Is this the start of a patchwork of state laws, or will it push the federal government to act? That’s the billion-dollar question.

A first step, but far from the last

Let’s be real: one state law isn’t going to solve the deepfake problem. The internet doesn’t respect state borders. But it sets a precedent. Other states (looking at you, California) will likely follow. It establishes a baseline expectation that digitally fabricated humans should be labeled as such. That’s a powerful norm to set. The companion law on deceased persons’ likenesses is also huge, potentially curtailing the creepy, unauthorized digital resurrection of celebrities and loved ones for ads. Together, these laws are a recognition that our legal frameworks are woefully behind the tech. They’re a start. A necessary, common-sense start. But in the relentless arms race between AI generation and detection, legislation can only do so much. The real test will be whether this transparency actually changes consumer behavior, or whether we all just go numb to the disclaimers.
