
AI is everywhere in advertising right now, and audiences are starting to notice. Trust is taking a hit across the board, not because AI is inherently untrustworthy, but because nobody’s been clear about when and how it’s being used. The Interactive Advertising Bureau (IAB) released its “AI Transparency and Disclosure Framework” to start fixing that. It gives advertisers an actual standard to follow, instead of guessing.
In this week’s blog, we’re breaking down the state of AI in advertising and walking through the IAB’s framework: what it covers, what it skips, and what it means for advertisers running programmatic campaigns.
Why Universal Disclosure Doesn’t Work
When most people hear “AI,” they picture someone typing a prompt into ChatGPT and walking away. In practice, AI touches dozens of steps in the advertising process, many of them invisible to the audience.

According to eMarketer, roughly 61.3% of US consumers want a label on every piece of content that AI touched in any way. On the surface, that sounds reasonable. But universal disclosure creates three real problems.
Label Fatigue
If everything gets an AI label, the label stops meaning anything. Audiences tune it out fast. An AI warning on every piece of content has the same practical effect as no warning at all.
Nuanced AI Usage is Lost
Take this article. The author ran some statistics searches using AI tools. Every word was typed by hand. Universal disclosure rules would require an AI label anyway, treating that the same as a fully generated article. That’s not useful information for anyone.
The Implied Truth Effect
If AI-assisted content always gets flagged, audiences start assuming that anything without a label is 100% human-made. That’s not true, and it opens the door to manipulation rather than closing it. Misleading content existed long before AI.
What Do Consumers Want in AI Disclosure?
Whatever their personal feelings about AI, the general public clearly wants to know when companies are using it. A 2023 survey found that 94% of consumers want more transparency around AI use, especially in marketing and advertising.

The harder problem is that multiple studies have also shown that consumer trust, purchase intent, and brand preference drop once people know AI was involved in content creation. So advertisers are caught between two things that are both real: audiences want disclosure, and disclosure can hurt performance.
This is the exact tension the IAB is trying to address with its framework. The goal is a middle ground that keeps consumers informed without slapping a warning label on everything.
What the IAB Framework Actually Says
The IAB’s position is that AI disclosure shouldn’t be universal. Instead, it should be required any time the use of AI could reasonably mislead someone about what they’re seeing, hearing, or interacting with.
The framework offers one core test for advertisers to run before publishing: does your use of AI create a real risk that consumers will be misled?
If yes, disclose. If no, you don’t need a label.
When You Should Disclose
- Realistic images or videos generated from AI prompts, even if a human refined them afterward
- Digital “twins” of real people doing or saying things they never actually did
- Synthetic voices of deceased persons (with or without estate approval)
- Synthetic voices of living people making statements they never made
- AI chatbots or personas simulating human interaction
- Unedited copy generated from a prompt

Real-World Examples of When to Disclose
- A retailer uses prompts to generate pictures of furniture in various room settings
- A prompt-generated background scene, even with real people in the foreground
- A synthetic voice of a deceased person endorsing a political party, even with estate approval
- A generated image depicting a historical event
When You Don’t Have to Disclose
- AI used for post-production on recorded audio or video (breath removal, noise reduction, etc.)
- Clearly stylized, fantastical, or obviously non-human content
- Authorized synthetic voices of living people used in standard marketing
- Generic or non-celebrity synthetic voices
- Background soundscapes and ambient audio
- AI-powered text translation
- Incidental background figures in a scene
Real-World Examples of When Not to Disclose
- AI narrates a scenario using a generic synthetic voice that doesn’t imply any real person
- Crowds in the background of a filmed scene were digitally filled in
- Generating a cartoon mascot that is clearly non-human
- Cleaning up audio using AI tools
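The framework's core test and the category lists above can be sketched as a simple decision helper. This is a hypothetical illustration only: the category names, sets, and `needs_disclosure` function are our own shorthand, not part of the official IAB framework, and real campaigns will need human judgment that a lookup table can't capture.

```python
# Hypothetical sketch of the IAB framework's core disclosure test.
# Category names are illustrative shorthand, not official IAB terminology.

# Uses of AI the framework flags as potentially misleading
DISCLOSE = {
    "realistic_generated_media",   # realistic AI images/video, even if human-refined
    "digital_twin",                # real people depicted doing things they never did
    "synthetic_voice_deceased",    # with or without estate approval
    "synthetic_voice_fabricated",  # living people "saying" things they never said
    "chatbot_persona",             # AI simulating human interaction
    "unedited_generated_copy",     # copy published straight from a prompt
}

# Uses the framework treats as low-risk
NO_DISCLOSURE_NEEDED = {
    "post_production_cleanup",     # noise reduction, breath removal, etc.
    "stylized_nonhuman_content",   # cartoons, fantasy, obviously synthetic
    "authorized_living_voice",     # standard marketing with permission
    "generic_synthetic_voice",
    "ambient_audio",
    "translation",
    "incidental_background",
}

def needs_disclosure(ai_uses: set[str]) -> bool:
    """Disclose if any AI use could reasonably mislead the audience."""
    return bool(ai_uses & DISCLOSE)

# Digitally filled-in crowds plus audio cleanup -> no label needed
print(needs_disclosure({"incidental_background", "post_production_cleanup"}))  # False
# A digital twin of a real person -> disclose
print(needs_disclosure({"digital_twin"}))  # True
```

The point of the sketch is the shape of the test: one flagged use anywhere in the creative pipeline is enough to trigger disclosure, while any number of low-risk uses on their own is not.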
What’s the Takeaway for Advertisers?
At Genius Monkey, we think the IAB framework is a meaningful step forward, and we’d encourage advertisers to start applying these standards now rather than waiting for a legal reason to do so.
Ultimately, the goal of AI disclosure is to build trust with the consumer. Early disclosure may draw the ire of some consumers, but it will also earn transparency points from the rest. By taking the initiative, companies can build long-term trust and get ahead of regulations.

And that regulation is coming, sooner than many advertisers realize. Several states and federal bodies have passed, or are in the process of passing, legislation governing AI disclosure:
- New York has already passed laws requiring clear disclosure for AI actors
- The FTC is updating its truth-in-advertising rules, which penalize advertisers who present AI-generated content as human-created
- The “AI Ads Act” seeks to include AI-generation in the definition of “misrepresentation” in political campaigns
- The “Protecting Consumers from Deceptive AI Act” would require disclosure of substantial AI-generated visual or audio content
How Genius Monkey Approaches AI in Programmatic Advertising
We’ve been running fully managed programmatic advertising campaigns since 2009. Our approach to AI is the same as it’s been across every other shift in the industry: use it where it genuinely improves campaign performance, and be honest about how it’s being used.
AI plays a role in our bidding algorithms, targeting optimization, and performance analysis. That’s not AI replacing human judgment, but handling the volume of data that no human team could process in real time. Our team of programmatic experts makes the decisions that matter. We call it Quants with Human Oversight, and it’s been central to how we work since the beginning.
The results speak for themselves: in 2025, Genius Monkey users enjoy an average display CPC of $1.07, while the industry average is closer to $3. The combination of high-end tech, AI, and human expertise lets our platform users reach their ideal audience for less.
If you’re running programmatic campaigns and trying to figure out how to position AI use with your audience, that’s a conversation worth having. Get in touch with Genius Monkey today!

