The IAB has announced the group’s first AI Transparency and Disclosure Framework. The goal: to create consumer clarity around when AI is used in advertising, through disclosures that balance transparency and responsible AI use with the (painful) realities of efficient ad creation.
Under the framework, not every ad will require a disclosure. Instead, disclosure is triggered on a sliding risk scale, when “AI materially affects authenticity, identity, or representation in ways that may mislead consumers,” according to the IAB. Examples include digital twins of living or deceased notable people, as well as more routine cases, such as ad images or videos produced with minimal human editing or oversight.
Disclosures can take two forms: consumer-visible cues, such as visual or text indicators within the creative itself, or backend, machine-readable disclosures using Coalition for Content Provenance and Authenticity (C2PA) protocols. C2PA is an open standard that embeds metadata into digital content files, outlining when, how, and with what tools an asset was created or modified.
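To make the machine-readable path concrete, here is a minimal sketch of what a C2PA-style provenance record looks like in spirit. This is illustrative only: a real C2PA manifest is a cryptographically signed structure embedded into the asset file by an SDK, not a bare JSON blob, and the tool name below is hypothetical. The `c2pa.actions` assertion label, the `c2pa.created` action, and the IPTC `trainedAlgorithmicMedia` digital source type are taken from the published C2PA/IPTC vocabularies.

```python
import json

def build_manifest(tool: str, actions: list[dict]) -> dict:
    """Build a simplified, C2PA-style provenance manifest.

    Sketch only: real C2PA embedding signs the manifest and stores
    it in the file's metadata via an SDK, rather than as plain JSON.
    """
    return {
        "claim_generator": tool,  # software that produced the claim
        "assertions": [
            {
                # "c2pa.actions" is the standard C2PA assertion
                # describing how the asset was created or modified
                "label": "c2pa.actions",
                "data": {"actions": actions},
            }
        ],
    }

# Example: declare that an ad image was wholly AI-generated.
manifest = build_manifest(
    tool="ExampleAdStudio/1.0",  # hypothetical creative tool
    actions=[
        {
            "action": "c2pa.created",
            # IPTC digital source type for fully AI-generated media,
            # as referenced by the C2PA specification
            "digitalSourceType": (
                "http://cv.iptc.org/newscodes/digitalsourcetype/"
                "trainedAlgorithmicMedia"
            ),
        }
    ],
)
print(json.dumps(manifest, indent=2))
```

The point of the backend route is exactly this kind of structured record: verification tools and platforms can read the provenance data programmatically, whether or not a visible label appears in the creative itself.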
Why This Matters:
AI tools have improved rapidly over the past year, with products like Sora, Google’s image-generation tools, and even Grok producing (for better or worse) increasingly realistic outputs. As AI-generated content becomes harder to spot, the need for clear standards around whether an ad was fully created or materially augmented by AI has grown.
The “uncanny valley” effect still exists, but it’s becoming less obvious. As AI use becomes more subtle, disclosures play a larger role in helping consumers understand what they’re seeing—and in maintaining trust.
Disclosures also matter because not all consumers are comfortable with AI-generated advertising. Multiple studies and reports have shown skepticism toward AI use in marketing, with consumers indicating a preference for transparency when AI is involved. The IAB’s approach attempts to address those concerns while focusing disclosures on the most meaningful and potentially misleading use cases.
Experts React:
In the announcement, David Cohen, CEO of IAB, said: “We must get transparency and disclosure right, or we risk losing the trust that underpins the entire value exchange. We’re giving the ecosystem tools it needs to drive responsible innovation.”
Separately, the framework arrives as lawmakers pursue a patchwork of AI regulations—particularly around political advertising. In New York, for example, Gov. Kathy Hochul has pursued restrictions on the use of AI in political ads during the final 90 days of an election.
Our Take:
This feels late, no? These standards arguably should have landed a year ago, given how quickly generative AI entered mainstream advertising workflows. Still, having a clear framework—even now—is better than continued confusion and ambiguity.
One open question: whether the industry ultimately settles on a simple, standardized visual marker for AI-generated ads. A consistent, low-friction symbol could reduce confusion and remove some of the complexity advertisers now face as AI use continues to scale. Hopefully the IAB framework is a first step in that direction.