Brand Safety Institute: Is AI Creative a ‘Reputational Risk’?

Image: "AI slop," via Wikipedia

It’s a good question, one posed by AJ Brown, COO of the Brand Safety Institute (who formerly led brand safety efforts at Twitter/X), in a post-Cannes wrap-up on the BSI website. The piece covers a few topics, but the most striking is AJ’s argument that "AI slop," and an overreliance on AI for creative, could produce an uncanny-valley effect in which everything feels slightly off, creating a new kind of brand safety risk.

Here’s what AJ asks:

“It raises a cautionary question, which feels appropriate to end with as marketers begin integrating these shiny new content creation tools: if audiences start viewing AI use in ads as “slop,” could an overreliance on synthetic creative become a reputational risk to brands in its own right?”

This smart question raises others. Where, ultimately, is the line between "slop" and high-quality synthetic creative, or even just standard AI-generated creative? As consumers become more attuned to AI creative, will that shift what they consider slop? How do disclosures factor in? Is AI content more off-putting when it isn’t disclosed, or does disclosure itself shape what feels appropriate?

AJ adds:

“If authenticity is the currency of trust, brands may need to apply the same standards to their own creative that they demand of the content surrounding it.”

Why This Matters:

AI slop is widely seen as a problem. But not all AI-generated content falls into that bucket. Text, images, and video created by AI are getting better—fast—and widespread use in campaigns is all but inevitable.

But how do consumers actually feel about synthetic content? Some research suggests they don’t mind it, especially when it’s done well; other research points to the importance of disclosure. For now, though, this remains uncharted territory: adoption is scaling fast, but the public response is hard to pin down. (And, let’s be honest, most studies on this topic come from companies with a vested interest in promoting AI creative in some way, shape, or form.)

Experts React:

On the topic of “AI slop” specifically—not necessarily high-quality synthetic content—AJ notes:

“Many vendors and platforms now offer filters to detect and demonetize this content, and many marketers are eager to avoid it.”

While much of the conversation here centers on the role of AI in creative and whether overreliance poses a risk, make no mistake: AI slop is already a brand safety concern. A flood of low-quality AI media is hitting both the open web and walled gardens. It’s an equal-opportunity challenge, and the industry is still figuring out how to manage it.

Our Take:

What we’re seeing, or will likely see, isn’t just a quality issue; it’s a perception issue. AI can produce incredible creative work, but the second something looks or "feels" like it was churned out by a machine, it risks being dismissed as lazy, cheap, or inauthentic. That’s the brand safety threat: not the tool itself, but the erosion of trust that comes from careless or excessive use. To be clear, this isn’t a call to avoid AI. It’s a reminder that creative still needs to feel like craft.

(What this means for the wave of AI creative startups, or even for companies like Meta that seem hell-bent on automating most creative development, at least for SMBs, is unclear.)
