Meta Boosts Brand Safety with New Third-Party Controls


It turns out Meta does indeed care about brand safety.

After weeks of debate over Meta’s commitment to brand safety, sparked by its decision to loosen content moderation policies across platforms like Facebook, the company is making its stance clearer.

In a blog post on Thursday, Meta reiterated its support of brand safety, stating: “Ensuring brand safety and suitability through our robust suite of tools for advertisers continues to be a priority for Meta, and we continue to invest in this area.”

As part of this commitment, Meta announced that third-party content block lists are now available for Facebook and Instagram Feed and Reels through DoubleVerify and Zefr, expanding beyond IAS, the initial test partner. (See the press releases from DV, Zefr, and IAS accompanying the announcement for more details.)

Meta further clarified: “Businesses will work directly with Meta Business Partners to determine which specific categories they may want to block.” The company also noted that partners can create block lists for any category, provided they comply with Meta’s Discriminatory Practices policy and “that all reporting remains consistent with standards previously set.”

Why This Matters:

The claim that Meta doesn’t care about brand safety was always a stretch. (Let’s not forget, Meta also recently launched brand safety features for Threads advertisers.)

Loosening content moderation doesn't automatically mean an influx of harmful content will be monetized with ads. It means there will be more content overall, and the tools Meta has built—along with third-party solutions from DoubleVerify, IAS, and Zefr—will likely become even more essential.

This tracks with what we wrote in January:

Yeah, Meta is introducing its own brand safety and suitability filtering on Threads and signaled that third-party verification is on the way. (Every platform has their own safety and suitability filtering, then layer on third-party given advertisers want independent auditing/grading.) These tools will work harder in a more freewheeling content environment but they’re still there and even more necessary now.

Experts React:

Some quotables from the DV, IAS, and Zefr releases:

First, Zefr: “As social media continuously evolves, Zefr’s partnership with Meta better ensures that brands are always equipped with cutting-edge tools to stay protected and aligned with their values.” 

Second, IAS: “Our Content Block Lists are improving media quality, helping advertisers safeguard and scale their campaigns.”

Third, DV: “This release will allow advertisers to proactively avoid content they deem unsuitable before their ads are served, enhancing brand impact across Meta’s platforms.”

Our Take:

Concerns over relaxed content moderation are certainly valid—it does make platforms feel less “safe” overall. But that doesn’t necessarily mean they’re unsafe for advertisers if the right tools (mostly) do their job.

Meta claims that 99% (!) of the content it monetizes is brand safe. That number sounds almost too good to be true, but if it’s based on filtered impressions—after blocking and suitability controls are applied—it could be accurate. (This does appear to be what this number means, based on a previous Business Insider report.)

