DV Cracks Down on Illegal Sites with New Default Blocking Tools

Photo by Daniel Reche on Pexels.com

Verification companies are often like Switzerland, trying not to “take a side” on an issue. But in this case, one had to.

After a critical Adalytics report last week revealed ads – mainly enabled by players like Amazon and Google – appearing alongside child sexual abuse material (CSAM) on the anonymous file-sharing site imgbb.com, verification company DoubleVerify has introduced new tools to help advertisers block, avoid, and monitor ad-supported sites that either traffic in illegal content or can be easily exploited to host it.

In a blog post on its Transparency Center, DV wrote:

“We are unwavering in our commitment to keeping digital advertising free from harmful and illegal content. The presence of child sexual abuse material (CSAM) or other forms of criminal content online is abhorrent, and even one impression running alongside this material is unacceptable. We all can be, and should be, part of the solution.”

DV says it has stepped up efforts to strengthen its tools, collaborate with industry and law enforcement partners, and roll out new solutions to give advertisers stronger, more proactive safeguards.

These new tools will be available to all industry partners, whether they’re DV customers or not.

The first is a “Highly Illicit: Do Not Monetize” content category—designed to help advertisers steer clear of domains flagged by trusted third-party experts like the National Center for Missing & Exploited Children (NCMEC). DV says it analyzed three years’ worth of publicly available NCMEC data and cross-referenced it with its own to build this category. (See this report as an example.) It’s now live across 100-plus media-buying platforms and partners, with additional sites to be added over time. To ensure broad protection, the category will be enabled by default, which is a big deal.

The second tool, first mentioned in DV’s initial response to the report, is a “P2P Sharing and Streaming” avoidance category. This lets brands block ads from appearing on peer-to-peer (P2P) sharing and streaming sites and apps—such as imgbb.com—that could be misused to distribute illegal or exploitative content. This category will also be turned on by default (another big deal).

The third measure isn’t a tool: DV says it’s also deepening its collaboration with law enforcement agencies and specialists focused on CSAM and other illegal content to help keep threat lists updated and improve protection.

Why This Matters:

Taking steps to cut off ad-supported funding of illegal, exploitative content online is a no-brainer. Ads should never appear alongside this kind of material. While pre-bid avoidance and post-bid monitoring are valuable tools, they aren’t perfect—pre-bid avoidance isn’t foolproof, and post-bid monitoring only flags issues after an ad has already run. This kind of full-scale blocking is a more direct way to stop the monetization of illegal content, whether it’s CSAM or other harmful material.
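To illustrate why the default-on behavior matters, here’s a minimal, hypothetical sketch of pre-bid blocking against default-enabled avoidance categories. The two category names come from DV’s announcement; everything else—the lookup table, the function name, the domain mapping—is an assumption for illustration, not DV’s actual implementation.

```python
# A minimal, hypothetical sketch of pre-bid category blocking.
# The category names mirror DV's announcement; the data structures
# and domain mapping below are illustrative assumptions only.

# Avoidance categories DV says will be enabled by default.
DEFAULT_ON_CATEGORIES = {
    "Highly Illicit: Do Not Monetize",
    "P2P Sharing and Streaming",
}

# Hypothetical lookup of flagged domains to the categories they fall under.
DOMAIN_CATEGORIES = {
    "imgbb.com": {"P2P Sharing and Streaming"},
}

def should_block_pre_bid(domain, opted_out=frozenset()):
    """Return True if an impression on `domain` should be blocked pre-bid.

    Default-on categories apply unless the buyer explicitly opts out
    (the inverse of the usual opt-in model, which is why the default matters).
    """
    active = DEFAULT_ON_CATEGORIES - set(opted_out)
    flagged = DOMAIN_CATEGORIES.get(domain, set())
    return bool(flagged & active)

print(should_block_pre_bid("imgbb.com"))    # True: blocked by default
print(should_block_pre_bid("example.com"))  # False: not in a flagged category
```

The design point is the default: an advertiser who does nothing still gets the protection, and escaping it requires an explicit opt-out rather than an opt-in buried in a settings menu.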

Experts React:

Here’s more from DV’s blog post:

“Our work is not done. The fight against illegal and harmful content online requires ongoing, collective action. We encourage all advertisers to implement the ‘P2P sharing and streaming’ category and the ‘Highly Illicit: Do Not Monetize’ category into their media buys.

“By working together — across advertisers, ad tech platforms, publishers, law enforcement, and third-party experts — we can make a meaningful impact in keeping digital advertising safe, responsible, and free from the monetization of harmful content. It is a constantly evolving challenge, but we understand the importance of our responsibility and the trust the marketplace has placed in us.”

DV also says it plans to continue investing in this space and release research on the monetization of illegal content online.

Our Take:

The fact that these protections will be enabled by default is a big deal. That said, one wonders how this will affect seemingly safer, popular ad-supported image-hosting sites like Imgur. (Ads help keep these services free or mostly free.) The ends may justify the means here, though, and platforms with stronger content enforcement policies or registration requirements for uploads could likely be added to brand allowlists if the brand truly wants to run there.
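To make the allowlist point concrete, here’s a purely hypothetical sketch of how an explicit brand allowlist could override a default-on block for a site the brand has vetted. The domain, its category assignment, and the function are invented for illustration and don’t reflect how DV or any platform actually resolves these conflicts.

```python
# Hypothetical sketch: a deliberate brand allowlist takes precedence
# over a default-on avoidance category. Domain and category assignment
# are invented for illustration.

DEFAULT_ON_CATEGORIES = {"P2P Sharing and Streaming"}

# Assume, hypothetically, that a vetted image host landed in the default-on category.
DOMAIN_CATEGORIES = {"images.example": {"P2P Sharing and Streaming"}}

def should_block(domain, allowlist=frozenset()):
    """Block flagged domains unless the brand has explicitly allowlisted them."""
    if domain in allowlist:  # an explicit brand decision wins
        return False
    return bool(DOMAIN_CATEGORIES.get(domain, set()) & DEFAULT_ON_CATEGORIES)

print(should_block("images.example"))                      # True: blocked by default
print(should_block("images.example", {"images.example"}))  # False: allowlist override
```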
