
New Research Shows How ‘Brand Safety’ Algorithms Hurt Publishers

By marking important "hard news" topics as unsafe but failing to detect disinformation, brand safety algorithms can create worrying incentives for news publishers.
Illustration: Francis Scialabba


Keep up with the innovative tech transforming business

Tech Brew keeps business leaders up-to-date on the latest innovations, automation advances, policy shifts, and more, so they can make informed decisions about tech.

To try to keep their logos from appearing next to unsavory internet content, 85% of advertisers use “brand safety” tech. It leverages tools like natural language processing to analyze the content of web pages, and pages deemed unsafe can be blocked from serving ads.

The problem?

Advertisers commonly use these tools to parse context at an article level, rather than at the publisher level. Experts say that practice isn’t backed up by evidence, and it can produce funky classifications. Some new findings, c/o Adalytics research and Branded:

  • An Oracle tool marked more articles on WSJ, Wired, and The Economist as unsafe than it did on disinformation-friendly outlet OANN.
  • Coverage of the pandemic, civil unrest, and politics was disproportionately flagged as unsafe.
  • The study found no evidence that the tools can screen for disinformation.

And a separate 2019 study concluded that news publishers lost $3.2 billion to overzealous brand safety tech. Stories about rugby and Avengers: Endgame were demonetized for using words like “attack,” for instance.
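To see how that kind of misfire happens, here's a minimal sketch of a naive keyword-blocklist filter. This is a hypothetical illustration, not any vendor's actual algorithm; the keyword list and function names are invented for the example. Real tools are more sophisticated, but the failure mode is the same: a word like "attack" trips the filter regardless of context.

```python
# Hypothetical sketch of a naive keyword-based "brand safety" check.
# Illustrative only -- not how Oracle or any real vendor classifies pages.

UNSAFE_KEYWORDS = {"attack", "shooting", "death"}  # invented blocklist

def is_brand_safe(article_text: str) -> bool:
    """Return False if any blocklisted word appears, ignoring context."""
    words = {w.strip('.,!?"\u201c\u201d').lower() for w in article_text.split()}
    return words.isdisjoint(UNSAFE_KEYWORDS)

rugby_story = "The flanker led a ferocious attack in the second half."
recipe_story = "Whisk the eggs, then gently fold in the flour."

print(is_brand_safe(rugby_story))   # False: "attack" trips the filter
print(is_brand_safe(recipe_story))  # True
```

A context-blind match like this is exactly why a sports recap can lose its ads while the filter says nothing about whether the page's claims are actually true.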

Bottom line: By marking "hard news" topics as unsafe while failing to detect disinformation, brand safety algorithms can create worrying incentives for news publishers.
