
Google Training Ad Placement Computers to Be Offended

MOUNTAIN VIEW, Calif. — Over the years, Google trained computer systems to keep copyrighted content and pornography off its YouTube service. But after seeing ads from Coca-Cola, Procter & Gamble and Wal-Mart appear next to racist, anti-Semitic or terrorist videos, its engineers realized their computer models had a blind spot: They did not understand context.

Now teaching computers to understand what humans can readily grasp may be the key to calming fears among big-spending advertisers that their ads have been appearing alongside videos from extremist groups and other offensive messages.

Google engineers, product managers and policy wonks are trying to train computers to grasp the nuances of what makes certain videos objectionable. Advertisers may tolerate use of a racial epithet in a hip-hop video, for example, but may be horrified to see it used in a video from a racist skinhead group.

That ads bought by well-known companies can occasionally appear next to offensive videos has long been considered a nuisance to YouTube’s business. But the issue has gained urgency in recent weeks, as The Times of London and other outlets have written about brands that inadvertently fund extremists through automated advertising — a byproduct of a system in which YouTube shares a portion of ad sales with the creators of the content those ads appear against.

This glitch in the company’s giant, automated process turned into a public-relations nightmare. Companies like AT&T and Johnson & Johnson said they would pull their ads from YouTube, as well as Google’s display advertising business, until they could get assurances that such placement would not happen again.

Consumers watch more than a billion hours of video on YouTube every day, making it the dominant video platform on the internet and an obvious beneficiary as advertising money moves online from television. But the recent problems opened Google to criticism that it was not doing enough to look out for advertisers. It is a significant problem for a multibillion-dollar company that still gets most of its revenue through advertising.

“We take this as seriously as we’ve ever taken a problem,” Philipp Schindler, Google’s chief business officer, said in an interview last week. “We’ve been in emergency mode.”

Over the last two weeks, Google has changed what types of videos can carry advertising, barring ads from appearing with hate speech or discriminatory content.

In addition, Google is simplifying how advertisers can exclude specific sites, channels and videos across YouTube and Google’s display network. It is allowing brands to fine-tune the types of content they want to avoid, such as “sexually suggestive” or “sensational/bizarre” videos.

It is also putting in more stringent safety standards by default, so an advertiser must actively choose to place ads next to more provocative content. Google has also created an expedited way for advertisers to alert it when ads appear next to offensive content.
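Google has not published the data model behind these controls, but conceptually an advertiser's brand-safety settings amount to exclusion lists plus content-category filters that are checked before an ad becomes eligible to run against a video. The sketch below is a hypothetical Python representation of that check; all names and categories are invented for illustration and are not Google's actual ad-platform API.

```python
from dataclasses import dataclass, field

# Hypothetical brand-safety settings; field names are invented for illustration,
# not Google's actual ad-platform API.
@dataclass
class BrandSafetySettings:
    excluded_channels: set = field(default_factory=set)
    excluded_videos: set = field(default_factory=set)
    # Stricter defaults: provocative categories are excluded unless the advertiser opts in.
    excluded_categories: set = field(default_factory=lambda: {
        "sexually_suggestive", "sensational_bizarre"
    })

def ad_is_eligible(settings: BrandSafetySettings, video: dict) -> bool:
    """Return True if an ad may run against this video under the advertiser's settings."""
    if video["channel_id"] in settings.excluded_channels:
        return False
    if video["video_id"] in settings.excluded_videos:
        return False
    # Any overlap between the video's flagged categories and the exclusion list blocks the ad.
    return not (set(video["categories"]) & settings.excluded_categories)

# Example: under the default settings, a video flagged as sensational/bizarre is blocked.
settings = BrandSafetySettings()
video = {"channel_id": "ch_123", "video_id": "vid_456", "categories": ["sensational_bizarre"]}
print(ad_is_eligible(settings, video))  # False
```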

The Silicon Valley giant is trying to reassure companies like Unilever, the world’s second-largest advertiser, with a portfolio of consumer brands like Dove and Ben & Jerry’s. As other brands started fleeing YouTube, Unilever discovered three instances in which its brands appeared on objectionable YouTube channels.

[Image: A Google project is bringing machine-learning techniques to bear on the problem of identifying content on its YouTube service that advertisers might find inappropriate. Credit: Dado Ruvic/Reuters]

But Keith Weed, chief marketing officer of Unilever, decided not to withdraw its ads because the number of ads appearing with objectionable content was proportionally small. The average $100,000 YouTube campaign runs across more than 7,000 channels, according to the video analytics firm OpenSlate, and Unilever spends hundreds of millions of dollars on YouTube. Also, Google discovered that the ads in question appeared because of a human error in setting safety levels.

Mr. Weed said it was in Unilever’s best interest to win concessions from Google instead of cutting ties. As part of its new measures, Google agreed to work with outside companies to provide third-party verification about where ads appeared on YouTube.

When he broached the idea of independent verification in the past, Mr. Weed said, Google executives acted as though he had suggested the company was not trustworthy. He said the issue was not about trust, but about companies being able to “mark their own homework.” He said he thought that Google would have agreed eventually, but that “the current situation accelerated their plans.”

Google’s efforts are being noticed. Johnson & Johnson, for example, said it had resumed YouTube advertising in a number of countries. Google said other companies were starting to return.

For the most part, Google failed to address the issue adequately before because it did not have to; the instances in which ads appeared next to objectionable content happened infrequently and out of view from the broader public. Google said that for many of its top advertisers, the objectionable videos accounted for fewer than one one-thousandth of a percent of their total ad impressions.

To train the computers, Google is applying machine-learning techniques — the underlying technology for many of its biggest breakthroughs, like the self-driving car. It has also brought in large human teams (it declined to say how big) to review the appropriateness of videos that computers flagged as questionable.

Essentially, they are training computers to recognize footage of a woman in a sports bra and leggings doing yoga poses as an exercise video that is safe for advertising, not as sexually suggestive content. Similarly, they will mark video of a Hollywood action star waving a gun as acceptable to some advertisers, while flagging a similar image involving an Islamic State gunman as inappropriate.
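In machine-learning terms, the same visual signal has to be scored differently depending on contextual signals such as the channel, transcript and description. The toy Python sketch below, with invented feature names and weights that do not reflect Google's actual classifiers, illustrates the idea: a gun in a studio action trailer and a gun in extremist footage produce the same image-level feature but different outcomes once context is weighed.

```python
# Toy illustration of context-dependent labeling; the features and weights are
# invented for this sketch and do not reflect Google's actual classifiers.
def advertiser_risk_score(features: dict) -> float:
    """Combine an image-level signal with contextual signals into a single risk score."""
    score = 0.0
    score += 0.3 if features.get("weapon_visible") else 0.0               # image signal alone is weak evidence
    score += 0.6 if features.get("extremist_keywords_in_transcript") else 0.0
    score += 0.5 if features.get("channel_previously_flagged") else 0.0
    score -= 0.4 if features.get("studio_entertainment_channel") else 0.0  # context can lower the risk
    return score

action_trailer = {"weapon_visible": True, "studio_entertainment_channel": True}
extremist_clip = {"weapon_visible": True, "extremist_keywords_in_transcript": True,
                  "channel_previously_flagged": True}

THRESHOLD = 0.5  # above this, the video is withheld from most advertisers
for name, feats in [("action trailer", action_trailer), ("extremist clip", extremist_clip)]:
    print(name, advertiser_risk_score(feats) > THRESHOLD)
# action trailer False, extremist clip True
```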

Google used a similar approach in the past to create an automated rating system for videos, similar to movie ratings, based on appropriateness of content for specific audiences. But Google is now trying to solve a different problem.

“Computers have a much harder time understanding context, and that’s why we’re actually using all of our latest and greatest machine learning abilities now to get a better feel for this,” Mr. Schindler said.

Armed with human-verified examples of what is safe and what is not, Google’s computer systems break down the images of a YouTube video frame by frame, analyzing every image. They also digest what is being said, the video’s description from the creator and other signals to detect patterns and identify subtle cues for what makes a video inappropriate.
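Google has not detailed the architecture, but the process described above maps onto a standard multi-signal pipeline: sample frames from the video, extract features from the images, the audio transcript and the creator's metadata, then hand the combined representation to a classifier trained on the human-verified examples. A minimal sketch follows, with placeholder extraction functions standing in for the vision, speech and text models the article does not describe.

```python
# Minimal sketch of multi-signal feature fusion; the extraction functions are
# placeholders standing in for models the article does not describe.
def image_features(frames: list) -> dict:
    # A real system would run a vision model over sampled frames.
    return {"frames_analyzed": len(frames)}

def transcript_features(transcript: str) -> dict:
    # A real system would use speech recognition plus text classification.
    return {"transcript_words": len(transcript.split())}

def metadata_features(description: str) -> dict:
    # Creator-supplied description and other metadata are additional signals.
    return {"description_words": len(description.split())}

def fused_features(video: dict) -> dict:
    """Combine per-signal features into one representation for a downstream classifier."""
    features = {}
    features.update(image_features(video["frames"]))
    features.update(transcript_features(video["transcript"]))
    features.update(metadata_features(video["description"]))
    return features

video = {"frames": ["frame0", "frame1", "frame2"],
         "transcript": "welcome to today's beginner yoga session",
         "description": "A 20-minute beginner yoga routine"}
print(fused_features(video))
```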

The idea is for machines to eventually make the tough calls. In the instances when brands feel that Google failed to flag an inappropriate video, that example is fed back into the system so it improves over time. Google said it had already flagged five times as many videos as inappropriate for advertising, although it declined to provide absolute numbers on how many videos that entailed.
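That correction loop is, in effect, ongoing collection of labeled mistakes: each video an advertiser reports as a miss becomes a new training example the next time the system is updated. A hypothetical sketch of that bookkeeping is below; the class and method names are invented, and the retraining step itself is stubbed out.

```python
# Hypothetical sketch of the advertiser-feedback loop; names are invented for
# illustration and the actual retraining step is stubbed out.
class InappropriateContentModel:
    def __init__(self):
        self.training_examples = []  # (features, label) pairs from human-verified data

    def record_advertiser_report(self, video_features: dict):
        """A video the model cleared but an advertiser flagged becomes a new labeled example."""
        self.training_examples.append((video_features, "inappropriate_for_ads"))

    def retrain(self):
        # Placeholder: a real system would refit the classifier on the enlarged training set.
        print(f"retraining on {len(self.training_examples)} example(s)")

model = InappropriateContentModel()
model.record_advertiser_report({"channel_id": "ch_789", "categories": ["hate_speech"]})
model.retrain()  # retraining on 1 example(s)
```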

With more than a billion videos on YouTube, 400 hours of new content being uploaded every minute and three million ad-supported channels on the platform, Mr. Schindler said it was impossible to guarantee that Google could eradicate the problem completely. He made a comparison to how a car company could not promise that even a new tire would never fail in the first 10,000 miles.

“No system can be 100 percent perfect,” he said. “But we’re working as hard as we can to make it as safe as possible.”

A version of this article appears in print in Section B, Page 1 of the New York edition with the headline: Google Trains Ad Computers to Be Offended.
