YouTube finally realized neo-Nazis are bad for business

If you’re a neo-Nazi or religious extremist on YouTube, it’s going to be a lot tougher to host your stuff on the internet’s largest video network.

YouTube on Thursday began rolling out a set of tools and changes to minimize religious and racial extremist content, a move first reported by Bloomberg. Though tech companies have been booting white supremacists from their platforms since the violent rally in Charlottesville, YouTube’s new policies have their roots in the “brand safety” scandal from earlier this year.

In March and April, at least 250 companies pulled their ads from YouTube after journalists reported that their ads were showing up next to hate speech. These advertisers included blue-chip brands like Walmart, Verizon, Johnson & Johnson, and Pepsi. At the time, one analyst estimated that the scandal could cost Google $300 million in net revenue by the end of 2017.

In all, the brand safety crisis caused direct advertising spending on YouTube to decrease by 26 percent year-over-year in the second quarter of 2017 — compared to an average increase of 18 percent among its competitors — according to the ad data tracking firm Standard Media Index.

The brand safety changes rolling out this week were first outlined by Google’s general counsel, Kent Walker, in a June Financial Times op-ed. Walker said then that there were four major changes Google was making in how it would handle hateful and extremist content, some of which were apparently introduced prior to Thursday:

  • A pledge to “devote more engineering resources” to automatically flagging and removing such videos. These tools were actually implemented at the beginning of August and have already drawn scrutiny: YouTube “inadvertently” removed thousands of videos documenting atrocities in the Middle East, the New York Times reported on Tuesday.
  • Strengthening the “Trusted Flagger Program” by “adding 50 expert NGOs to the 63 organisations” already involved, who will get grants from Google to fund their work.
  • Videos that don’t technically violate YouTube’s content policies — meaning they don’t incite harassment or violence — “will appear behind a warning and will not be monetized, recommended or eligible for comments or user endorsements.”
  • And putting more energy into its Jigsaw initiative, which deploys “targeted online advertising to reach potential Isis recruits” and steer them toward anti-ISIS material.

Given how easy it was for white supremacists to organize on tech platforms, Charlottesville has put Silicon Valley under new scrutiny.

“We announced in June that we would be taking a tougher stance on videos that do not violate our policies,” a YouTube spokesperson said in a statement to VICE News. “We believe this approach strikes the right balance between supporting free expression and limiting affected videos’ ability to be widely promoted on YouTube.”

Users who upload flagged content will get an email explaining why it’s been designated as inappropriate or offensive. Viewers will see a small warning box next to the video explaining why features have been disabled.

Although other social media services like Facebook also struggle with managing graphic or hateful content, YouTube has been selling advertising from big-name brands against its video content for far longer. And realistically, because these advertisers don’t have many other places on the internet to go, YouTube and Google will likely be fine.