
YouTube kills ads on 50,000 channels as advertisers flee over disturbing child content

For the second time in less than a year, major advertisers are fleeing YouTube after finding their ads were paired with offensive content — this time, directed at children. And the number of disturbing videos targeted at child audiences is much larger than previously known, YouTube’s response reveals.

YouTube says it has “terminated more than 270 accounts and removed over 150,000 videos from our platform in the last week.” The company also “turned off comments on over 625,000 videos targeted by child predators.”

“Finally, over the past week we removed ads from nearly 2 million videos and over 50,000 channels masquerading as family-friendly content,” the Google-owned company said in a statement to VICE News. “Content that endangers children is abhorrent and unacceptable to us.”

Adidas, Mars, Hewlett-Packard, and a host of other big brands have all paused advertising on YouTube in the wake of reports revealing their ads were showing up alongside sexually explicit comments under videos of children. The tools used to screen such comments, volunteer moderators told the BBC, haven’t been working properly for over a year, allowing between 50,000 and 100,000 “predatory” accounts to remain on YouTube.

The reports about the explicit comments come amid a slew of related stories about kids and offensive YouTube content, including revelations that highly trafficked YouTube channels have been tricking kids into watching disturbing videos, and that YouTube’s search results have been auto-populating with pedophiliac queries, such as “how to have s*x with your kids.”

If all this sounds familiar, that’s because it was only this past March when blue-chip advertisers suspended their ad buys on YouTube after journalists found their ads were running next to hate speech and other offensive material. Within a few months, and after some major promises from YouTube, a number of big brands returned to the platform.

In August, YouTube unveiled a revamped policy to better suppress or, in some cases, censor hate speech content. On November 22, after the first stories about disturbing content aimed at children started dropping, the web video network announced a new set of policies to deal with “content on YouTube that attempts to pass as family-friendly, but is clearly not.”

So much of this content gets through YouTube’s filters in the first place because of a problem that extends beyond YouTube and affects other major digital platforms like Amazon and Facebook. The power of these companies lies in allowing their billions of users to sell products (Amazon Marketplace) or post live broadcasts (Facebook Live), which can include everything from phone cases printed with old men in adult diapers to live-streamed suicides and murders. And while tech platforms insist they take these issues seriously, their technological solutions and screening have not yet delivered on their promises.

In the case of YouTube, one problem appears to be simply how many kids are watching. For many parents, YouTube serves as a de facto babysitter and portable TV set. And some of the platform’s most popular content is part of a broader trend that has taken flight over the last year, as more users churn out algorithmically generated videos to attract clicks and ad dollars, often plugging in popular keywords like “Elsa” (from the Disney movie “Frozen”) or “Spider Man” despite the videos’ decidedly child-unfriendly content.

Toy Freaks, for example, was until this month one of the 100 most-viewed YouTube channels, with more than 8.5 million subscribers; some of its videos featured disturbing content, including children in pain and vomiting. YouTube terminated the Toy Freaks channel in mid-November, after a Medium essay highlighting its content went viral.