Four reasons the Cleveland killing won’t change Facebook

When Steve Stephens allegedly killed Robert Godwin in cold blood on Easter Sunday and put videos of the shooting on Facebook, there wasn’t much Stephens’ friends or followers could do to prevent it from appearing in their news feeds.

This basic fact — and the two hours from the time the videos were first posted until they were taken down — seems to have rattled Facebook, which until this week had taken a more detached approach to atrocities posted on the platform. This time, just 24 hours after the Easter shooting, Facebook released a detailed timeline of the event and a promise to review the reporting systems that led to the delay.

CEO Mark Zuckerberg addressed the controversy Tuesday at Facebook’s developer conference, saying that the company “will keep doing all we can” to prevent similar situations in the future.

And yet, aside from the short-term PR blow, Facebook has little incentive to change — financial or otherwise. Indeed, Facebook has a strong incentive not to change, and very good reasons to keep Facebook Live available for anyone who wants to use it for anything. That leaves Facebook in a bind: software can’t yet catch videos like Stephens’ without also sweeping up inoffensive content, and no realistic amount of human staffing could monitor the feeds of more than a billion daily active users who can go live at any time.

So while Facebook may well want to fix this problem and may throw some engineering and PR resources at it, as it has done with fake news, it’s still highly likely we’ll see more tragedies in our news feeds. Here’s why:

The software isn’t ready.

A well-kept half-secret about Facebook is that it already has many, many people around the world monitoring the social network for everything from porn to ISIS beheading videos.

For years, services like Facebook and YouTube have used a mix of software and people to moderate content on their platforms. Some of these people are contract or salaried employees in the U.S., and others are part of the unseen and unheard thousands who do this work in far-off places like the Philippines.

Services like Content ID (a Google product) track copyrighted content that gets uploaded to YouTube. Others are designed to track terrorist imagery across Twitter, Microsoft, and other Silicon Valley platforms. And Facebook says it’s working on artificial intelligence that could make this kind of automated content moderation more effective.
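Most of these tracking systems rest on the same general idea: fingerprint content that has already been judged infringing or violent, then check new uploads against a shared database of those fingerprints. Here is a minimal sketch of that idea in Python; the function names and the use of a plain cryptographic hash are simplifying assumptions, since real systems like Content ID rely on perceptual fingerprints that survive re-encoding and editing.

```python
# Simplified sketch of shared-hash matching, the general technique behind
# systems like Content ID and the industry database of terrorist imagery.
# The names and hashing scheme here are illustrative assumptions, not any
# platform's actual implementation.

import hashlib

# Shared database of fingerprints of content already judged to violate policy.
known_bad_hashes = set()

def fingerprint(video_bytes: bytes) -> str:
    """Stand-in for a perceptual fingerprint.

    A plain SHA-256 only matches byte-identical files; production systems
    hash audio/visual features so that altered copies still match.
    """
    return hashlib.sha256(video_bytes).hexdigest()

def register_bad_content(video_bytes: bytes) -> None:
    """Add a confirmed-bad video's fingerprint to the shared database."""
    known_bad_hashes.add(fingerprint(video_bytes))

def check_upload(video_bytes: bytes) -> bool:
    """Return True if the upload matches known bad content and should be blocked."""
    return fingerprint(video_bytes) in known_bad_hashes

# Example: a re-upload of flagged content is caught at upload time.
register_bad_content(b"...original flagged video bytes...")
print(check_upload(b"...original flagged video bytes..."))  # True
print(check_upload(b"...a different video..."))             # False
```

The catch, and part of why live video is so hard to police, is that matching only works on re-uploads of content someone has already identified; a first-time broadcast like Stephens’ matches nothing in the database.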

But software can’t handle this task on its own, at least not for a while, said venture capitalist Hunter Walk, whose nine-year stint at Google included leading the YouTube product team.

“In the near term, content analysis by software can produce a candidate set of potentially troublesome videos. It can also assign a confidence score to the judgment and identify the sections of the video most at issue,” Walk said in an email to VICE News. “But unless you’re willing to tolerate a large number of ‘false positives,’ it requires human review to make a final call on policy issues of this sort.”
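To put Walk’s tradeoff in concrete terms, here is a minimal sketch of that kind of triage logic in Python. The thresholds, names, and scores are illustrative assumptions rather than a description of Facebook’s actual pipeline: software assigns each video a confidence score, only near-certain cases are acted on automatically, and everything in the gray zone is queued for a human to make the final call.

```python
# Hypothetical sketch of the triage logic Walk describes. The model output,
# thresholds, and names are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class VideoAnalysis:
    video_id: str
    score: float            # model confidence the video violates policy, 0.0-1.0
    flagged_segments: list  # (start_sec, end_sec) sections the model found most at issue

AUTO_REMOVE = 0.98   # act without a human only when the model is nearly certain
HUMAN_REVIEW = 0.60  # anything in the gray zone goes to a moderator queue

def triage(analysis: VideoAnalysis) -> str:
    """Route a video based on the model's confidence score."""
    if analysis.score >= AUTO_REMOVE:
        return "remove"        # rare: model is confident enough to act alone
    if analysis.score >= HUMAN_REVIEW:
        return "human_review"  # candidate set: moderator sees flagged_segments first
    return "allow"             # below threshold: leave the video up

# Example: a borderline video lands in the review queue rather than being removed.
print(triage(VideoAnalysis("v123", score=0.72, flagged_segments=[(40, 55)])))
```

Lowering the auto-remove threshold catches more bad videos without human help, but at the cost of exactly the false positives on ordinary content that Walk warns about.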

This idea directly collides with the thesis of Silicon Valley’s ascendance over the last two decades, which is that software scales, but humans don’t. Or as New York University Stern School of Business marketing professor Scott Galloway puts it, such “human review” introduces “friction” into companies like Facebook and Google.

“[These companies] are coming up against this tension, which is that their business models are predicated on the model of zero friction — as few humans involved as possible,” Galloway told VICE News. “These problems have solutions, and the solutions inject friction and are contrary to the DNA of these organizations, who are completely remiss to implement any process that might get in the way of their 30 or 40 percent business growth.”

Unlike TV, Facebook is largely unregulated.

When a pop star’s nipple slips at Super Bowl halftime, offended broadcast viewers can complain to the Federal Communications Commission, which in turn could levy a fine. When dating sites fail to protect customer information, the Federal Trade Commission can issue a fine. And when ISIS or its supporters in the U.S. put terrorist content on social media, the FBI can open an investigation.

But under Section 230 of the Communications Decency Act, Facebook, as a platform, can’t be held legally accountable for stuff posted to Facebook by its users, with some exceptions. This is what makes it different from CBS getting fined for the Super Bowl nipple-slip, and helps explain why the FBI tends to go after people who post revenge porn on Facebook, instead of the company itself.

Sophia Cope, staff attorney for the Electronic Frontier Foundation, said over email that although Facebook is “immunized” legally under Section 230, that doesn’t release the company from moral responsibility.

“We don’t disagree that online companies like Facebook have discretion to set terms of service that ban violent content and enforce those terms. We have consistently called on companies, though, to define content that will be removed as clearly and specifically as possible,” Cope said.

Of course, Facebook is hardly the only social network where shockingly violent viral content shows up. There are plenty of cases on more private services like WhatsApp and Snapchat, and on video-hosting platforms like YouTube. But unlike those services, Facebook’s job is to surface content to the widest possible audience, whether people want to see it or not.

Live video is growing, and young users love it.

A key part of the story digital platforms tell advertisers about why they should move ad dollars from TV to the web is that online audiences are younger.

A recent UBS survey showed that 63 percent of millennials had watched a livestreamed video, and 24 percent had made one between June and November last year. This same survey also found that while fewer people were watching livestreamed videos across social platforms, Facebook was bucking the trend. In the same period, the number of U.S. internet users who had seen a Facebook Live video rose from 14 percent to 17 percent.

(Facebook Stories, a feature cloned from Snapchat, has meanwhile apparently struggled to gain traction in the first few weeks since launch.)

Those numbers match up with what Facebook said last fall about Live’s performance: the number of people watching Live videos had increased fourfold between May and October. And if Facebook Live is outperforming its competitors among a highly coveted younger demographic, Facebook has little incentive to monkey with it in any way that might inhibit the creation of content.

The ad industry has no reason to inflict pain.

Over the last month, blue-chip advertisers such as JPMorgan Chase and Johnson & Johnson have pulled their ads from YouTube, angry that the video network had served them on videos by people like white supremacist David Duke.

But this scenario seems unlikely to happen to Facebook. The research firm eMarketer expects Facebook’s share of the display ad market to grow from 35 percent to 44 percent by 2019, while Google’s share drops from 14 percent to 12 percent. That’s because, as eMarketer points out, Facebook’s “time spent on the social network is growing, but only on mobile,” and mobile matters far more to advertisers than desktop or tablet.

Though Facebook may hold more power than any platform advertisers have dealt with before, its current dilemma isn’t a new one. Rob Norman, chief digital officer of the media-buying conglomerate GroupM, told VICE News that the real question is whether advertisers will push Facebook to create more ad-friendly content to offset the not-so-ad-friendly content.

“Advertisers have always had a choice about whether they advertise in every element of a platform or not,” Norman said. “On the one hand, how much of the [advertising] supply that is available in the Facebook and Google universe falls within the boundaries of acceptability? And on the other hand, will that in turn create pressure [on Google and Facebook] — whatever that means — to increase the volume of acceptable supply?”

One potential strategy for Facebook: investing in its own original content to increase that supply. In December, the company revealed it had been working on developing its own TV shows, and in February it hired an MTV exec to spearhead those efforts.

But even if brands can negotiate better ad deals because of the unsavory stuff on these platforms, that doesn’t necessarily affect the balance of power in a meaningful way. Scott Galloway thinks that the Cleveland shooting “is going to amount to little more than a speed bump for Facebook.”

“Brands will have influence, and the platforms will attempt to change. But it’s almost becoming as if you find out your local coal-fired plant is spewing carbon dioxide into the air, so you stop using the plant’s electricity,” Galloway said. “As angry as you are about the carbon, you’re still using the electricity.”