Charlottesville could make Silicon Valley treat white supremacists like ISIS

In August 2015, roughly 2,500 ISIS-affiliated Twitter users were each sending about 8.5 tweets a day. By January 2016, such accounts were sending an average of 6 tweets a day — and to a smaller network of people.

The difference: Twitter introduced an aggressive account and content takedown effort later that fall, which, according to one study, had the effect of “devastating the reach of specific users” who were repeatedly targeted by Twitter. Since then, a general consensus has emerged among extremism experts that Twitter’s pro-active removal of hundreds of thousands of ISIS accounts — along with similar efforts by YouTube and Facebook — worked.


Social media platforms are now facing a similar challenge with the rise of violent homegrown right-wing extremism. Can Silicon Valley do the same thing to white supremacism and the alt-right that it did to ISIS?


Blocking hate speech presents a bigger technical challenge because its sources and content are diffuse and often couched in irony. Then there’s the matter of free speech: Silicon Valley has essentially drawn the line at harassment or threats of violence. But how, and how forcefully, Silicon Valley acts is a question of politics, and of whether the government pushes tech companies to get serious about right-wing extremism.

Silicon Valley’s war on ISIS presents an interesting test case. At the outset, Silicon Valley’s approach to ISIS was hardly a pro-active affair. Instead, companies took a reactive approach, dealing with incidents like the James Foley beheading video on a case-by-case basis. But under collective pressure from the FBI, intelligence agencies, and members of Congress, Silicon Valley began doing more. The effort has been so successful that this past February ISIS reportedly warned Twitter, Facebook, and Google to stop pushing the terror group around (after previously mocking such efforts).

The political pressure from Washington was the “impetus” for tech platforms to move beyond their skittishness over stifling free speech and move toward mass deletions based on affiliation with hate groups, regardless of the content, according to Seamus Hughes, deputy director of George Washington University’s Program on Extremism.


To sufficiently address right-wing extremism, a similar pressure will be needed here. “You’re likely gonna see tech companies handle it the same way they did during the start of ISIS,” Hughes said. “It is clear tech companies want to be libertarian in the way they police their sites. The only reason they get to [a point where they change] is when they are pushed by a PR campaign, or pressured from regulators or Capitol Hill.”

But Charlottesville is hardly the first time hate monitors and users have attacked tech giants like Twitter, Facebook, and Google over their often timid, hands-off approach to racism, harassment, and hostile right-wing users.

“Twitter has done a horrific job of cleaning up its act. It’s been great on ISIS, but terrible on racism in the U.S.,” said Heidi Beirich, who runs the Intelligence Project at the Southern Poverty Law Center.

But the events in Charlottesville may change that. A growing group of tech companies, likely sensing the surge of public anger over neo-Nazis and white supremacist violence on display this weekend, have already begun backing away from such customers.

Web-hosting company and domain registrar GoDaddy booted neo-Nazi site the Daily Stormer on Sunday night — less than two months after it publicly declined to do just that because the site’s content was not “morally” offensive. Google took the same stance hours later, when the Daily Stormer attempted to re-register its domain on the tech giant’s platform Monday morning. The back-to-back rejections forced the neo-Nazi site to move to the dark web.


Both GoDaddy and Google said in separate statements to VICE News that the Daily Stormer had crossed a line by “inciting violence.” Their push against the Daily Stormer accompanied similar responses from GoFundMe, Uber, Airbnb, and the chat app Discord, which all banned white-supremacist and far-right users because of events in Charlottesville.

But after the initial goodwill gestures and bromides, tech companies will run up against harder questions of culpability and free speech.

Charlottesville demonstrated the potency of platforms as organizational tools for groups looking to demonstrate and incite violence. Leading up to the event, right-wing extremists promoted their Unite the Right rally and organized on services like YouTube and Facebook. The effort brought in activists from around the country, all converging on the college town to protest the removal of a statue of a Confederate general.

The majority of the content posted before the event — men yelling into their phone cameras about the right to free assembly — would not have violated most tech companies’ terms of service. Though the granular details vary from company to company, the standard tech platforms adhere to when pressed on issues of “free speech” is that most content is fine, with the exceptions of targeted harassment and incitement to violence.

“It’s stickier. It’s an easier thing to do when you have a designated terrorist organization like ISIS,” Hughes said. “But [Silicon Valley] has to be consistent; you can’t just focus on ISIS-inspired accounts because they’re the most overt.”


When reached, representatives from YouTube, Facebook, and Twitter declined to comment on right-wing extremism specifically, instead pointing to their respective content policies regarding hateful and extremist content. On Tuesday, Facebook confirmed it had deleted a number of white supremacist pages associated with the violence in Charlottesville.

Ultimately, the biggest obstacle to banning violent right-wing users wholesale will likely be political, not technical. Successful models are already available, with Germany being the most obvious. Since at least 2012, Twitter has blocked far-right tweets in accordance with Germany’s anti-Nazism laws. And in June, the German parliament passed a law broadly requiring tech platforms to take down illegal content (which includes Nazi material) or face a $57 million fine.

But President Trump is notoriously reluctant to condemn even his most extreme far-right supporters, and he has failed to offer a coherent condemnation of the violent neo-Nazis and white supremacists.

In his initial Saturday statement addressing Charlottesville, Trump condemned the violence on “many sides,” declining to call out racism and right-wing extremism. Resistant to saying anything further, and reportedly bristling at bipartisan opprobrium over his first remarks, Trump eventually held a hasty press conference on Monday in which he acknowledged that violent neo-Nazis and white supremacists were “criminals and thugs.”

A few hours later, the president was back on Twitter to vent his frustration.