
The EU is done warning tech companies to remove extremist content. Now it wants to act.

Critics say the bill is too narrow to be effective and could push terrorist networks onto smaller, less-policed platforms.

In March, European Union officials warned tech companies that they weren't doing enough to curb the spread of terrorist content and gave them a list of demands.

Apparently, their response didn’t cut it.

On Wednesday, the EU published details of how the bloc will seek to make these recommendations obligatory.

The new legislation would force tech companies to comply with at least one of the recommendations issued earlier this year: that they remove terror content within an hour of it being reported by local law enforcement agencies. If the bill is passed (it would need to be approved by the European Parliament and a majority of the member states), companies would face fines for noncompliance.


The legislation is the latest attempt by Europe to regulate the U.S.-based tech monoliths. Earlier this year, Germany introduced a hate speech law that requires companies to remove “evidently unlawful” material within 24 hours of it being posted — or face fines of up to $58 million per infraction. The EU forced Ireland to collect billions in back taxes from Apple, while Brussels has fined Google $5 billion for abusing its dominant market position in mobile.

The EU said the law would impose "effective, proportionate and dissuasive penalties" on providers that fail to comply with individual removal orders. In the event of "systematic failures to remove such content within 1 hour," the bloc would levy fines of up to 4 percent of a company's global turnover for the previous business year.

In the case of Google, Facebook, and Twitter, that could mean fines of up to $4.3 billion, $1.6 billion, and $96 million, respectively.
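As a back-of-the-envelope check on those numbers, here is a minimal sketch of the 4 percent cap, using rough 2017 global revenue figures; the exact revenues underlying the article's estimates are an assumption here, so the results only approximately match:

```python
# Proposed cap: fines of up to 4 percent of a company's global
# turnover for the previous business year.
FINE_RATE = 0.04

# Rough 2017 annual revenues in USD (assumptions for illustration);
# plugging them in approximately reproduces the article's estimates.
annual_revenue_usd = {
    "Google (Alphabet)": 110.9e9,
    "Facebook": 40.7e9,
    "Twitter": 2.4e9,
}

for company, revenue in annual_revenue_usd.items():
    max_fine = FINE_RATE * revenue
    print(f"{company}: maximum fine of about ${max_fine / 1e9:.2f} billion")
```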

But critics say the new bill is flawed and unlikely to move the dial on the viewing and sharing of extremist content. They say it's too narrow to be effective and could accelerate terrorist networks' migration onto smaller platforms such as the encrypted messaging app Telegram, the text-sharing site JustPaste, and the video-sharing service LiveLeak.

The European Commission’s bill comes after years of frustration among lawmakers who believe Silicon Valley has dragged its feet on tackling extremist content, according to EU officials and experts.


“You wouldn't get away with handing out fliers inciting terrorism on the streets of our cities — and it shouldn't be possible to do it on the internet, either,” Julian King, the EU’s commissioner for security and the driving force behind the proposed law, said in a statement Wednesday.

“While we have made progress on removing terrorist content online through voluntary efforts, it has not been enough. We need to prevent it from being uploaded and, where it does appear, ensure it is taken down as quickly as possible – before it can do serious damage.”


Spokespeople at Google, Facebook, and Twitter declined to comment on the legislation before it was made public, but they defended their record on tackling extremist content online.

Under pressure from governments, Silicon Valley has in the last year taken steps to crack down on extremist messaging. In April, YouTube said its investment in machine learning means that 98 percent of extremist material is removed automatically. That same month, Twitter said its spam-fighting tools helped it suspend almost 250,000 terrorist-linked accounts in the last six months of 2017. Facebook said that in the first quarter of 2018, it removed or added a warning to 1.9 million pieces of ISIS and al-Qaeda content, 99 percent of which was taken down before being reported by a user.

But a study published in July by the Counter Extremism Project, an NGO, showed that while there had been progress, content was still slipping through sizeable cracks. On YouTube between March and June, ISIS members and supporters uploaded 1,348 videos, drawing a total of 163,391 views. Twenty-four percent of the posts remained on the site for more than two hours, and 60 percent of accounts that posted videos identified as extremist were allowed to stay on the platform, according to Scientific American.


Hany Farid, a digital forensics expert at Dartmouth College and senior adviser to the Counter Extremism Project, said the tech giants are still not doing enough, and that the legislation doesn't go far enough to force them to solve that problem.

“The dragging of their feet for the last three, four, five years is particularly offensive given that we had already solved this problem in the child pornography space,” he said. “It wasn't that they couldn't do it — they honestly didn't want to do it.”

Tech companies have a shared database of terror content that they can use to prevent the same videos or photos from being uploaded repeatedly. By the end of 2018, the database will have more than 100,000 entries, according to the Global Internet Forum to Counter Terrorism, which maintains it; that is still a fraction of the number of photos and videos identified and removed by the tech giants.

Farid believes the new legislation should make it mandatory for companies to prevent this reuploading of content.

“If you don't want to play the whack-a-mole problem with this content and with these groups, once it has been identified, you should say this must be entered into your hashing database — which the tech companies claim to have.”
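As a rough illustration of what Farid is describing, here is a minimal sketch of hash-based reupload blocking, assuming a plain cryptographic hash; production systems such as the industry's shared database rely on perceptual hashing so that re-encoded or lightly edited copies still match. All function and variable names here are illustrative, not any company's actual API.

```python
import hashlib

# Illustrative stand-in for a shared database of fingerprints of
# content already identified and removed as terrorist material.
known_bad_hashes: set[str] = set()

def fingerprint(content: bytes) -> str:
    # SHA-256 only matches byte-identical files; perceptual hashes are
    # needed in practice to catch altered or re-encoded copies.
    return hashlib.sha256(content).hexdigest()

def register_removed(content: bytes) -> None:
    """Record removed content so identical copies can't be reuploaded."""
    known_bad_hashes.add(fingerprint(content))

def should_block(upload: bytes) -> bool:
    """Check a new upload against the shared database before publishing."""
    return fingerprint(upload) in known_bad_hashes

# Example: once a video is removed and registered, an identical
# reupload is caught immediately.
video = b"...raw video bytes..."
register_removed(video)
assert should_block(video)
```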

Going underground

Still, tech giants have undoubtedly gotten better at policing their platforms.

“The new law is probably not needed, because the big tech firms already are extremely good at identifying and eliminating content,” Adam Hadley, the director of Tech Against Terrorism, an initiative launched by the United Nations' Counter-Terrorism Committee Executive Directorate, told VICE News.


And this has had an unintended side effect: Terrorists have moved to smaller platforms that have fewer resources available to weed out this type of content.


Experts are concerned that the proposed law would further drive terrorists onto these networks precisely because there is less policing.

“From our point of view, the biggest threat today is coming from the smallest platforms that the average person on the street wouldn't have heard of,” said Hadley. “Right now we find that most of the terrorist use of the internet is on the smaller platforms, both in absolute and relative terms.”

The proposed legislation in theory also applies to small companies: the EU says it covers "all hosting service providers offering services in the EU, irrespective of their size or where they are based."

But in practical terms, given that many smaller platforms are based outside the EU and often have limited infrastructure for processing legal requests, Hadley said it's difficult to imagine how they could be forced to comply with the new law.

Terrorists are also increasingly turning to cloud storage services like Google Cloud and Amazon Web Services, while others manage their own servers. The new law addresses neither method.

Laura-May Coope, director and co-founder of the social media agency Social Life, said the threat of fines is ultimately unlikely to make companies do more than they are already doing to tackle the problem.

“The increased risk of hefty EU fines does nothing to change the fact that all the social networks are massively struggling with truly effective ways to identify and then remove extremist content from their platforms.”

Cover image: Icons are seen on a screen of smart phone in Ankara, Turkey on September 04, 2018. (Muhammed Selim Korkutata/Anadolu Agency/Getty Images)