Policing the Internet: Europe’s Bid to Regulate Hate Online

 
 
 

When Brenton Tarrant walked into a mosque in Christchurch, New Zealand earlier this year, he had already signalled his intentions on various platforms across the Internet, alerting notorious radical right channels to where they could spectate. Two hundred people watched the minutes of horror unfolding at the mosque, live streamed in real time on Facebook. Not one of those two hundred people reported it. By the time Europe woke up the next morning, both the video and Tarrant’s manifesto had gone viral. Since then, the attacks in El Paso, Dayton, and Oslo were all prefaced by activity online, two of them citing Tarrant as an inspiration. 

In September 2001, terrorists chose to hit the North Tower of the World Trade Center first and the South Tower a few minutes later. The attack was intentionally coordinated so that by the time the second tower was struck, all the world’s news cameras would be trained directly on it. The entire world was able to watch it live. Nearly two decades later, social media has given hate a similar theatre.

In the wake of pressing issues such as data privacy and disinformation, the past few years have seen a concerted political push towards bringing social media companies to heel. But the centralized nature of today’s Internet has also allowed for the exponential amplification of hateful and violent extremist sentiment, opening up another front for policymakers. Governments, fearful of the role that social media plays in fuelling the partisan and xenophobic chasms evident in our society, and distrustful of tech companies’ intentions, have begun to mobilize. Whilst many have concluded that there is indeed an urgent need to regulate, the fact that governments have not done so from a rule of law standpoint has serious implications for democratic and fundamental human rights.

If the Internet is the modern public sphere, then social media is our principal agora, our most crucial means of accessing information and exchanging views. This means that the main companies are the marshals of our public space; their responsibility for the wellbeing and functioning of our societies cannot be overstated. In some countries, such as Myanmar in the case of Facebook, these big platforms are the Internet. In such places, social media is citizens’ only means of accessing information, often in opposition to state-controlled media. In this regard, governments are justifiably alarmed that responsibility for our public sphere lies in the hands of private actors, who may put competition and profit before public interest. Indeed, one only has to look at Facebook's role in the genocide of the Rohingya by the Burmese military to see what happens when these companies do not carefully honour that responsibility.

“The era of social media firms regulating themselves is over,” said Theresa May at the launch of the UK’s Online Harms White Paper. It was a clear threat, one delivered by various European leaders over the past year, citing dangers to our national security. If you have ever watched an American political hearing with the big social media companies on the stand, you will invariably notice patterns. Questions from the presiding committee usually betray either a lack of basic understanding of the technologies at play or a preoccupation with partisanship, such as why right-wing voices are being silenced by these platforms. Consequently, productive debate is never really broached. But while America remains immobilized, constitutionally handcuffed to its First Amendment, Europe has been leading the charge to the front.

In 2017, Germany introduced its new Netzwerkdurchsetzungsgesetz, or NetzDG, a law to combat hate speech and misinformation online. It requires platforms with more than 2 million registered users to remove “manifestly unlawful” content (under German law) within 24 hours or face fines of up to 50 million euros. The law was immediately met with widespread criticism, in particular from human rights groups who called for it to be repealed at once. Wenzel Michalski, Director of Human Rights Watch Germany, stated that the law was “vague, overbroad, and turns private companies into overzealous censors to avoid steep fines, leaving users with no judicial oversight or right to appeal.” The implications don't stop there, though. 

With NetzDG, Germany had set a strong precedent, and it was only a matter of time before the dominoes began to topple. In September 2018, nine months after the NetzDG law came into full effect, the European Commission drafted a regulation to prevent the dissemination of terrorist content online. The proposal stated that online platforms would have a one-hour deadline to remove “terrorist content” and required companies to maintain a 24/7 point of contact. Systematic failures could lead to a fine of up to 50 million euros. Fast forward half a year and, across the Channel, the UK government published its Online Harms White Paper. Alongside establishing an independent regulator and a statutory duty of care to make companies take more responsibility, the proposal threatened fines and criminal liability “in the context of harmful content or activity that may not cross the criminal threshold but can be particularly damaging to children or other vulnerable users.” What exactly constitutes “harm” is not clearly defined, but it includes activities and materials that are legal. 

Though they are well intentioned and include some promising tenets, these regulations are problematic: unimaginative at best and dangerous at worst. In what is essentially an act of outsourcing, they leave it to the platforms to adjudicate what is “legal” under German law or, in the case of the Online Harms White Paper, what is “harmful”, without even defining a legal framework to refer to. This merely increases private power over public speech without sufficiently increasing accountability around adjudication and removal decisions. As David Kaye, UN Special Rapporteur on Freedom of Opinion and Expression, said to a room of forty of us in London last month: “These governments are not thinking creatively or energetically about where and how to insert public institutions, such as domestic courts and judiciaries, into this space.” The bid to control the public sphere is happening in an environment of changing governments and rising populism. Generally, we do not want excessive government control of offline and online space; we can see what is happening in Budapest and in Warsaw. “But even in rule of law countries, like Germany,” continues Kaye, “why didn’t these countries think about these laws in a more considered way, with a rule of law framework?”


In the face of sanctions, social media companies will likely become over-zealous in their content moderation. The consequences are predictable, if not inevitable: legal content will be removed, legitimate political opinion will be silenced, advocacy will be scrubbed. We cannot expect tech companies to be consistently mindful of human rights when confronted with liability and 50 million euro fines. Terms such as “harmful content” or “terrorist content” are also irreducibly vague. How can YouTube accurately differentiate, at scale, between a video uploaded by a Sudanese man documenting his friends and compatriots being shot by the RSF and the military government, and an NRA video that glorifies gun violence? Add to this differing political and cultural contexts, and navigating a universal problem for a global consumer base becomes even more complex. As Professor Nadine Strossen said at the U.S. Homeland Security hearing on social media companies this summer, “Reasonable people will disagree [when moderating content], no matter how good their intentions are. There will be at best arbitrary enforcement, and at worst discriminatory enforcement.” While many of us can agree that certain speech should indeed be suppressed, the devil is in the detail.

In Western Europe, we often take the cultural traditions and institutions that preserve our rights for granted, particularly freedom of speech. For Hannah Arendt, freedom was the central raison d’être of politics, exercised through engaging in a political community and experienced through action and speech in a public space. Our ability to say what we like, when we like, is essential to exercising our political rights, and vice versa. Yet one thousand miles east of Brussels, many Europeans are not so lucky. Throughout the winter of 2013-14, social media became a central tool of the Maidan protests in Ukraine. Amidst the chaos, Facebook and Twitter became paramount in coordinating movements, flagging areas or individuals that needed help, and informing families and friends of the status of their loved ones. But social media also allowed the Ukrainian people to counter President Yanukovych’s and Putin’s propaganda in the state-run official media. In this way, it was integral to the protest movement, also referred to as the “Revolution of Dignity”. 

Uploading and circulating first-hand videos and texts was a means of ensuring that, whatever the outcome of the Revolution, the Havelian truth would prevail: the regime could be exposed and the revolution could be humanized, if just for a moment. The resonance of this idea is perhaps best demonstrated in one of the first anecdotes in Marci Shore’s book, The Ukrainian Night, which describes a young paramedic, Olesia Zhukovska, bleeding out on the ground of the Maidan square after being shot in the neck by a sniper. “I am dying,” she tweets.

As authoritarian governments bend domestic media outlets to their purpose, social media has become increasingly important as a vessel for marginalized and discriminated voices. If anti-democratic state actors take hold of the Internet, many of these voices are silenced and citizens are denied their fora for discourse. As a Freedom House report from last year noted, democracies are famously slow at responding to crises—their systems of checks and balances, open deliberation, and public participation are not conducive to rapid decision-making. But this built-in caution has also helped some semi-democratic countries fend off authoritarian-style Internet controls. 

Recently, however, the perceived spread of bad actors online has been used by the governments of “partly free” countries, such as Kenya and Malaysia, to justify introducing laws that seek to curb the freedom of the Internet, and with it, dissenting voices. Europe’s rush to regulate in this way risks legitimating digital authoritarianism too. As prominent Internet freedom advocate Rebecca MacKinnon put it in her book, Consent of the Networked, Internet freedom is not only threatened by authoritarian governments but also by “Western companies and democratically elected leaders who do not understand the global impact of their action––or, more ominously, do not care.”

How can Europe, then, set a responsible example for the rest of the world, as it did with GDPR? In his recent book, Speech Police, David Kaye expounds on the need to position individual and democratic rights at the centre of the conversation again. Article 19 of the Universal Declaration of Human Rights (UDHR) asserts the freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media regardless of frontiers. 

Centring the policing of the online sphere around human rights should not exonerate haters and extremists, as is often believed to be the case. After all, in Europe, hate speech and incitement to violence are not protected by the right to free speech. But it will equip companies, as Kaye reminds us, with “the capacity to make principled arguments to protect their users in the face of authoritarian and democratic governments alike.” And for governments it means avoiding reliance on broad assertions of public order and security: expression should only be restricted where doing so is demonstrably necessary, proportionate, and serves a legitimate interest. 

Social media companies need to be more transparent. The opacity of their processes and technologies has only served to breed public distrust towards them and to create the illusion among policymakers that they are doing very little, which isn’t the case. Instead of our current preoccupation with content moderation on individual platforms, which typically just pushes bad actors elsewhere (like a game of “Whack-a-mole,” as strategist and author Ali Fisher called it), regulation could focus on bringing democratic oversight back into the picture. Regulatory frameworks could allow Europe’s notably strong public institutions, such as the European and national courts, to play a role in adjudicating what kind of speech should be allowed in our online space, and what should be censored. European policymakers could also call for a higher level of transparency and accountability from companies, not only through clear content standards and data-dumps in annual reports, but through more granular and “meaningful” transparency reporting: companies should outline exactly what they are removing and why they are removing it. And when companies do get it wrong, either by removing legal content or by failing to remove illegal content, there should be sufficient mechanisms of appeal and redress that give users clear and expedient channels of communication with them. This is particularly important for marginalized and vulnerable communities, and in conflict zones.

In a sense, this looks a lot like companies having to work within their own rule of law framework. In the words of former judge Tom Bingham, arbitrariness is the antithesis of the rule of law. Consistency, clarity, and equality before the law are the aim of the game. The rule of law is hard to follow, however, if there is a lack of clarity around where the law rules. 

Take the far right. As the right continues to fragment and splinter, there remains a large section of right-wing extremism that adjoins the mainstream. Many of the ideas coming out of these subsections also enjoy strong political support, which makes it, as Matthew Feldman, Director of the Centre for Analysis of the Radical Right, put it in a podcast due to be released soon, much harder to identify where the fringe elements of the mainstream stop and where the radical right starts. This lack of clarity is exacerbated by a lack of political will to find consensus around these issues, especially in the West where white supremacy and identitarianism are concerned. Though there is a clear increase in bigotry online, nailing down what exactly constitutes hate speech, incitement to violence, or terrorism, and where these can be distinguished from offensive political speech, is a mountainous task. For a start, designating right-wing terrorist actors, something the international community has been slow to do (the Government of Canada broke the mould this summer when it designated the white supremacist group Blood and Honour), would provide platforms with a clearer legal framework through which to prohibit the support and praise of violent extremist views and actions. As it is, where exactly does the law rule? Our governments’ and our own reticence in answering these questions is leading us into a situation whereby we force private tech companies to make these decisions for us and, ultimately, to decide where our society’s cordon sanitaire lies.


As the spike in radical right activity this year has shown, it is imperative that militant bigotry and extremism in the online sphere be curbed. Yet as multibillion-dollar private corporations stalk our public space, deploying engagement algorithms that pre-select the content we see and, by extension, define our online communities, the stakes for our open and democratic systems are only getting higher. Free expression and free access to information remain essential to maintaining open and democratic political systems. The people have to be empowered to choose between conflicting opinions and courses of action. Free expression is also a necessary precondition to the enjoyment of other rights: to vote, to assemble, to associate. So we can only hope that governments respond to these threats within a rule of law framework: that they consider those most vulnerable amongst us, protect our human rights, and shield our democracies. Only from this place can we go about clearly condemning those in our midst who rally hate and preach harm.


 

To read more on this issue, you can order Are We Europe #5: Code of Conscience

