Private companies are policing online hate without independent oversight or regulation, which poses serious risks to basic human rights and freedoms.
The recent torch-lit march by armed white supremacists in Charlottesville, Va., continues to generate debate about how hate groups should be regulated. Amid growing public pressure following the march, internet companies rushed to remove websites espousing violent hate speech from their platforms.
GoDaddy terminated its domain services to neo-Nazi website The Daily Stormer, as did Google. Cloudflare, a company that protects websites from online attacks, also banned the hate website from its platform. Russia ordered the site barred from being hosted in the country.
My research and my book Chokepoints: Global Private Regulation on the Internet demonstrate that many internet companies already remove content and ban users “voluntarily” — that is, in the absence of legislation or any judicial processes. Major intermediaries including Google, PayPal, GoDaddy, Twitter and Facebook voluntarily police their platforms for child sexual abuse content, extremism and the illicit trade in counterfeit goods.
Many people understandably applaud these efforts to stamp out hate speech and other objectionable content. However, internet companies’ work as de facto regulators of speech raises serious questions: How should online content be regulated? And by whom?
I do not support white supremacists, and I am not arguing against some policing of such speech. Rather, I am saying that we need to think seriously about how to regulate online content, because the next case may not be as clear-cut.
There are significant problems with relying upon powerful companies to police the internet because their enforcement practices are troublingly opaque and prone to arbitrary interpretation.
Disturbing precedent
In sobering contrast to the applause internet companies received for their public opposition to The Daily Stormer, Cloudflare CEO Matthew Prince offered a nuanced, cautionary perspective, warning that withdrawing services from hate groups in response to public pressure sets a troubling precedent for policing online speech.
In a blog post explaining Cloudflare’s actions against The Daily Stormer, Prince argued that the company considers due process a “more important principle” than freedom of speech. Due process, he said, means that “you should be able to know the rules a system will follow if you participate in that system.” This statement aptly captures the inherent problems with intermediaries working as de facto regulators of content and online behaviour.
Earlier this year, Shopify employees and hundreds of thousands of petitioners urged the online commerce platform to stop hosting far-right Breitbart Media’s internet store. Stephen Bannon, reinstated as Breitbart’s executive chairman, calls the outlet “the platform for the alt-right.” The so-called “alt-right” – a term popularized by Richard Bertrand Spencer – covers a mix of white supremacist, separatist, neo-Nazi, fascist, racist, anti-Semitic, Islamophobic and populist conservative ideologies.
Shopify CEO Tobias Lütke said he was defending free speech as the Ottawa company continued to host Breitbart’s online store despite the threat of employee resignations. After public pressure and a grassroots campaign dubbed #DeleteShopify led to scrutiny of other questionable businesses on the platform, Shopify was forced to adopt an acceptable use policy.
The contrasting examples of The Daily Stormer’s deletion by internet companies and Shopify’s steadfast support for Breitbart demonstrate the extremes of a dilemma that promises only to intensify.
Arbitrary policies, regulation
Internet intermediaries have the potential to be powerful regulators on a wide variety of issues because they can act swiftly and without court orders. Importantly, they have latitude to censor any content or ban users under their terms-of-service agreements.
PayPal reserves the right to terminate its services to users “for any reason and at any time,” language that is echoed in most intermediaries’ service agreements. The capacity for arbitrary regulation is thus baked into intermediaries’ internal rules.
Prince cautioned that Cloudflare’s action against The Daily Stormer sets a precedent for intermediaries to police speech without court orders requiring them to do so.
These intermediaries often act at the behest of governments that prefer companies to be the public (but largely unaccountable) face of internet regulation. But those firms are generally ill-equipped to distinguish legal from illegal content, leading to wrongful takedowns that mistakenly target lawful behaviour.
Equally problematic: Intermediaries’ enforcement processes are often opaque as their content moderators arbitrarily interpret their complex, fast-changing internal rules. These problems are compounded by intermediaries’ growing use of automated tools to identify and remove problematic content on their platforms.
There is also the concern of so-called mission creep, in which rules first enacted against child abuse or terrorism – noteworthy catalysts for enforcement action – are later applied to distinctly less harmful issues, such as the unauthorized downloading of copyrighted content.
Dystopian future is here
Regulatory efforts commonly expand from censoring violent hate speech to other speech that may be considered controversial by some, such as that of Black Lives Matter. As well, governments worldwide regularly pressure intermediaries to censor and track critics and political opponents.
When major intermediaries become go-to regulators responsible for policing content on behalf of governments or in response to high-profile protests, their already considerable power increases. U.S.-based internet companies already dominate many industry sectors, including search, advertising, domain registration, payment and social media. Cloudflare’s Prince rightly warned that by depending on a “few giant networks,” a “small number of companies will largely determine what can and cannot be online.”
This dystopian future is already here.
The takedown of The Daily Stormer undoubtedly makes the world a better place. But do we really want companies like Facebook and Twitter to decide – independently, arbitrarily and secretly – what content we can access and share?
Given these seemingly intractable problems, what can we do? First, we should avoid governing on the basis of protests or media pressure. Instead, we need a clear set of rules to enable intermediaries to respond consistently, transparently and with respect for due process, as Prince recommended.
Governments should clarify the nature of and, importantly, the limitations of intermediaries’ regulatory responsibilities. Finally, we must stop governing in response to specific crises – so-called “fake news,” terrorism and hate groups – and instead think critically about how we can and should govern the internet.
About The Author
Natasha Tusikov, Assistant Professor, Criminology, Department of Social Science, York University, Canada
This article is republished from The Conversation under a Creative Commons license. Read the original article.