The responsibility of communications platforms like Google, Twitter, and Facebook for their users’ online activities is a key factor affecting innovation and free speech in the 21st century. Still, the degree of liability imposed on intermediaries – and the extent of online speech regulation – varies greatly from jurisdiction to jurisdiction, with no unified approach. Heightening the monitoring responsibilities of intermediaries is, in some ways, positive, because intermediaries can work to support a healthy speech environment online. However, strengthening their gatekeeping function could also lead to greater censorship and restrictions on free speech. A careful balance is needed to prevent the abridgment of fundamental rights online.
Consider the disparity between Europe and the United States on this topic. Attitudes toward the regulation of intermediaries in Europe differ greatly from those in the United States, due in part to different understandings of fundamental rights like freedom of expression and privacy. Whereas the United States takes what some scholars call a “free speech maximalist” approach, the European Union – and much of the international community – places greater emphasis on balancing competing rights, weighing free expression and information rights against other human rights. A number of the earliest and largest online platforms came from the United States, and the early development of the internet therefore reflected a distinctly American value system – one centered on free speech and permissionless innovation. Yet the growing influence of EU internet policy worldwide, evidenced by the increasingly widespread application of the General Data Protection Regulation (GDPR), suggests that debates about intermediary liability and the regulation of users’ online activities must incorporate elements of both systems.
There is much to be learned by considering the different attitudes and legal mechanisms at work in each place. In the United States, regulation of platforms is largely concerned with First Amendment and property rights. The First Amendment of the US Constitution prevents the government from making any law that abridges the freedom of speech, the freedom of the press, or the right to peaceably assemble. For example, the U.S. telecom company Verizon has argued that net neutrality protections that interfere with its ability to exercise control over users’ internet traffic limit its First Amendment rights – and the court sided with Verizon. Though this right is not absolute, it remains a central value within American law and society. Historically, some regulations on speech have been connected to property ownership. As one report on the issue puts it:
Generally speaking, property owners have the right to not have First Amendment speech regulations interfere with their private property rights. If lawmakers seek to apply this to the intermediary context, the key problem then is defining what counts as property or ownership over content or speech online.
In the case of intermediary liability in the United States, questions of ownership, property, and speech rights in semi-open private spaces like internet platforms have yet to be fully answered.
In contrast, the E.U. approach to intermediaries weighs human rights and consumer law more heavily than freedom of speech and property rights. Discussion of platform regulation is particularly active among civil society, academics, and lawyers in the E.U., as this area is likely to be regulated in the 2019-2024 legislative period. Whereas the U.S. takes a more absolutist approach to free online speech, the European approach generally weighs freedom of speech against other ends, such as limiting the spread of hate speech or extremist content online. Take-down decisions are not, however, always clear-cut. In particular, the same report notes: “through the E-Commerce Directive, the European Union is indirectly incentivizing intermediaries to interfere with private speech.” Under the E-Commerce Directive, as well as the 2019 Copyright Directive, intermediaries face possible financial penalties for carrying certain user content. Out of caution, some intermediaries therefore “overblock,” removing flagged content even when it does not violate their Terms of Service and is otherwise legal.
One factor uniting lawmakers in their appetite to regulate big tech giants is the shortcomings of existing content moderation regimes. Currently, most content moderation is performed in low-wage countries under harsh working conditions. Only a few seconds are devoted to a decision about any given piece of content, and the decision is made without knowledge of the content’s cultural or legal context. While the debate often focuses on which types of content should or should not be permissible on a platform, it might be a more tangible aim to focus on increasing the quality of content moderation decisions. A significant difference between the regulatory approaches also lies in the means by which they are to be enforced. While many groups in the US would favor additions to platforms’ Terms of Service, many voices in the EU would see this as putting even more power over the permissibility of content into the hands of already powerful tech giants. Yet the EU has chosen a hybrid approach in the recent Audiovisual Media Services Directive, which legally obliges online video platforms to prohibit certain types of content in their Terms of Service. What may be missing in both approaches are mechanisms for due process and recourse for individuals whose content is unduly removed. Without them, attempts to form a healthy online environment through regulation can result in the silencing and censorship of users.
American and European policymakers and legal scholars have much to learn from each other with respect to speech protection and intermediary liability. Despite differing attitudes, both face similar problems, including prior restraint of speech by government and private entities and a lack of transparency. The E.U. attempt to balance rights is admirable; yet transparency and due process are essential if the fundamental rights of users online are to be protected. The goal of online speech regulation should be for intermediaries to protect spaces for free expression online while still guarding against destructive speech. Platform regulation is a global issue, affecting users around the world – neither government nor industry alone should have the final say about our fundamental rights online.