With the 2016 American presidential election, it became clear that social media platforms allowed fake news to spread at unprecedented speed. Facebook, in particular, proved to be the main source of exposure to misleading information about the election. Four years later, however, tech companies have taken a new stance on the proliferation of fake news. From redirecting users to official guidelines on COVID-19 to flagging and blocking unverified stories, Facebook, Instagram and Twitter have made efforts to tackle this problem.
Case in point: on October 14, an unverified New York Post story, incriminating Hunter Biden through allegedly leaked emails, was moderated by Facebook and Twitter in an attempt to limit its traffic. While restricting the article was only one of many efforts to counter fake news, both companies were accused by conservative voices of “liberal censorship”.
However, as Kyle Langvardt, law professor at the University of Detroit, explains in his academic work, this tension lies at the core of the moderators’ dilemma: “whether it is condemned as ‘censorship’ or accepted as ‘content moderation,’ [this system] sits in tension with an American free speech tradition that was founded on hostility toward ex ante administrative licensing schemes”.
The debate over whether online content moderation constitutes censorship of free speech is not limited to the ‘editorial powers’ of tech companies. It equally fits within the broader discussion about digital governance and the need for online rules to address the social threats that emanate from the online world. As Edoardo Celeste, a researcher on digital constitutionalism, explains in his publications, the new legal challenges created by digital technology have led to the “emergence of normative counteractions”. Through the lens of digital constitutionalism, the author further explains that the need for new norms in the digital environment stems from the necessity to re-establish a “constitutional equilibrium”. While the role of guaranteeing such equilibrium used to belong to state actors in the physical world, in the digital one, “private actors emerge beside nation-states as new dominant actors, thus potential guarantors and infringers of fundamental rights”.
While the participation of private companies in defining the new rules of the digital realm seems inevitable, the question of how social media companies should act as moderators of free speech still stands.
As Michal Lavi, a researcher at the University of Haifa, points out in his research, the role social media companies play as moderators is often a consequence of the liability regime in which they operate. Digital technology, in fact, can both “empower individuals and promote important social objectives” and “create a setting for speech-related torts, harm, and abuse”. Because of this, tech companies are often held liable for the moderation of content that could potentially constitute a threat to their users.
The argument over the liability of online platforms is, indeed, a strong one. Yet the way they operate as moderators is not always as transparent as it should be. In stark contrast with their role as “democratic fora”, social media companies’ decision-making remains opaque and erratic (Langvardt, 2018). As Spandana Singh explains, “in part, this lack of transparency is due to the black box nature of such tools, which prevents comprehensive insight into algorithmic decision-making processes, even by their creators”. In the context of elections, however, given this lack of transparency and of accountability for potentially harmful decisions, “the moderation of elections by private platform companies can erode, rather than protect, our democracies” (Bradshaw et al., 2020).
Given the importance and critical nature of online discourse, it seems necessary to include other stakeholders in this framework. Drawing from Edoardo Celeste’s theory of digital constitutionalism, other entities could participate alongside state actors and platform companies.
An alternative model, presented by Article 19, a human rights organisation that defends freedom of expression, envisions a decentralised network of moderators within the digital realm. As the organisation explains on its website, “due to the high asymmetry of information currently in the market, regulators alone do not possess the necessary information to properly shape the unbundling requirement at the contractual and at the technical layer. Nonetheless, regulators should be the ones to lead this process, and to closely monitor the compliance of market operators”. Under this model, the decentralised network of moderators would include so-called “Social Media Councils”, a set of “open, participatory and accountable bodies made up of various actors working at the national level to safeguard freedom of expression online”.
Besides bringing more actors into the role of moderator, the decentralised model proposed by Article 19 would also constitute a more ethical solution by “unbundling” the different services of social media companies, which currently control both “the hosting of the social media platform and the moderation of content on the platform”. Under this arrangement, tech companies would not completely lose their ability to moderate content, but they would be obliged “to allow competitors to provide competing content moderation services on their platforms”.
By potentially creating a competitive market of content moderators through a decentralised network, Article 19 may have found a way to respond to the need for digital constitutionalism around the issues of free speech and censorship. This option, however, would require both state and private actors to rethink how to build a better digital world.