FACEBOOK OVERSIGHT BOARD

How 'Pivotal' a Moment in Global Governance?

The Facebook Oversight Board has been introduced by Facebook as a novel experiment in global governance, posited as having the potential to conduct independent and external assessment of controversial cases whose determination is likely to have far-reaching implications for human rights. This article attempts to ascertain whether the Facebook Oversight Board presents true promise, or whether it is a smokescreen designed to gain greater legitimacy with minimal changes to a corporate structure undergirded by certain vested interests.

By Sanskriti Sanghi

August 30, 2020

The retelling of Facebook's history, if analogized to the world of sports, would come across much like a chess player practicing moves against herself. With every new frontier of the digital space scaled by Facebook comes a setback for its legitimacy. There is, however, a critical difference between Facebook and the game of chess – each ‘move’ by Facebook has far-reaching implications for a global society. Savielly Tartakower's words about chess, “The blunders are all there on the board, waiting to be made”, ring profoundly true of Facebook's past attempts to regulate digital speech and of its moves to build a novel structure for global governance.



Given the global democratic culture created by digital speech and the private online platforms on which these interactions take place, the participation and key role of actors such as Facebook in these tasks is unsurprising. However, Facebook's chequered past and the threat to a truly democratic culture inherent in its every attempt at regulation and governance caution us against moves which merely bestow the appearance of legitimacy, and urge us to explore the extent to which they offer substantive value. In pursuit of this inquiry, this article attempts to ascertain whether the Facebook Oversight Board (“FOB” or “Board” hereinafter) presents true promise, or whether it is a smokescreen designed to gain greater legitimacy with minimal changes to a corporate structure undergirded by certain vested interests.





Locating the impetus for the creation of the Board within Facebook’s past



Facebook is no stranger to voices which rally for private online platforms such as itself to do better: to be more accountable to their users and follow due process, to be more transparent in their decision-making regarding content, to correct the ‘systemic entrenchment of algorithmic bias’, and so on. Facebook has been criticized time and again on each of these counts (and more), with specific instances highlighting its role in the spread of misinformation, the mishandling of data, unequal access to participation, and biased algorithmic decision-making. With each new clarion call for change, such prior practices have raised grave public concern and drawn ‘unprecedented public scrutiny’, so that Facebook now suffers from a ‘crisis of legitimacy’. An introduction to this crisis is necessary for three reasons, each of which helps us locate the impetus or motivations for the creation of the Board within Facebook's chequered past.


First, it helps us juxtapose the scope and powers of the Board with the lofty goal projected by Facebook, a contrast revelatory of the desire to gain legitimacy without compromising Facebook's corporate structure and its vested interests. It also aids an understanding of why that goal – deciding on “what should be acceptable speech in a community that reflects the social norms and values of people all around the world” – is in itself impossible to attain, as “a universal community with a universal set of norms underlying those standards does not exist at the global level”.


Second, it allows the argument that the impetus for the creation of the Board lies more in that which Facebook seeks to avoid than in the goal it projects as the rationale for the Board's introduction. ‘That which Facebook seeks to avoid’ is the threat of coercive public regulation, to which Facebook prefers self-regulation. For such self-regulation to be accepted, however, the garb of a loftier goal is essential; without it, Facebook could hardly address the crisis of legitimacy or instil public trust in its regulation. The Board is the structure introduced by Facebook in pursuit of its preference for self-regulation – comprised of experts with innumerable skills but, as the Board begins making its determinations, scarce power to collectively check Facebook or balance its power.


Third, it supports the argument that the narrative of the Board's creation is shaped as much by that which is said or revealed as by that which is not. Illustrative of this is the absence of any mention of ‘due process’ as an endeavour of a Board which has been termed the ‘Facebook Supreme Court’. It is also evidenced by the absence of a ‘concrete commitment to the use of an international human rights framework’ in the Charter and bylaws of the FOB. Though the framework is referenced (for instance, in Article 1, Section 4.1 of the bylaws), I assert that it is more a rhetorical device than a true commitment, aimed at garnering public trust, quelling civil society outrage, and protecting Facebook's vested interest in its autonomy. This is argued due to the contrast between the broad and sweeping, or rather general, reference made to human rights in the bylaws and the prioritization in the Charter (Article 2, Section 2) of human rights norms protecting free speech specifically, with a limited mandate for the Board in the backdrop.




An exercise in peeling apart the layers



The FOB, due to its novel approach to the moderation of content, theoretically presents the potential for an independent and external assessment of controversial cases whose determination is likely to have far-reaching implications for human rights. Not only does it sound fairer that the aggrieved are given a second chance in the form of an appeal to an independent body, especially given the proactive use of AI to enforce policies, but it also sounds as though Facebook, upon self-reflection, has found itself willing to cede some of its authority in favour of the pre-eminence of the goal of protecting human rights.



However, in its present form, the Board has limited ‘ability to make meaningful and sustaining impact’ because, in its initial stages, it will only have the power to review decisions on the removal of content from the platform, leaving all other decisions for an undetermined date in the future. Even a single reading of the Charter reveals the limited mandate and scope of the Board's powers beyond its subject-matter jurisdiction, indicating that the Board's role is limited to “checking whether or not Facebook complies with its own rules, and this corroborates the idea that the Board is constructed to protect Facebook as such”.


Illustrative of this are Article 1, Section 4(2) (“Interpret Facebook’s Community Standards and other relevant policies (collectively referred to as “content policies”) in light of Facebook’s articulated values”), Article 2 (“The board will review and decide on content in accordance with Facebook’s content policies and values”) and Article 2, Section 2 (“Facebook has a set of values that guide its content policies and decisions. The board will review content enforcement decisions and determine whether they were consistent with Facebook’s content policies and values.”).



On the basis of the foregoing, it may reasonably be argued that the Charter, and the machinery made operational pursuant to it, guide the decisions of the Board in a manner favourable to (and even commercially convenient for) Facebook, and continuously re-route them in that direction. Moreover, this is done in a manner which does not allow the members of the Board to base their decisions on a ‘wider consideration of interests, rights, experience and conflicts’, which is essential for a balancing exercise of rights. Facebook thus appears more interested in projecting a credible impression than in exercising true oversight, so that it can avoid being constrained by more stringent external regulatory measures.


It is also pertinent to peel apart and peer into the commitment to human rights promised by Facebook in its Charter and bylaws. The creation of the Board in itself accords with the request of human rights bodies that platforms such as Facebook provide the aggrieved with access to remedies for the decisions made by them (e.g. Principle 31 of the United Nations Guiding Principles on Business and Human Rights), and ‘indisputably worthy and sweeping values’ such as safety, privacy, dignity and voice are invoked for consideration in the Board's decisions. The problem, however, lies in the Board's mandate (as set out in the Introduction to the Charter), which is limited to the protection of free expression. The scope of the mandate has been recognized as too narrow by BSR, which was commissioned by Facebook to conduct an independent assessment of the Board (“all human rights—not only freedom of expression and personal safety and security—can be impacted by content decisions.”).


These decisions could impact multiple rights, the content of which cannot be defined by Facebook through its Community Standards, and which often need to be balanced against each other in a context-specific manner rather than by according pre-eminence to one among them. Such cherry-picking consequently reveals a “hierarchy of human rights for the board’s evaluation of content decisions, which is antithetical to an international human rights approach in principle”, and might in practice result in the prioritization of freedom of expression even in the face of severe harm. It also appears to be the use of rhetorical language to “shore up public trust and the company’s reputation globally, after many well-founded criticisms”. Facebook must also find a way for the Board to engage with limits on free speech as set by domestic laws, for a global ban on content which is locally prohibited might be non-compliant with international human rights law.




A novel experiment in global governance, but how pivotal? 



The narrative of this article has thus far been largely cautionary. It is nonetheless undeniable that the FOB is a novel experiment in governance, one which attempts to respond to the “unprecedented amount of control by a private platform over the global public sphere” – a context in which people are demanding true legitimacy from private platforms such as Facebook because of their role in shaping online democratic culture and, more generally, our social environments. Scholars have gone so far as to label private online platforms the ‘new governors’, who sit between the users and the State in a ‘triadic’ model, to drive home the idea that to think of them as capable of fitting into existing categories of businesses or corporations (despite their commercial interests) would be to miss the larger picture.


However, novel as this experiment may be, the creation of the Board shall only be ‘pivotal’ to the narrative of governance – in the sense of being a model to be emulated or built upon, pivotal as a ‘success’ rather than a ‘cautionary tale’ for others – if Facebook empowers the Board with a broader mandate and mainstreams human rights into its policies and practices, even at the cost of changes to its corporate structure. Not only must the Board have the authority to adjudicate disputes on key content decisions beyond the removal of content (e.g. algorithmic ranking decisions), relying on a host of considerations not limited to ‘Facebook’s content policies and values’; it must also have the ability to recommend revisions to the policies underlying its decisions and to highlight the weaknesses of the policy formulation process for the future.


Given the grandeur accorded by Facebook to the creation of a structure such as the Board, and the potential for transparent public reasoning theoretically inherent in judicial structures, the Board must necessarily be so empowered. Otherwise it remains hollow – but a smokescreen of legitimacy, and short-lived legitimacy at that, for the Board lacks a ‘reservoir of legitimacy’ from which to draw. Moreover, it would serve this experiment in governance better should Facebook acknowledge the limitations of the Board as a decision-maker, both in terms of the magnitude of cases it can effectively determine and the extent to which it can authoritatively ascertain global norms of speech and expression, and accept that, in order to build trust, the Board will be required to integrate with other forms of checks and balances as well. Attempts at regulation which are self-serving and devoid of true commitment to the global community are likely to replicate the threat to democratic culture, and may even worsen it by obscuring such effects. While the vague and unspecified future date at which Facebook intends to broaden the scope of the Board's functioning offers a thimble of hope, the growing calls for members of the Board to ‘speak up or step down’ urge each of us to view the creation of the Board not in isolation, but in continuation of the trajectory traced by Facebook since its inception.

Sanskriti Sanghi is an alumna of the University of Cambridge, where she read for her Master's in Law (LL.M., International Law). She graduated from Cambridge in 2020 with First Class (H) and as a recipient of the Jennings Prize (Wolfson College). Prior to pursuing her graduate studies at Cambridge, Sanskriti was an undergraduate student at the Gujarat National Law University (GNLU), where she was an Aditya Birla Scholar, and from which she graduated in 2019 as a recipient of 8 gold medals. During her time at these institutions, Sanskriti worked as a research member of the Cambridge Pro Bono Project on ‘Action on Armed Violence’, as a Research Assistant for a chapter on children’s rights for a book on ‘The Right to Life in International Law’, and as the Convenor of the research centre for Law and Society at GNLU. Presently, she is a lecturer at the Jindal Global Law School (JGLS). At our Institute, she co-leads the cycle on Digital Constitutionalism.
