
AI & FAIRNESS

Facial Recognition & Policing

Law enforcement officials around the world are using Facial Recognition Technology (FRT) to identify suspects, conduct arrests, and determine guilt through system matches. It is also used for border control, identity verification, and monitoring the safety of people in public spaces. The use of FRT by police and other law enforcement officials, however, has raised serious questions about the fairness and legitimacy of this tool.

By Daniel Rodríguez Maffioli & Alice Bryk Silveira

August 27, 2021

One of the major concerns about this AI-based technology is that it is inaccurate and induces racial profiling. As this article will show, that was the case for Robert Williams, a 43-year-old African-American father in Detroit who was wrongfully arrested based on a misidentification by the police, which relied solely on FRT evidence.


Mapping of subjects and purposes


In recent years, law enforcement agencies worldwide have been using Facial Recognition Technology for criminal investigation and policing purposes. For instance, according to the Department of Justice (DOJ), in the United States this technology has been used in the criminal justice system to help generate suspect leads and to identify victims. The Federal Bureau of Investigation (FBI) uses FRT to support state and local law enforcement investigations through programs such as the Next Generation Identification–Interstate Photo System (NGI-IPS) and the Facial Analysis, Comparison, and Evaluation (FACE) program. And although the FBI claims that those systems are never used as the sole basis for an arrest, there remain reservations about the veracity of those claims.


In the United Kingdom, the Metropolitan Police in London also employs live facial recognition cameras. However, in 2020, the use of this technology by the South Wales Police was ruled unlawful by the Court of Appeal of England and Wales. The case was brought by the civil rights group Liberty and the affected citizen, Ed Bridges, who argued that the use of automatic facial recognition (AFR) by the South Wales Police violated Bridges’ right to privacy, enshrined in Article 8 of the European Convention on Human Rights. The court concluded that there was indeed a violation of the right to privacy, since “too broad a discretion” was left to police officers in applying the technology and the force failed to properly investigate whether the software presented any race or gender biases.


Furthermore, a company named Clearview AI has been in the news for the past two years for creating a massive database comprising millions of faces scraped from the web. The company scrapes photographs voluntarily uploaded by people on sites such as Google, Facebook or Instagram and sells law enforcement an AI-based "face finder" aimed at identifying criminal suspects and child abusers. Some authorities, such as Canada’s privacy commissioner, have stated that what Clearview AI does is illegal, since the people who uploaded their faces to the web did not consent to the processing of their biometric data for the purposes for which the company is using it.


The most frightening cases of facial recognition, however, come from the East. China has become the most outrageous example of a surveillance state in which privacy and civil liberties are nothing more than a pipe dream. The country has developed a national surveillance system that exploits the capabilities of facial recognition and artificial intelligence to identify and monitor the movements of its citizens, in both the real and virtual worlds, and to rate them in a social scoring system according to their behaviour.


Threats and challenges of FRT in policing 


Facial Recognition Technology can produce an inaccurate match, or false positive, which occurs when the system wrongly associates images of two different people. These inaccurate matches can result in incorrect investigative leads, wrongful accusations and arrests. The wrongful arrest of Robert Williams draws attention to the abusive use of FRT by law enforcement agencies.
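To make the mechanics of a false positive concrete, here is a minimal, purely illustrative sketch of how face-matching systems typically work: each face image is reduced to an embedding vector, and a "match" is declared whenever the similarity between two embeddings crosses a fixed threshold. The vectors, names, and threshold below are hypothetical, not taken from any real FRT system; the point is that two different people whose embeddings happen to be close will trigger a spurious match.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two face-embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(probe, candidate, threshold=0.8):
    # A match is declared whenever similarity exceeds the threshold --
    # so two *different* people with similar embeddings produce a
    # false positive, the error at the heart of the Williams case.
    return cosine_similarity(probe, candidate) >= threshold

# Toy embeddings for two different people that are nonetheless close.
suspect_photo = np.array([0.9, 0.1, 0.4])
bystander_photo = np.array([0.85, 0.15, 0.42])

print(is_match(suspect_photo, bystander_photo))  # True: a false positive
```

Because the threshold trades false positives against false negatives, any single setting will make some errors; the legal question is what happens when those errors become arrests.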


Robert Williams was falsely identified by the Michigan State Police department’s facial recognition software as a shoplifting suspect. He was arrested in front of his wife and two little daughters (ages 2 and 5), interrogated by detectives, and held in custody for 30 hours before his release. His case illustrates that, when these systems are trained on insufficient data that does not represent the population's diversity, they risk discriminating against minorities.
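The discriminatory effect of unrepresentative training data is usually exposed by auditing error rates per demographic group. The sketch below, with entirely hypothetical audit data and group labels, shows the kind of computation such an audit involves: counting how often the system declares a match for pairs of images that actually show different people, broken down by group.

```python
from collections import defaultdict

def false_match_rate(results):
    """results: list of (group, predicted_match, actually_same_person).
    Returns the false-positive rate per demographic group."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in results:
        if not actual:              # only pairs of *different* people
            totals[group] += 1
            if predicted:           # system wrongly declared a match
                errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit: group "B" is under-represented in training data
# and suffers four times the false-match rate of group "A".
audit = [("A", False, False)] * 95 + [("A", True, False)] * 5 \
      + [("B", False, False)] * 80 + [("B", True, False)] * 20

print(false_match_rate(audit))  # {'A': 0.05, 'B': 0.2}
```

Independent audits of real systems (such as NIST's vendor tests) report exactly this kind of per-group disparity, which is why representativeness of training data matters legally as well as technically.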

But the use of FRT for policing, especially live biometric identification, is troubling for many other reasons too.


Constant surveillance has an oppressive and invasive effect, as human beings do not act the same when they are being watched. In addition, it turns any person freely moving in a public space into a potential suspect of a crime, which reverses the elementary presumption of innocence and erodes civil liberties. Even if we all agreed to allow FRT in policing for the sake of public safety, the risk of false negatives (a criminal who is not correctly identified) is significant, so the cons of using this technology continue to outweigh the supposed benefits.


This technology also compromises the fundamental rights to privacy and data protection, since biometric data qualifies as a “special category” of personal data under Article 9 of the General Data Protection Regulation. This means the data subjects should consent to the processing of their biometric data or, at least, the processing should be expressly authorized by law and provide “for appropriate safeguards for the fundamental rights and the interests of the data subject”. In short, the most elementary freedoms that derive from the human condition are being sacrificed in the name of a broad and very abstract notion of "public safety".


Regulatory Initiatives


Criticism from civil society and calls for banning or regulating the use of FRTs have not been long in coming. In the United States, many cities have opted to impose a moratorium on, or ban, the use of these technologies for criminal or policing purposes. Even tech companies such as Amazon, Microsoft and IBM have announced moratoriums on the sale of their own FRT systems to law enforcement, at least until federal regulation is enacted.


Some US states have already passed legislation seeking to address the above concerns. Massachusetts, for example, will have a law that requires police to obtain a judge’s permission before running a face recognition search and limits the authorities that can access the system.


But the most comprehensive regulatory initiative so far has been the "Proposal for a Regulation laying down harmonised rules on artificial intelligence", announced by the European Commission on April 21, 2021. In this risk-based proposal, the Commission rightly points out that the use of AI systems for “real-time” remote biometric identification for law enforcement purposes is particularly intrusive on the rights and freedoms of the persons concerned, “to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights”.


Therefore, the Proposal prohibits the use of those systems for the purpose of law enforcement, except in three narrowly defined situations: 

(i) the targeted search for specific potential victims of crime, including missing children; 

(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack; 

(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence punishable by three years’ imprisonment or more.


Each of those exceptions should be subject to a specific authorisation by a judicial authority or an independent administrative authority, granted after a prior assessment of the specifics and the evidence brought up in each case.

 

There are some important loopholes, though, that should be addressed. The scope of the prohibition is too limited, because it doesn’t extend to law enforcement’s use of facial recognition on databases of “pictures or video footage” previously gathered or scraped from the web, or to types of biometric identification other than “real-time”. Besides, the exceptions to the prohibition are too broad and might be subject to different interpretations across countries and authorities.


Closing remarks


The use of remote biometric identification and, especially, Facial Recognition Technologies for policing purposes is a controversial issue. The risks to fundamental rights, privacy and democratic values are indisputable. 


The moratoriums that some US states and countries have imposed are a proper measure as long as they are indeed temporary and allow for a calm debate between civil society and policy makers. The European Union's proposal, for its part, is a solid basis for starting the conversation. While there are some gaps and opportunities for improvement, it is important to remember that the Proposal includes multiple safeguards for the use of high-risk artificial intelligence applications. Therefore, even in the three restricted scenarios in which the use of FRT is allowed, those safeguards must be implemented. For example, the police will need to certify that the systems put in place are accurate and that the input data is sufficiently representative, in order to prevent racial or gender biases and discrimination.


As in everything, extreme approaches fail to identify the right questions. In this case, the right policy debate is not whether to allow or prohibit the use of facial recognition by law enforcement authorities, but what should be the democratic regulations that will allow FRTs to be deployed responsibly and in line with fundamental rights, liberty and human dignity. The regulatory initiatives being laid out globally around this topic are definitely a good start. 


Alice Bryk Silveira is a Brazilian lawyer, a member of the Brazilian Bar Association, and recently completed her LLM in International Law & Human Rights at Tel Aviv University. She holds a Bachelor's Degree in Law from Pontifícia Universidade Católica do Rio Grande do Sul (PUCRS). During law school, she interned at the Federal Public Defender’s Office of Brazil and the State Public Defender’s Office. She has also volunteered as a legal researcher at ACRI - The Association for Civil Rights in Israel, where she researched the privacy impacts of Facial Recognition Technology. Her research interests lie at the intersection of human rights and technology. She is fluent in Portuguese and English and can communicate in French and Spanish.


Daniel is a lawyer from Costa Rica, specializing in Public Law and Technology Regulation. He holds an LLM in Regulation and a Diploma in EU Law, both from the Universidad Carlos III de Madrid (UC3M). He also recently obtained a specialization in Data Protection and Algorithmic Regulation from the London School of Economics and Political Science (LSE). He currently practices as a Senior Associate in the Public Law and Regulation practice of a full-service law firm in Costa Rica. Daniel is a member of the Ibero-American Association of Regulation Studies (ASIER) and a lecturer on technology regulation matters. He also runs algoritlaw.com, a Latin American blog and podcast on AI regulation, privacy and internet governance. He is fluent in English and Spanish.
