In 2019, media outlets worldwide published articles describing what was happening in Hong Kong, where people were protesting in response to the introduction of the Fugitive Offenders amendment bill by the Hong Kong government. One detail sparked a lot of attention: law enforcement authorities used artificial intelligence software that can match faces from video footage to police databases to track down protesters. Coincidentally, facial recognition technology and information on its deployment in urban spaces also popped up in the news in London, where the private owner of the newly redeveloped site located near King's Cross Station was found to be using a biometric surveillance system to track random pedestrians.
Face recognition is used to identify or verify the identity of an individual using their face. These systems use algorithms trained to pick out distinctive details about a person’s face, such as distance between the eyes or shape of the chin, which are then converted into mathematical representations and, finally, compared to data available in a face recognition database.
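To make the matching step above concrete, here is a minimal sketch in Python. It assumes faces have already been converted into numeric "embedding" vectors by some model; the names, vector values, and distance threshold below are purely illustrative, not taken from any real system.

```python
import math

# Hypothetical database of pre-computed face embeddings. In a real system,
# each vector is produced by a neural network that encodes facial features
# (distance between the eyes, shape of the chin, etc.) as numbers.
DATABASE = {
    "person_a": [0.12, 0.80, 0.33, 0.54],
    "person_b": [0.91, 0.05, 0.47, 0.22],
}

def euclidean_distance(a, b):
    """Distance between two embedding vectors: smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, database, threshold=0.6):
    """Return the closest identity in the database, or None if nothing is
    within the (illustrative) similarity threshold."""
    best_name, best_dist = None, float("inf")
    for name, embedding in database.items():
        dist = euclidean_distance(probe, embedding)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# A probe embedding very close to person_a's stored vector:
print(identify([0.10, 0.82, 0.30, 0.55], DATABASE))  # prints "person_a"
```

The threshold is where errors creep in: set it too loose and strangers get "matched" to database entries, which is exactly the misidentification problem discussed below.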
Scary, eh? Imagine you are walking to your favourite place to grab a coffee, and on the way your face is captured by a camera mounted on a building you pass. At that precise instant, your face is scanned, and a system tries to match it against your local police database to find out whether you are a criminal. You might not see any problem with this; after all, you didn't do anything wrong, right?
Well, multiple studies have found that the accuracy of these systems is relatively low (95% inaccurate!), and it decreases further for Black people. This is largely because the software is usually trained on Caucasian faces, which results in racialized individuals being systematically misidentified and mislabelled. The technology reflects and further builds on long-standing social divisions that are deeply intertwined with racism, sexism, homophobia, colonialism and other forms of structural oppression. Moreover, the full extent of the discrimination created by facial recognition systems is still largely unknown (The use of live facial recognition technology by law enforcement in public places, 2019).
Recently, more and more organisations have taken a stand against facial recognition, stressing how it violates people's human rights. Campaigns have been launched asking governments and local authorities to ban the use of such technology worldwide. We decided to list some we think are worth watching out for (and that you might want to sign):
In addition to these, the European Commission is due to publish a new "European approach for artificial intelligence", with further guidance on the use of facial recognition technologies, in April 2021. We hope it will call for a ban on facial recognition systems everywhere in Europe, following the example of cities such as San Francisco and Boston that have already taken a stand against mass surveillance and human rights violations.