Abstract

Facial recognition technology is being introduced without legal boundaries or serious consideration of its invasion of privacy.
“Technology is neither good nor bad; nor is it neutral,” the US historian Melvin Kranzberg once wrote, and the technologies being developed now and used by governments around the world could mitigate the worst effects of the virus. But they could also create new forms of surveillance and control of society. Some may even do both. It is imperative to observe, assess and critique how governments are developing, procuring and deploying these new technologies.
One technology of growing interest to governments during the crisis is facial recognition. In Russia, thousands of security cameras across Moscow can identify individuals who breach the rules around self-isolation, even if they wear facemasks. In the USA, Clearview AI, a controversial facial recognition technology company, is in talks with at least three states and the federal government to develop contact-tracing services linked to existing surveillance cameras. And the UK government is working with Onfido, a British company, to help develop a system of “immunity passports” which would confirm fitness to return to work in part through facial recognition-enabled cameras.
Even before the crisis, facial recognition technology had long been attractive to governments – both authoritarian and democratic – for law enforcement purposes. Since its inception, concerns have been raised over its significant intrusion into people’s right to privacy, as well as its discriminatory application and bias. However, Covid-19 has given further impetus to its development and implementation, with many tech companies spying commercial opportunities (and seldom considering broader societal impacts).
The concerns over the potential impact of these new uses of facial recognition technology on human rights are obvious. In Russia, a country where the authorities regularly crack down on political protesters, opposition politicians and journalists, the opportunity to use surveillance cameras fitted with facial recognition technology for more nefarious purposes once the pandemic ebbs is unlikely to go to waste. In the USA, Clearview has come under intense scrutiny and criticism for its secrecy and disregard for individual privacy. The technology it develops – which is sold to both the public and the private sectors – works by scraping millions of social media pages and websites for images and videos of unsuspecting individuals.
Concerns over risks to human rights are frequently met with derision, often on the basis that the technology is simply doing what law enforcement officials would do anyway, but more efficiently. Responding to concerns over Clearview’s involvement in the Covid-19 response, its CEO, Hoan Ton-That, told NBC News that “[a] lot of retail spaces and gyms, they already have cameras. And there is the expectation that you’re in a public area, so there’s not necessarily an expectation of privacy”.
This type of response demonstrates a failure to understand the nature of privacy (and the right to privacy). It is not a right (or even an expectation) that disappears the moment a person leaves home. True, the level of privacy one enjoys in one’s own home cannot be matched in more public spaces, but the idea that constant surveillance and monitoring does not interfere with the right to privacy is inconsistent with the understanding that one’s private space is not physically limited to a particular building.
Dave Maass, senior investigative researcher at the Electronic Frontier Foundation, a US-based digital rights organisation, told Index: “I worry that facial recognition technology will ultimately follow us all the time and that, eventually, you could just find out everywhere a person had ever gone: whether they had gone to a protest, where a journalist had gone and who their sources are, whether a person had gone to a gay bar, a mosque or somewhere that dispenses medical marijuana. It’s the equivalent of a police officer following you everywhere.”
It is still not clear if facial recognition technology is even effective. Maass said that policy-makers and law enforcement officials were often won over by marketing campaigns around new technology. “I’ve seen the kind of fanfare that these tech companies roll out. They have huge parties, they give out free items, and this has a huge effect on government officials. So they’re being dazzled, but without really being aware of the efficacy of these technologies.”
Covid-19 has been seen by some of these companies as a marketing opportunity. “If these companies have a product that they can imagine in some way might be helpful, or can make the argument, they will,” said Maass. Law enforcement officials, in particular, are often attracted to the technology. “They want to collect everything, as much information as possible. For them, there’s never enough. They don’t think about the consequences.”
Even if specific uses of facial recognition technology were warranted, it is almost inevitable that those uses would expand over time. History is full of examples of “temporary” but invasive measures being introduced in response to a crisis and then becoming permanent. This trend has not gone unnoticed. US senator Ed Markey told NBC News: “If this company [Clearview] becomes involved in our nation’s response to the coronavirus pandemic, its invasive technology will become normalised, and that could spell the end of our ability to move anonymously and freely in public.”
Markey is one of a growing number of policy-makers around the world concerned about the rapid increase in the use of facial recognition in public spaces and the lack of any effective regulatory framework governing its use. In early 2020, the US state of Washington became the first state-level jurisdiction in the world to pass legislation regulating its use by government agencies. The legislation contains a number of provisions which will help to ensure that privacy and other human rights are protected, such as the requirement for human rights impact assessments to be undertaken when the technology is developed, and for court orders to be obtained before it can be used for surveillance purposes. The law does, however, have its shortcomings, particularly the fact that it applies only to government use of the technology, and not to the private sector.
Inspired by Washington’s example, policy-makers in other jurisdictions are now considering similar measures. The European Commission has announced a consultation on the safeguards necessary to mitigate the risks of facial recognition. In the UK, the Equality and Human Rights Commission has called for a suspension of the use of the technology by the police until the country has a sufficient legal framework in place to avoid human rights abuses.
But even this increased attention may not be enough. Maass said: “We need to think beyond facial recognition technology to other uses of biometrics, such as body analysis, or even analysis based on the clothes that people wear. There will always be a cat-and-mouse chase as technology is regulated, and law enforcement and tech companies develop new technologies in response. Facial recognition technology isn’t the end of the story.”
