Proving your identity with your biology has quickly become part of daily life, driven by the exponential rise of artificial intelligence and cloud computing, with facial recognition technologies at the forefront of this trend.
Across the world, customers of Delta can use facial recognition to board at certain airports; Aibo, Sony’s robot dog, can interact differently with each person it encounters; and Alibaba and Marriott are testing automated check-in and personalised guest services in China.
In the UK, supermarkets can judge how old customers are at self-checkout machines when they buy age-restricted items. You can tell a technology has reached the mainstream when Taylor Swift’s team has deployed a smart camera to identify potential stalkers.
However, these examples also raise concerns, ranging from privacy to equal treatment. The main issues are a lack of transparency and the questionable reliability of algorithms, which can leave people poorly informed and produce biased, discriminatory results.
For example, in a test of facial recognition software conducted by the American Civil Liberties Union (ACLU), an Amazon program mistakenly identified 26 California lawmakers as criminals.
Meanwhile, the global facial recognition market shows no sign of slowing down – it is expected to reach $15 billion by 2026. A legal framework is therefore fundamental to ensure the technology is used in a way that adequately balances these concerns with the social and cultural differences among the regions that have adopted it.
More efforts needed for cross-border regulation
The General Data Protection Regulation (GDPR) sets out an ambitious, unified privacy approach for the EU, but regulatory practice still varies from country to country. Recently, a UK court found it was necessary for South Wales Police to analyse facial data to identify individuals known to the police at a large football match.
In the CEE region, the Czech data protection authority found it lawful for a construction company to use facial recognition to identify workers and protect a large building site. However, the Swedish Data Protection Authority (SDPA) was not so permissive: it fined a school board around £16,500 for using cameras in a classroom to automate the registration process and free up teacher time. The SDPA found that the relationship between the children and the school board was unequal; hence, the parents’ consent to the use of facial recognition had not been freely given.
The US is further behind – only three states (Illinois, Washington and Texas) have passed biometric legislation so far, and its practical enforcement is even more doubtful. A judge in Chicago dismissed a suit because the plaintiff had not suffered material injury when her photo was automatically uploaded to Google Photos and her facial features were scanned, without her consent, to create a unique face template.
Nevertheless, privacy concerns do exist: the ACLU criticised the Seattle City Council’s surveillance ordinance allowing the use of traffic cameras and licence-plate readers, and facial recognition has been banned in some US cities, like San Francisco, for local agencies such as the transport authority or law enforcement.
Can the industry self-regulate?
Banning facial recognition entirely is clearly not the answer, but meaningful and progressive rules are vital. Different jurisdictions may take a different tack, but the nature of the technology is global – therefore, national lawmakers and regulators must establish a borderless approach.
Unsurprisingly, technology companies are also advocating for reasonable regulation. Brad Smith, President of the Microsoft Corporation, has emphasised that privacy and freedom of expression rights heighten the responsibility of the tech companies that create these products, while calling for thoughtful government regulation.
Microsoft has adopted six self-regulatory principles to govern acceptable uses of the technology: fairness, transparency, accountability, non-discrimination, notice and consent to the use of facial recognition, and lawful surveillance. To strengthen reliability, IBM recently released one million facial images taken from a Flickr dataset, tagged with features including craniofacial measurements, facial symmetry, age and gender, to help train facial recognition software to identify faces more fairly and accurately.
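To see why demographic tags of this kind matter, consider a minimal sketch of how a dataset’s balance might be checked before training. The records, field names and groups below are entirely hypothetical and only illustrate the general idea of auditing a labelled dataset for representation.

```python
from collections import Counter

# Hypothetical annotations: each record carries demographic tags,
# loosely modelled on the kind of labels (age, gender) included in
# labelled face datasets. Illustrative data only.
dataset = [
    {"id": 1, "age_group": "18-30", "gender": "female"},
    {"id": 2, "age_group": "18-30", "gender": "male"},
    {"id": 3, "age_group": "31-50", "gender": "female"},
    {"id": 4, "age_group": "51+",   "gender": "male"},
    {"id": 5, "age_group": "18-30", "gender": "female"},
]

def group_shares(records, key):
    """Return each group's share of the dataset for a given tag."""
    counts = Counter(r[key] for r in records)
    total = len(records)
    return {group: n / total for group, n in counts.items()}

# A heavily skewed distribution here would suggest the training set
# under-represents some groups, a known source of biased models.
print(group_shares(dataset, "gender"))
```

A real audit would of course run over millions of records and many attributes at once, but the principle is the same: you cannot correct an imbalance you have not measured.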
The reality of regulating facial recognition
As facial recognition grows into an important public policy issue, it will not only require active engagement by governments but also input from academics, tech companies and human rights groups internationally.
However, to ensure that the technology works for the greater good – such as diagnosing rare genetic diseases using facial analysis – there must be a set of strict governance standards. Minimum performance levels for accuracy, together with human oversight, must prevent unlawful profiling and discrimination.
People should have the right to know what images have been collected and for what purpose, accompanied by effective remedies if they believe they have been misidentified or mistreated. Facial recognition in the public sphere should only be used where no less-intrusive means are available.
When California introduced a bill barring police departments from using facial recognition software on body cameras, police groups expressed fears that it would prevent them from identifying potential suspects and missing persons. To avoid similar concerns, laws should also address governmental uses of the technology.
But regulation alone will not solve the issue – independent testing of facial recognition services for accuracy and unfair bias should become commonplace. For example, such tests may show how an algorithm performs on a subject’s photo taken ten years earlier. The US National Institute of Standards and Technology (NIST) has been conducting a variety of facial-recognition system assessments since 2000, which could serve as valuable benchmarks.
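The kind of disaggregated error analysis such independent tests involve can be sketched in a few lines. The trial records, group labels and field names below are invented for illustration; the point is simply that false-match rates must be computed per demographic group, not just in aggregate, for bias to become visible.

```python
from collections import defaultdict

# Hypothetical evaluation records: each trial notes whether the system
# declared a match and whether a match was actually correct.
trials = [
    {"group": "A", "predicted_match": True,  "actual_match": True},
    {"group": "A", "predicted_match": True,  "actual_match": False},
    {"group": "A", "predicted_match": False, "actual_match": False},
    {"group": "B", "predicted_match": True,  "actual_match": False},
    {"group": "B", "predicted_match": True,  "actual_match": False},
    {"group": "B", "predicted_match": False, "actual_match": False},
]

def false_match_rate_by_group(records):
    """False matches divided by non-matching trials, per group."""
    false_matches = defaultdict(int)
    non_matches = defaultdict(int)
    for r in records:
        if not r["actual_match"]:
            non_matches[r["group"]] += 1
            if r["predicted_match"]:
                false_matches[r["group"]] += 1
    return {g: false_matches[g] / non_matches[g] for g in non_matches}

# A large gap between groups' rates is exactly the kind of disparity
# that aggregate accuracy figures can hide.
print(false_match_rate_by_group(trials))
```

An aggregate accuracy number over these same trials would conceal that one group suffers false matches far more often than the other, which is precisely why benchmark programmes break results down by cohort.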
The technology will not wait
It is not unusual for regulators to try to get ahead of technology – the next couple of years will show whether they succeed in the case of facial recognition. The French government has already announced plans to launch Europe’s first nationwide facial-recognition programme, providing secure digital identities for all citizens.
In the UK, the Information Commissioner’s Office is looking into the use of facial recognition technology for public safety at the 67-acre, 50-building King’s Cross development. More recently, Amazon announced that its facial recognition software, Rekognition, can detect a person’s fear. Regulation needs to be effective, because deployment of the technology is continuing, and there is no way to opt out of its use.
By Márton Domokos, Senior Counsel at law firm CMS, Budapest.
About the author
Márton Domokos is a senior counsel within the commercial team at global law firm CMS Budapest, focusing on data protection, intellectual property, commercial transactions and the TMT sector. He is also the Co-ordinator of the CEE Data Protection Practice (CMNO).
Founded in 1999, CMS is a full-service top 10 international law firm. With 70+ offices in 40+ countries across the world, employing over 4,800 lawyers, CMS has longstanding expertise both at advising in its local jurisdictions and across borders. CMS acts for a large number of Fortune 500 companies and the FT European 500 and for the majority of the DAX 30.