Facial recognition is a growing phenomenon, but one that presents a potential minefield for companies that collect and use personal information.

With enhanced consumer rights and responsible data stewardship at the forefront of everyone’s minds, understanding how facial recognition fits with the GDPR principles of transparency, Privacy by Design and Privacy by Default has never been more important. 

Last year, there were a number of revelations about the use of facial recognition in public spaces by private companies, and the first GDPR fine for unlawful use of facial recognition technology was issued by the Swedish Data Protection Authority (DPA). The fact that the technology is being used in secret raises many legal, ethical and political questions.

At present, it would seem that the use of facial recognition technology is not closely regulated. Clear rules and legal processes are needed to ensure this type of data is regulated correctly. When considering the legal basis for the processing this technology involves, fundamental rights, privacy and ethical responsibilities all need to be taken into account. The accuracy of facial recognition in practice – and the dangers of inbuilt bias – also need to be considered.

While facial recognition technology is not illegal (think how willingly and automatically we allow airports, tech giants like Facebook and Google, and even our own smartphones to document and record our faces), the issue is consent.

The Metropolitan and South Wales police forces have been trialling facial recognition technologies since 2016, London’s King’s Cross uses facial recognition technology in its security cameras, and Canary Wharf may follow suit – raising questions about how one might opt out of this blanket surveillance. 

The problem is that there are no laws, policies or guidelines to govern the use of facial recognition technology. The processing of the data captured by facial recognition technology, however, is where GDPR principles can be applied. 

“Scanning people’s faces as they lawfully go about their daily lives, in order to identify them, is a potential threat to privacy that should concern us all. That is especially the case if it is done without people’s knowledge or understanding.”

The Information Commissioner’s Office (ICO), commenting on the use of live facial recognition technology in King’s Cross

The Metropolitan Police has imposed conditions on its own use of the technology to ensure compliance with the Human Rights Act 1998, in particular the right to respect for private and family life (Article 8).

Consumer rights 

Because there are no governmental laws, policies or guidelines on the use of facial recognition in public places, members of the public have little formal recourse. Appealing a decision to install or use facial recognition cameras relies on ethical guidelines; suing the police over being incorrectly identified as a suspect relies on the ICO and legal advice; and covering one’s face when approaching a facial recognition camera in a public space is subject to the police’s power to require that the covering be removed. In all cases, there is no legal protection for members of the public.

The ICO has said it will prioritise regulation of surveillance and facial recognition technology within the next year.

As new technology emerges and sensitive personal information is collected, consumers’ legal rights must be taken into account. The technology is indirectly regulated by the Data Protection Act 2018, in relation to the images that are gathered and how they are handled; the Protection of Freedoms Act 2012, which provides for a code of practice on surveillance camera systems; and the Human Rights Act 1998. The pace of technological change is perhaps faster than the pace of legal change in this area.

Public safety

The overall benefits to public safety must also be considered, and they must be sufficient to offset any consumer distrust of this technology. Every use of facial recognition needs to be assessed to ensure that it is proportionate, that fundamental rights have been considered and balanced, and that the processing satisfies a lawful basis under the Data Protection Act 2018.

If a processing operation could result in a ‘high risk’ to the rights and freedoms of individuals, then a Data Protection Impact Assessment (DPIA) must be conducted. The processing itself must also rest on one of several lawful bases. 

Examples of the types of conditions that would require a DPIA

  • Using new technologies
  • Tracking people’s location or behaviour
  • Systematically monitoring a publicly accessible place on a large scale
  • Processing personal data related to “racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation”
  • Using data processing to make automated decisions that could have legal (or similarly significant) effects
  • Processing children’s data
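
To make the screening step concrete, the following is a minimal, illustrative sketch in Python of how an organisation might encode the conditions above as a pre-deployment checklist. The names (such as ProcessingOperation and dpia_required) are hypothetical and do not come from any official ICO tool; this is a sketch of the idea, not a substitute for a full DPIA.

```python
from dataclasses import dataclass


@dataclass
class ProcessingOperation:
    """Hypothetical description of a planned processing operation."""
    uses_new_technology: bool = False
    tracks_location_or_behaviour: bool = False
    monitors_public_space_at_scale: bool = False
    processes_special_category_data: bool = False  # e.g. biometric data used to identify a person
    makes_automated_decisions_with_legal_effect: bool = False
    processes_childrens_data: bool = False


def dpia_required(op: ProcessingOperation) -> bool:
    """Return True if any high-risk condition from the checklist applies,
    meaning a DPIA should be carried out before the processing begins."""
    return any([
        op.uses_new_technology,
        op.tracks_location_or_behaviour,
        op.monitors_public_space_at_scale,
        op.processes_special_category_data,
        op.makes_automated_decisions_with_legal_effect,
        op.processes_childrens_data,
    ])


# Example: live facial recognition in a public space ticks several boxes.
facial_recognition = ProcessingOperation(
    uses_new_technology=True,
    monitors_public_space_at_scale=True,
    processes_special_category_data=True,
)
assert dpia_required(facial_recognition)
```

In practice the assessment is qualitative rather than a simple flag check, but the point stands: the screening questions must be asked, and answered, before the cameras are switched on.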

The most appropriate lawful basis

Determining which lawful basis is the most appropriate, and how facial data is collected, managed, recorded and protected, cannot be a retrospective exercise: it must be decided before facial recognition is used.

The GDPR requirement for data protection by design and by default ensures that appropriate technical and organisational measures are put in place to implement the data protection principles and safeguard individual rights. This means that organisations have to integrate data protection into their data processing activities and business practices, from the design stage right through the lifecycle. 

In a world where consumers are increasingly informed about the use of their data, the use of facial recognition technology currently poses many more questions than it answers. As the regulatory landscape is defined, organisations deploying new technologies must carefully consider and assess both the fundamental and human rights of individuals.

When it comes to establishing a legal basis for the processing this technology involves, every organisation will have to learn how to negotiate the practical and moral minefield it presents.

Andy Bridges is Data Quality and Governance Manager at REaD Group.