Facial Recognition and GDPR Compliance: What You Need to Know

Facial recognition software is one of the more innovative technologies shaping the way organisations operate.

It can be used for all manner of purposes, from strengthening security monitoring to improving user experience with tailored settings and novel authentication methods.

To use these technologies, organisations must obtain and store users’ biometric data, which comes with enhanced security risks.

It’s why the information is classed as “special category” personal data under the GDPR (General Data Protection Regulation) and is subject to stricter rules.

If your organisation is considering tools that use facial recognition, you must determine whether the benefits outweigh the additional compliance requirements.

This blog helps you decide whether that’s the case, explaining the purpose of facial recognition software, the GDPR’s compliance requirements and the steps you must take to adequately secure special categories of personal data.

What is facial recognition technology?

Facial recognition technology is any tool that can identify a specific person based on a video recording or photograph.

The information used to perform this is considered personal data under the GDPR. Even if the records don’t contain details that you would traditionally think of as personal data – such as names and contact details – they still fall within the Regulation’s scope.

That’s because the GDPR defines personal data as anything that can be used to identify a natural, living person. An image of someone’s face meets those criteria and is therefore subject to the Regulation’s requirements.

The benefits of facial recognition technology

There are many advantages to using facial recognition technology. For example, organisations often use it for:

  • Enhanced security

Facial recognition software is sometimes used alongside CCTV to help identify suspicious people caught on camera. The technology is most often used by law enforcement and intelligence services, but it can also be used by organisations in high-risk situations.

Organisations can, for instance, use it in restricted areas of a building to ensure that only authorised personnel gain entry. It can also be used in public places where there is a heightened threat of crime.

  • Targeted advertising

High-end retailers can use facial recognition technology to customise advertising strategies for shoppers. When the software identifies an individual, it can use information previously collected on them to display tailored adverts.

For example, if the organisation knows that the person supports a particular football team, or that they prefer a specific brand, it can select adverts related to those topics.

  • Enhanced social media

Social media sites encourage users to upload photographs of themselves and the places they visit. Some sites, such as Facebook, invite us to ‘tag’ our friends and family, so that users – and the site – can identify the people in the image.

Doing so helps these sites identify who we spend time with, giving them a detailed map of our social connections.

Crucially, the uploaded files contain not only the pictures themselves but also extensive metadata, including the location where the photograph was taken and the device that was used.

Users generally don’t realise just how much information social media sites gain from this practice, which is why it has raised significant concerns about data privacy.
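As an illustration, here is a minimal sketch, using the Pillow imaging library, of how much of this metadata can be read straight out of an uploaded photograph. The file name is hypothetical.

```python
from PIL import Image, ExifTags

def extract_photo_metadata(path: str) -> dict:
    """Read the EXIF tags embedded in a photo file.

    These typically include the camera or phone model, the time the
    picture was taken and, if location services were enabled, a
    pointer to GPS coordinates.
    """
    exif = Image.open(path).getexif()
    # Map numeric tag IDs to human-readable names where known.
    return {ExifTags.TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()}

# Hypothetical file name, for illustration only.
print(extract_photo_metadata("holiday_photo.jpg"))
```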

  • User authentication

Some smartphones contain a feature that enables users to unlock their phone with facial recognition software. The device keeps a stored representation of the individual’s face, which acts much like a password, and users hold their face to the camera to authenticate themselves.
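To make the mechanism concrete, here is a minimal sketch of how template-based matching might work. The vectors below are illustrative; a real system would derive them from camera images with a trained face-embedding model, and would tune the threshold to balance false accepts against false rejects.

```python
import numpy as np

def is_same_person(enrolled: np.ndarray, candidate: np.ndarray,
                   threshold: float = 0.9) -> bool:
    """Authenticate by comparing two face templates with cosine similarity.

    A score of 1.0 means the templates point in exactly the same
    direction; the threshold here is illustrative, not a recommended value.
    """
    similarity = np.dot(enrolled, candidate) / (
        np.linalg.norm(enrolled) * np.linalg.norm(candidate)
    )
    return similarity >= threshold

# Illustrative templates; real ones come from a face-embedding model.
enrolled_template = np.array([0.11, 0.80, 0.31, 0.52])
new_scan = np.array([0.12, 0.79, 0.28, 0.52])
print(is_same_person(enrolled_template, new_scan))  # True: a close match
```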

Why is facial recognition technology controversial?

As we have alluded to, facial recognition technology comes with significant security and privacy concerns. Organisations that can recognise people from their face alone can track their activities surreptitiously.

Although facial recognition software is a more convenient and resilient way to authenticate ourselves than a password, it poses a greater risk if that information is compromised or used in a way individuals would not like or expect.

Whereas users can reset their passwords if an account is hacked, there is no equivalent if criminals exploit an account protected by facial recognition software. We cannot change our faces, so there remains a permanent risk.

These issues are why biometric data such as facial scans is classed as special category personal data under the GDPR when it is processed to uniquely identify an individual. The classification covers types of information considered so sensitive that their misuse poses a high risk to people’s rights and freedoms.

Other types of special category data include information about an individual’s racial or ethnic origin, their political opinions, their religious or philosophical beliefs, their health status and their sexual orientation.

It’s also worth noting that further special category data – a person’s ethnic origin, for example – may be inferred from an image of their face.

The risks related to facial scans are so extreme that the EDPS (European Data Protection Supervisor) and the EDPB (European Data Protection Board) have called for a ban on the automated processing of such data.

The bodies’ chiefs, Andrea Jelinek and Wojciech Wiewiórowski, say that the use of biometric identification in public spaces “means the end of anonymity in those places”.

They add: “Applications such as live facial recognition interfere with fundamental rights and freedoms to such an extent that they may call into question the essence of these rights and freedoms.

“This calls for an immediate application of the precautionary approach. A general ban on the use of facial recognition in publicly accessible areas is the necessary starting point if we want to preserve our freedoms and create a human-centric legal framework for AI.”

Do you need consent to use facial recognition technology?

The rules for collecting and using special category data, such as face scans, are more complex than those for ordinary personal data.

As usual, organisations need to document a lawful basis for processing. The GDPR lists six options – including consent – but because the rules for obtaining and maintaining consent are so demanding, it is generally the least preferable option.

In most cases, organisations would be more likely to use:

  • Contractual obligations, if face scans are necessary to fulfil the terms of a contract.
  • Legitimate interests, if there is a genuine reason (including commercial benefit) to process face scans. This basis applies only if those interests are not outweighed by the negative effects on individuals’ rights and freedoms.
  • Public interest, if the face scans are necessary to perform a task in the public interest or in the exercise of official authority.

In addition to a lawful basis, organisations must document a separate condition for processing in respect of special category data under Article 9(2) of the GDPR. For the commercial use of facial recognition technology, organisations will almost certainly be required to obtain explicit consent.

Recital 32 states that consent must be given “by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement to the processing of personal data relating to him or her, such as by a written statement, including by electronic means, or an oral statement”.

An “affirmative act” means that the data subject must opt in; organisations cannot use pre-ticked boxes or other tools where consent is the default option.

Similarly, “freely given” means that the data subject must have a genuine choice. They cannot be disadvantaged if they refuse consent.
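
To show what these requirements might look like in practice, here is a minimal sketch of a consent record. All names are hypothetical; the essential points are that consent defaults to not given (the affirmative act), is tied to a specific purpose, and is timestamped so it can be evidenced and withdrawn.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                        # e.g. "facial recognition for building access"
    given: bool = False                 # opt-in: never defaults to True
    recorded_at: datetime | None = None

    def grant(self) -> None:
        """Record the data subject's clear affirmative act."""
        self.given = True
        self.recorded_at = datetime.now(timezone.utc)

    def withdraw(self) -> None:
        """Withdrawing consent must be as easy as giving it."""
        self.given = False
        self.recorded_at = datetime.now(timezone.utc)

# Consent is only ever True after an explicit action by the user.
record = ConsentRecord(user_id="u123",
                       purpose="facial recognition for building access")
record.grant()
```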

How to minimise risks when using facial recognition technology

Because of the risks associated with using facial recognition technology, organisations must take extra precautions when processing such information. Here are some of the ways they can do that:

  • DPIAs

DPIAs (data protection impact assessments) must be conducted prior to performing high-risk processing activities. The GDPR states that this applies to the use of new technologies where the processing is likely to result in a high risk to the rights and freedoms of individuals.

Facial recognition technology is likely to fit within these criteria, so DPIAs are essential if you are to remain GDPR-compliant.

  • Data minimisation

Data minimisation is one of the GDPR’s seven foundational principles. It requires that personal data be “adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed”.

The use of facial recognition software can easily conflict with this principle. There are many novel ways to use biometric data, but if the benefits a tool provides don’t justify the volume and sensitivity of the information it requires, organisations should question whether the proposed activity is necessary.
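
One practical safeguard, sketched below with a placeholder embedding step, is to keep only the derived face template that the matching task actually needs and to discard the raw photograph as soon as the template has been computed.

```python
import numpy as np

def compute_template(image: np.ndarray) -> np.ndarray:
    # Placeholder for a real face-embedding model.
    return image.mean(axis=(0, 1))

def enrol_user(image: np.ndarray) -> np.ndarray:
    """Return only what the matching task needs: the template, not the photo."""
    template = compute_template(image)
    # The raw photograph is never written to storage; only the
    # compact template leaves this function.
    return template

# Illustrative 4x4-pixel RGB "photo"; a real image would come from a camera.
photo = np.random.rand(4, 4, 3)
stored = enrol_user(photo)  # shape (3,): far less data than the image itself
```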

  • Data retention policies

Under the GDPR, organisations must dispose of personal information when it is no longer necessary to perform the task for which it was originally processed.

In some cases, organisations only need to hold on to face scans for a short time. For example, if the information is used to build up user profiles, the biometric data is only necessary for the first part of the process.

Once the profile has been built, the face scans are no longer necessary and should be disposed of. Organisations must therefore create a data retention policy that identifies the purpose for which personal information is collected and how long it may be stored.
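
A minimal sketch of enforcing such a policy, with illustrative purposes and retention periods, might look like the following: each stored scan carries its collection purpose, and a scheduled job deletes anything past the limit your policy sets for that purpose.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative limits only; real values come from your documented
# data retention policy.
RETENTION_LIMITS = {
    "profile_building": timedelta(days=7),
    "access_control": timedelta(days=365),
}

@dataclass
class StoredScan:
    user_id: str
    purpose: str
    collected_at: datetime

def purge_expired(scans: list[StoredScan]) -> list[StoredScan]:
    """Keep only scans still within their purpose's retention limit."""
    now = datetime.now(timezone.utc)
    return [scan for scan in scans
            if now - scan.collected_at <= RETENTION_LIMITS[scan.purpose]]
```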

Are you using facial recognition software for CCTV?

If your organisation monitors people using facial recognition software, we recommend DQM GRC’s CCTV Audit for Data Protection service.

With this consultancy package, our team of experts will ensure that your CCTV activities comply with the GDPR, the Data Protection Act and the Surveillance Camera Commissioner’s Code of Practice.

Using a combination of DPIA reviews, on-site audits, legal reviews and a gap analysis, our experts will determine whether you are meeting your regulatory compliance requirements.

They’ll also identify any areas of non-compliance and provide guidance on the steps you should take to shore up your practices.

Author

  • Luke Irwin

    Luke Irwin is a former writer for DQM GRC. He has a master's degree in Critical Theory and Cultural Studies, specialising in aesthetics and technology.
