EU data protection bodies call for ban on biometric surveillance

The EDPS (European Data Protection Supervisor) and the EDPB (European Data Protection Board) recently released a statement calling for a ban on the automated processing of biometric data.

This includes facial recognition, fingerprint scanning, retinal scans and voice recognition – technologies that are widely used across a variety of sectors.

However, the bodies’ chiefs, Andrea Jelinek and Wojciech Wiewiórowski, say that the use of biometric identification in public spaces “means the end of anonymity in those places”.

They add: “Applications such as live facial recognition interfere with fundamental rights and freedoms to such an extent that they may call into question the essence of these rights and freedoms.

“This calls for an immediate application of the precautionary approach. A general ban on the use of facial recognition in publicly accessible areas is the necessary starting point if we want to preserve our freedoms and create a human-centric legal framework for AI.”

There are already rules regulating the use of biometric data. Notably, the GDPR (General Data Protection Regulation) includes specific requirements regarding the use of such information, while the European Commission has also proposed an Artificial Intelligence Act.

That proposed framework, which was released in April 2021, takes a risk-based, market-led approach to regulating AI, but digital rights experts believe it doesn’t go far enough.

Sarah Chander, a senior policy advisor at European Digital Rights, said that the proposal contains a number of loopholes, including “threat exemptions” that allow law enforcement to use biometric data.

She also noted that the framework only addresses the use of real-time biometric identification, giving organisations the freedom to collect biometric data for ongoing processing activities.

The EDPB and EDPS agree that the Artificial Intelligence Act isn’t strict enough regarding the use of biometric data.

“The problem regarding the way to properly inform individuals about this processing is still unsolved, as well as the effective and timely exercise of the rights of individuals,” said Jelinek and Wiewiórowski.

“The same applies to its irreversible, severe effect on the population’s [reasonable] expectation of being anonymous in public spaces, resulting in a direct negative effect on the exercise of freedom of expression, of assembly, of association as well as freedom of movement.”

What does the GDPR say?

Although there are justifiable privacy concerns regarding the use of AI and biometric data, there are also benefits of using such data for both organisations and individuals.

For example, biometrics can strengthen a multi-factor authentication system and vastly reduce the chances of criminal hackers breaking into users’ accounts.
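To illustrate how a biometric factor hardens authentication, here is a minimal sketch of a two-factor check. The function names, the similarity-score model and the 0.95 threshold are all hypothetical assumptions for illustration – real biometric systems use tuned matching engines, not a single score comparison.

```python
import hashlib
import hmac

# Hypothetical threshold for a biometric similarity score in [0.0, 1.0].
# Real systems tune this to trade off false accepts against false rejects.
BIOMETRIC_THRESHOLD = 0.95

def verify_password(stored_hash: str, attempt: str) -> bool:
    """First factor: compare a hash of the attempt to the stored hash
    using a constant-time comparison."""
    attempt_hash = hashlib.sha256(attempt.encode()).hexdigest()
    return hmac.compare_digest(stored_hash, attempt_hash)

def verify_login(stored_hash: str, password: str, biometric_score: float) -> bool:
    """Multi-factor rule: access is granted only if BOTH the knowledge
    factor (password) and the biometric factor pass."""
    return verify_password(stored_hash, password) and biometric_score >= BIOMETRIC_THRESHOLD

stored = hashlib.sha256(b"correct horse").hexdigest()
print(verify_login(stored, "correct horse", 0.97))  # True: both factors pass
print(verify_login(stored, "correct horse", 0.40))  # False: biometric factor fails
```

The point of the second factor is visible in the last line: even a correct password is not enough on its own, which is why stolen credentials alone no longer compromise the account.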

There are also wider societal benefits of biometrics, with organisations able to use the information to create tailored services for customers.

For those reasons, you can understand why regulations such as the GDPR still permit the use of biometric data.

It does, however, emphasise the need for caution because biometric data used for identification purposes is special category data; sensitive information like this must be treated more carefully and afforded greater protections.

Before processing biometric data, organisations must document a lawful basis for processing the information, in addition to a separate condition for processing, and weigh up the benefits and risks associated with the processing activity.

They must also clearly explain to individuals why they are collecting this information, what it will be used for and how long it will be kept.

Only if an organisation can justify its use of biometric data is it legally permitted to process it.
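The documentation duties described above can be pictured as a simple record structure. This is an illustrative sketch only – the field names are hypothetical and are not prescribed by the GDPR, but they map onto the items an organisation must be able to evidence: a lawful basis, a separate special-category condition, the purpose and the retention period.

```python
from dataclasses import dataclass

@dataclass
class BiometricProcessingRecord:
    """Illustrative record of what an organisation should document
    before processing biometric data (hypothetical field names)."""
    purpose: str                     # why the data is being collected
    lawful_basis: str                # Article 6 basis, e.g. "consent"
    special_category_condition: str  # separate Article 9 condition
    retention_period_days: int       # how long the data will be kept
    risk_assessment_done: bool       # benefits weighed against risks

# Example entry for a hypothetical building-access system
record = BiometricProcessingRecord(
    purpose="Building access control",
    lawful_basis="consent",
    special_category_condition="explicit consent (Article 9(2)(a))",
    retention_period_days=365,
    risk_assessment_done=True,
)
```

Keeping records in a structured form like this makes it straightforward to answer the questions individuals are entitled to ask: why the data is collected, on what basis, and for how long it is retained.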

The Artificial Intelligence Act would expand upon that, using a risk-based approach that sorts AI systems into tiers – from minimal and limited risk, through high risk, up to prohibited "unacceptable risk" uses.

It would also prohibit the use of AI systems that:

  • Use subliminal techniques to manipulate a person’s behaviour in a manner that may cause psychological or physical harm;
  • Exploit the vulnerabilities of any group of people due to their age or a physical or mental disability, in a manner that may cause psychological or physical harm;
  • Enable governments to use general-purpose “social credit scoring”; and
  • Provide real-time remote biometric identification in publicly accessible spaces by law enforcement except in certain time-limited public safety scenarios.

It is worth noting that the proposed framework would be European legislation. UK-based organisations may well be affected by it, but it remains to be seen whether the UK government will replicate it – either in full or in part.

What about CCTV footage?

CCTV would only be covered by the Artificial Intelligence Act if the footage was used alongside facial recognition software.

However, the use of CCTV footage is still subject to certain rules. For example, you must comply with the Surveillance Camera Code of Practice 2013 as well as the GDPR.

After all, even without biometrics, people’s faces are still identifiable and as such the footage may meet the GDPR’s definition of personal data.

Organisations that want help understanding their compliance requirements should take a look at our CCTV Audit for Data Protection service.

Our team of experts will review your use of CCTV and relevant documentation, helping you identify your legal requirements as well as any data protection issues that you may have overlooked.


  • Luke Irwin

    Luke Irwin is a former writer for DQM GRC. He has a master's degree in Critical Theory and Cultural Studies, specialising in aesthetics and technology.