Find me: Clearview, facial recognition and the acceptable use of AI

In 2005, an alternate reality game called Perplex City was launched. It included 256 puzzle cards that players around the world competed to solve, submitting their solutions on the game’s website.

The puzzles were solved fairly quickly, with the exception of two cards. One of them, number 256, was called “Billion to One”. The card featured a photo of a man and the phrase “find me” in Japanese, and the game’s hint line provided only the man’s first name: Satoshi. No other information was given.

At the time, the technology available to the average person made it effectively impossible to identify Satoshi from a single photograph.

Fast forward to December 2020, when a man in Nagano, Japan, received an email asking if he was the Satoshi from Perplex City.

The man had almost forgotten that he had allowed a friend to use his photo for the game, and was pleasantly surprised to discover that people around the world had been searching for him for 14 years. Satoshi had been found, and the game came to an end.

The puzzle was solved using facial recognition software that had not been available when the card was released in 2006, which allowed a player to match the image from the card with other publicly available images of Satoshi.

The game is an interesting case study in how far technology and access to it have come in such a short period.

However, the potential for this technology to be abused is worrying. Satoshi had consented to his image being used, but where consent is not sought, individuals could be harmed if the service provider acts irresponsibly.

Facial recognition and AI

Six months before the Satoshi puzzle was solved, the UK’s ICO (Information Commissioner’s Office) and the Australian Information Commissioner launched a joint investigation into Clearview AI, a provider of facial recognition software.

The investigation focused on inadequate procedures for obtaining consent from data subjects. It concluded on 3 November 2021, with the ICO noting:

Clearview’s facial recognition app allows users to upload a photo of an individual’s face and match it to photos of that person’s face collected from the internet. It then links to where the photos appeared. The system is reported to include a database of more than three billion images that Clearview claims to have taken or ‘scraped’ from various social media platforms and other websites.
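Clearview’s implementation is proprietary, but the underlying technique – comparing numerical ‘embeddings’ of faces – is widely accessible. The sketch below illustrates the general idea using the open-source face_recognition library; the file names and the match threshold are illustrative assumptions, not details of Clearview’s system.

```python
# Illustrative only: Clearview's system is proprietary. This sketch shows
# the general technique - comparing face embeddings - using the open-source
# face_recognition library. File names and tolerance are assumptions.
import face_recognition

# Encode the face on the puzzle card as a 128-dimension embedding
# (assumes exactly one face is detected in the image).
card_image = face_recognition.load_image_file("puzzle_card.jpg")
card_encoding = face_recognition.face_encodings(card_image)[0]

# Encode any faces found in a candidate photo collected from the web
candidate_image = face_recognition.load_image_file("candidate_photo.jpg")
candidate_encodings = face_recognition.face_encodings(candidate_image)

# Compare embeddings: a smaller distance means a likelier match.
# 0.6 is the library's default tolerance, not a universal threshold.
for encoding in candidate_encodings:
    distance = face_recognition.face_distance([card_encoding], encoding)[0]
    if distance < 0.6:
        print(f"Possible match (distance {distance:.2f})")
```

Run against a large enough database of scraped images, a loop like this is all it takes to turn one photo into an identity – which is precisely why regulators took an interest.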

The Australian Information Commissioner determined that Clearview AI had failed to comply with the country’s privacy legislation, noting the organisation did not get individuals’ consent to collect their sensitive information – a requirement that applies even if that information is collected from a publicly available source.

The investigation underlines the importance of supervisory authorities having the resources and powers to identify emerging trends and manage them appropriately. It also highlights the imperative for service providers to undertake due diligence to reduce harm and distress.

Privacy by design

At the core of GDPR (General Data Protection Regulation) compliance is the idea that privacy should be considered in all processing activities from the outset. This involves screening new processing activities to consider whether a DPIA (data protection impact assessment) is required, conducting the assessment and adapting activities to remove unnecessary risks and minimise harm to data subjects.
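To make the screening step concrete, here is a minimal sketch based on the triggers in Article 35(3) of the GDPR. Real screening questionnaires (such as the ICO’s) are considerably longer; the criteria names and the example values below are simplified assumptions.

```python
# A minimal DPIA screening sketch, based on the triggers in GDPR
# Article 35(3). Real checklists are longer; names here are simplified.
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    name: str
    systematic_profiling_with_legal_effects: bool
    large_scale_special_category_data: bool
    large_scale_public_monitoring: bool

def dpia_required(activity: ProcessingActivity) -> bool:
    """Return True if any Article 35(3) trigger applies."""
    return any([
        activity.systematic_profiling_with_legal_effects,
        activity.large_scale_special_category_data,
        activity.large_scale_public_monitoring,
    ])

# A facial recognition service built on scraped images would trip at
# least two triggers: biometric (special category) data at scale, and
# large-scale monitoring of publicly accessible sources.
clearview_like = ProcessingActivity(
    name="Facial recognition search over scraped images",
    systematic_profiling_with_legal_effects=False,
    large_scale_special_category_data=True,
    large_scale_public_monitoring=True,
)
print(dpia_required(clearview_like))  # True
```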

An internal review of Clearview AI’s data processing activities should have raised red flags early on in the service’s development.

As the service did not check whether the person in the image had consented to having a search run on them, there were clear risks that it could be used to search for somebody who did not want to be found. The organisation could have addressed this by introducing a means of verifying consent.

For example, mobile dating service Tinder asks users to verify their identity by submitting a photo of themselves in a specific pose.

Although this is not a necessary step in creating a profile, doing so adds a tick to the profile, confirming that the individual is who they claim to be.

In the case of Clearview AI, asking for such an image alongside a consent form would provide additional assurance that the subject of the photo knows a search is being conducted and is happy for it to happen.
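As a sketch of how such a consent gate might work in practice – the identifiers and storage here are hypothetical – a search would only proceed once a verified consent record exists for the subject:

```python
# A minimal sketch of the consent gate suggested above: refuse to run a
# search unless verified consent is on record. Storage and identifiers
# are hypothetical; a real system would need auditable records.
consent_records: dict[str, bool] = {}  # subject_id -> consent verified

def record_consent(subject_id: str, verified_photo_received: bool) -> None:
    """Store consent only once the verification photo has been checked."""
    consent_records[subject_id] = verified_photo_received

def run_search(subject_id: str) -> str:
    if not consent_records.get(subject_id, False):
        return "Search refused: no verified consent on record."
    return "Search permitted."

record_consent("satoshi", verified_photo_received=True)
print(run_search("satoshi"))  # Search permitted.
print(run_search("unknown"))  # Search refused: no verified consent on record.
```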

While the Clearview AI case is comparatively extreme, it does highlight the need for data protection compliance to be considered at all stages in a project.

Gap analysis and compliance

A good first step for organisations looking to improve data protection compliance is to undertake a gap analysis. This involves reviewing not only data processing activities but also the governance and policy structures underpinning these activities to identify where improvements can be made.

This is where DQM GRC can help. Our GDPR Gap Analysis service provides a comprehensive overview of your organisation’s data protection practices, and outlines specific actions you can take to improve your compliance posture.

We can also help create screening procedures for DPIAs and provide templates for conducting the assessments themselves, giving assurance that data protection compliance is considered from the outset of any new processing activity.

If the cases of Satoshi and Clearview AI show us anything, it is that technology develops quickly. Reviewing your GDPR compliance can help you avoid data breaches and identify where new technology can improve efficiency and service quality.
