Deploying AI Systems in Compliance with Data Protection Laws

Expert insight from privacy consultant Mark James

Privacy consultant Mark has a wealth of experience working with organisations to help them achieve GDPR (General Data Protection Regulation) compliance.

He’s worked as a DPO (data protection officer) for various organisations, including Longleat Safari Park and the Salvation Army. As DPO, he undertook gap analyses, supported with documentation, conducted DPIAs (data protection impact assessments), and more.

Previously, we talked to Mark about voice cloning – as he put it, “a technology that uses machine learning algorithms and neural networks to replicate specific human voices”. We asked him about the risks of voice cloning, and what organisations can do to protect themselves.

But what about AI more generally? What are the risks, and how can organisations best navigate them while taking advantage of AI’s benefits and opportunities?

We sat down for another brain-picking session with Mark.


In this interview

  • What the data protection risks of AI are
  • The GDPR restrictions around automated decision-making
  • Under what lawful bases you can process personal data via AI systems
  • Measures and technologies for addressing risks from AI data processing

As a privacy professional, what’s your take on AI?

Well, AI certainly has the potential to revolutionise lots of aspects of daily life, as well as how we do business.

That said, using AI also introduces challenges and raises concerns.

For one, AI systems can lack transparency in how they make decisions, which creates potential for bias and even discrimination. This can lead to unfair treatment of certain people.

Regulators and others have also questioned how AI systems respect fundamental rights and ethical principles – now and in the future. The UK and EU [among others] have good reason to be introducing AI regulations – two jurisdictions we’re watching in 2024.

We also have concerns around privacy. Large language models, by definition, require large data sets, which may well contain personal information. How are those data sets kept secure? And how will the data be processed securely and with privacy in mind?

Other types of AI are also a concern. For example, facial recognition technology requires sensitive data – biometric data – to be processed. The risks to the individual are inherently high with this type of data, so how will you mitigate them?

These are the types of questions organisations, regulators and other stakeholders must ask themselves.

In terms of the UK GDPR, that sounds like organisations must be mindful of data subjects’ rights related to automated decision-making

Yes, the ICO [Information Commissioner’s Office] has useful guidance on that. These rights were significantly strengthened under the EU GDPR [published in 2016] compared with its predecessor, the DPA 1998 [Data Protection Act].

But now, as data controllers increasingly adopt AI systems, subjects’ rights around automated decision-making, including profiling, are more relevant than ever.

That said, organisations are only restricted by those rights when the decision-making is fully automated [i.e. no human involvement at all] and has a “legal or similarly significant effect” on the data subject.

When you say “restricted”, what exactly does that mean?

Organisations can’t conduct such processing unless they have one of three lawful bases for doing so:

  1. You’re authorised by law to do that processing.
  2. You can’t enter or perform a contract between you and the data subject without that processing.
  3. The data subject has given their explicit consent for that processing, for the purpose[s] you’ve stated.

How can organisations obtain “explicit consent” from a data subject?

The subject must give their consent:

  • Through a clear, unambiguous and affirmative action;
  • Fully and clearly informed, and specific to each individual purpose for processing; and
  • Freely – in other words, with a genuine ability to refuse or withdraw consent without detriment.

Is processing data via AI systems similarly restricted?

That depends on whether the system is making fully automated decisions. If so, the same restrictions apply. But if not, according to the ICO guidance, you have access to the full range of lawful bases:

  • Consent
  • Vital interests
  • Public interest
  • Legal obligation
  • Legitimate interests
  • Contractual obligation

However, the specific lawful basis or bases available to you for any given processing activity will depend on the specifics of that activity – most notably its purpose.

Isn’t the point of AI to always make automated decisions?

Sure, once you’ve deployed the system. But AI development is the bigger concern for regulators, and it’s clearly distinct from deployment, with different purposes.

The ICO guidance gives facial recognition technology as an example. Such a system is always designed to recognise faces, but is it:

  • To prevent crime?
  • For access control?
  • To tag friends on Facebook?

Different purposes like these usually require different lawful bases. The risks are also different for these different types of deployment.

And when you’re still researching and developing the system, the risks are different again. That phase typically has more human involvement, too.


Download the Mastering Data Privacy in the Age of Artificial Intelligence white paper to understand more about using AI and its relationship with data privacy.


How can organisations best address those risks?

With appropriate data security measures, of course!

To decide what’s ‘appropriate’, take steps like:

  • Identifying all sensitive data in your AI systems, at every stage of the data’s life cycle;
  • Restricting access on a need-to-know basis;
  • Actively assessing and managing vulnerabilities;
  • Encrypting data at rest and in transit;
  • Pseudonymising and anonymising data [see the sketch below];
  • Regularly backing up data; and
  • For third parties, conducting due diligence and establishing clear contractual agreements on data use, security and privacy.

Also, in virtually every case when you’re using AI to process personal data, the processing presents a high risk to individuals’ rights and freedoms. So, under the UK GDPR, you must conduct a DPIA.
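To make the pseudonymisation point concrete, here’s a minimal Python sketch that replaces a direct identifier with a stable pseudonym using a keyed hash (HMAC-SHA256). The key, field names and data are illustrative assumptions, not something taken from Mark’s interview or the ICO guidance; in practice the key must be stored separately from the pseudonymised data, which remains personal data under the UK GDPR.

```python
# Minimal pseudonymisation sketch (illustrative only).
# The secret key and record fields below are hypothetical; hold the key
# outside the data set so the pseudonyms can't be trivially reversed.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-key-held-outside-the-dataset"  # hypothetical key

def pseudonymise(value: str) -> str:
    """Return a stable pseudonym for a direct identifier, e.g. an email address."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane.doe@example.com", "age_band": "35-44"}

pseudonymised_record = {
    "user_ref": pseudonymise(record["email"]),  # direct identifier replaced
    "age_band": record["age_band"],             # non-identifying attribute kept
}

print(pseudonymised_record)
```

Because the same input always maps to the same pseudonym, records can still be linked for analysis or model training without exposing the underlying identifier.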

What newer technologies could organisations use for AI data protection?

Organisations have a few options, including:

  • Blockchain technology, for secure and transparent data storage and sharing;
  • Homomorphic encryption, so you can process data while it’s still encrypted;
  • Federated learning, so you can train the AI model on decentralised data sources, protecting data privacy; and
  • Privacy-enhancing technologies or ‘PETs’ – for example, differential privacy [illustrated in the sketch below] and secure multi-party computation, which ensures that one party can’t see the data of another.
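As a concrete illustration of one of those PETs, here’s a minimal Python sketch of the Laplace mechanism, a common way of applying differential privacy to a simple counting query. The epsilon value and the data are illustrative assumptions; a real deployment would use a vetted differential-privacy library and a proper sensitivity analysis rather than hand-rolled noise.

```python
# Minimal differential-privacy sketch: the Laplace mechanism on a count query.
import random

def dp_count(values, epsilon: float) -> float:
    """Return a differentially private count.

    A counting query changes by at most 1 when one person is added or
    removed (sensitivity 1), so the Laplace noise scale is 1 / epsilon.
    """
    scale = 1.0 / epsilon
    # The difference of two independent exponentials is Laplace(0, scale).
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return len(values) + noise

ages = [34, 29, 41, 37, 52]            # illustrative personal data
print(dp_count(ages, epsilon=0.5))     # noisy count released instead of the exact figure
```

The smaller the epsilon, the more noise is added and the stronger the privacy guarantee – at the cost of accuracy in the released statistic.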

Want help from the experts?

Do you need help conducting your DPIA or selecting the right data protection measures for you?

DQM GRC can help.

Our tailored AI and Data Protection Consultancy Services help you seamlessly integrate AI, reaping all its benefits while ensuring complete compliance with UK data protection laws.


We hope you enjoyed this edition of our ‘Expert Insight’ series. We’ll be back soon, chatting to another expert within GRC International Group.

In the meantime, why not check out our previous interview with Mark on voice cloning?

Alternatively, explore our full index of interviews here.
