Published on Friday, May 17, 2019 - 15:06 by Martin Fletcher
We now live in an era where a handful of large organisations hold more information on people than at any time in history. The reality of this is not just being considered by governments, information rights groups and data protection consultants like myself. It is also fast becoming a hot topic of debate in university philosophy departments the world over. How does the use of this vast amount of information fit in with ethical principles that have been debated for centuries?
In this article I’m going to discuss the idea that, until now, the digital economy has taken its cues from a libertarian philosophy that arose in the second half of the 20th century. However, the principles these ideas are based on are not necessarily well suited to a world where companies can infer huge amounts about individuals and exploit information without data subjects really understanding what they have signed themselves up for. “The ethical concerns illuminated here are, in a very real sense, anything but ‘academic,’ and neither philosophers nor the broader human community can afford the luxury of treating them as such.” (Vallor, 2015) I will also discuss whether regulation like the GDPR could be an early step towards changing the ethical foundations upon which data processing is based.
The aim of the article is not to be a definitive overview of digital ethics, but a point of view on where some key philosophical ideas could fit into a very modern debate.
Robert Nozick and Libertarianism
I’m going to start by heading back to the 1970s, an era when floppy disks were considered a nifty new development, to look at one of the men whose ideas were a foundation for data transactions in the modern era. Robert Nozick was a professor at Harvard who advocated the idea that freely entered transactions should not be subject to external interference. This stood in opposition to many ideas of the time (and indeed today), which advocate a role for the state in redistributing wealth and moderating transactions. While his philosophy is based on the distribution of wealth, there are echoes of it in the way information is transferred today.
One of his best-known and most illustrative examples involves the basketball player Wilt Chamberlain, a man now better known among philosophy junkies than basketball fans.
“Imagine that society achieves a pattern of perfect justice by the lights of whatever principle you prefer. Then someone offers Wilt Chamberlain a dollar for the privilege of watching Wilt play basketball. Before we know it, thousands of people are paying Wilt a dollar each, every time Wilt puts on a show. Wilt gets rich. The distribution is no longer equal, and no one complains.” (Gaus, 2018)
In this situation you have a large number of people who have freely consented to giving some of their wealth to one entity (Wilt). Wilt gets rich from this because he is able to provide a service that people want, and he is then of course able to use this wealth in any way he sees fit. Strong redistributive controls would interfere with the liberty of parties to enter a transaction. Redistribution would also be constantly required to take account of these transactions, meaning you can never get back to the state of perfect justice you started with without doing away entirely with people’s liberty to enter transactions.
In some ways, the development of the information economy has been founded on a similar principle. Many people freely submit information to large companies in exchange for the services they provide, and there have been those who have argued that intervention here would be damaging to liberty. Whether Nozick himself would have seen things this way is impossible to say: he died in 2002 and had little to say on the subject before then. But it can be said that libertarian ideas similar to the ones he advocated have been a driving force behind the way information is handled today.
The privacy paradox and the prisoner’s dilemma
A core discussion within digital ethics is whether people can be said to be freely and knowingly entering into data transactions with organisations. Gordon Hull raises several points that explain why this model of self-management of personal data may under-protect privacy.
“(1) users do not and cannot know what they are consenting to; (2) privacy preferences are very difficult to effectuate; and (3) declining to participate in privacy-harming websites is increasingly not a viable option for many.” (Hull, 2015)
The first of these points states that, unlike when you pay a dollar to watch Wilt play basketball, when you hand information over to a large data controller it is beyond the expertise of most people to understand what that transaction really involves. This can be seen in what is known as the Privacy Paradox: people appear to value their personal data and take steps to protect it, yet when “offered the opportunity to trade private, personal information for immediate but minor benefits such as access to a website, they routinely do so.” (Hull, 2015)
Following on from this is Hull’s second point: individuals may be compromising their privacy because of a lack of transparency on the part of the data controller, which means they cannot easily manage their privacy preferences or take control over how their data will be used. A stark example of this was an app called GirlsAroundMe.
This app pulled publicly available information from Facebook and FourSquare to present users with a map showing “where nearby girls are checking in, and show you what they look like, and how to get in touch.” (GirlsAroundMe.com, 2014) While there were some attempts at defending the app at the time, it is difficult to maintain that the women using Facebook and FourSquare expected their information to be used in this way and had deliberately set their privacy settings to allow for it.
“It seems likely that men and women who pay close attention to privacy settings are also going to be better educated about a lot of things, and less likely to place themselves in dangerous situations across the board. The flipside to this is that women who are less likely to pay close attention to privacy settings, may also be more vulnerable in other situations (whether this is because they're younger or older or any other number of reasons.)” (Kain, 2012)
Hull’s third point is that even as data subjects become savvier about their privacy settings, it is becoming increasingly difficult to opt out of certain privacy-harming activities. That people can act logically and yet against their own interests is shown by another classic philosophical thought experiment, the Prisoner’s Dilemma.
Imagine that you and I have been arrested during a bank robbery and placed in separate interview rooms. Imagine also that we have no particular interest in each other’s welfare (I know, I sound like a really nice guy, don’t I?). The prosecutor gives us each two options: stay silent, or implicate the other person. If you implicate me and I don’t implicate you, or vice versa, then the person who stayed silent gets ten years in prison and the person who talked goes free. If we both implicate each other we each get five years; if we both stay silent we each get three years. “The “dilemma” faced by the prisoners here is that, whatever the other does, each is better off confessing than remaining silent. But the outcome obtained when both confess is worse for each than the outcome they would have obtained had both remained silent.” (Kuhn, 2019)
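The payoff structure above can be sketched as a short script (the sentences are the ones from the example; "best response" here just means the choice that minimises my own sentence, whatever you do):

```python
# Payoff matrix for the prisoner's dilemma described above.
# Keys are (my_choice, your_choice); values are (my_years, your_years).
# Years in prison, so lower is better.
SILENT, IMPLICATE = "silent", "implicate"

payoffs = {
    (SILENT, SILENT): (3, 3),        # we both stay silent
    (SILENT, IMPLICATE): (10, 0),    # I stay silent, you implicate me
    (IMPLICATE, SILENT): (0, 10),    # I implicate you, you stay silent
    (IMPLICATE, IMPLICATE): (5, 5),  # we implicate each other
}

def best_response(your_choice):
    """My choice that minimises my own sentence, given what you do."""
    return min((SILENT, IMPLICATE),
               key=lambda mine: payoffs[(mine, your_choice)][0])

# Whatever the other prisoner does, implicating is individually better...
assert best_response(SILENT) == IMPLICATE      # 0 years beats 3
assert best_response(IMPLICATE) == IMPLICATE   # 5 years beats 10

# ...yet mutual implication (5, 5) leaves both worse off than mutual silence (3, 3).
print(payoffs[(IMPLICATE, IMPLICATE)], "vs", payoffs[(SILENT, SILENT)])
```

The assertions capture Kuhn’s point exactly: implicating dominates for each individual, yet the outcome when both follow that logic is worse for both.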
As more of our lives become dependent on data transactions with large data controllers, it becomes increasingly difficult to opt out of participating. For example, if all my friends organise their social lives through a social media service, then I suffer a penalty by not joining it too, even if I’m not comfortable with the way the company uses personal data. Even where people notionally have a free choice, circumstances lead data subjects to act collectively against their own best interests.
For much of the last twenty years, the emphasis for protecting personal data from being unfairly exploited has been placed on individuals themselves. The dominant thought has been that data subjects ought to make a cost-benefit analysis of whether they wish to provide their personal data, or undertake other actions that compromise their privacy, in exchange for a service, much as you would weigh up whether to pay your dollar to watch Wilt play basketball. However, unlike the dollar that goes to Wilt, the information handed over to a data controller remains the data subject’s personal data and can be exploited in ways that harm them.
Perhaps a more hands-on approach is needed, one that emphasises the responsibility data controllers have for the personal data they hold. Let’s go to 18th-century Prussia…
Kant and the Humanity Formulation
Immanuel Kant is a thinker whose ideas still dominate philosophy in a way few have managed since Plato. While doing my degree, I and the other philosophy undergrads would joke that you could get a 2:1 from a professor by writing a 10,000-word dissertation challenging Kant, or a first by writing “Kant is the best” on a scrap of paper. He lived in the city of Königsberg (now Kaliningrad) his whole life and wrote on pretty much every area of philosophy going, although his original writings are considered so abstruse that the urban myth is that German philosophy students read the English translations of his works because they’re easier to understand. Part of his ethical philosophy involves an argument called the Humanity Formulation.
This formulation states that the important factor in a moral decision is the purpose we have in mind when interacting with other people.
“This formulation states that we should never act in such a way that we treat humanity, whether in ourselves or in others, as a means only but always as an end in itself.” (Cureton, Johnson, 2016)
The idea that humans should not be used as a mere instrument with no value beyond this does seem like a very reasonable basis for managing interactions. Robert Nozick himself subscribed to this idea and saw his own theories as a way of realising the Humanity Formulation.
The way the formulation works within the kind of financial transaction you would have seen in the 1700s, or in Nozick’s era, is this: supposing that Wilt Chamberlain has freely exercised his rational capacities in pursuing his line of work, we make permissible use of those capacities only if we behave in a way he could consent to, for instance by paying a dollar to watch him play. The formulation sits comfortably with financial transactions: when we pay our dollar, we help Wilt meet his chosen end of being a professional basketball player, and we meet our own end of being entertained watching him play.
However, this system is open to more abuse in the world of data processing. Here we meet our own end of staying in touch with people by providing a social media service with data, which works on the assumption that the company’s end is to be a social media service. If the company’s actual end is to profit from selling the personal data it collects, then it is treating its users as a mere means, using their data for a purpose other than the one the user intended. Without regulation, the temptation for large data processors to exploit information irresponsibly is huge, and many have done so over the years.
The question is whether regulation such as the GDPR can introduce a control into large-scale data processing. The regulation encourages data controllers to address the three points Hull identified: making it clearer what people have consented to, making privacy preferences transparent and easy to understand, and in so doing reducing the harm that can come to individuals who feel increasingly obliged to participate in the data economy. Regulation such as the GDPR could be a step towards resolving the privacy paradox, countering the notion that participation in the digital economy is a willing surrender of privacy. It could also help move data ethics towards something closer to what Kant intended when outlining his ethical theory: controllers taking more responsibility for processing data fairly, and data subjects being treated more as an end than as a means.
Cureton, Adam & Johnson, Robert, 2016, “Kant’s Moral Philosophy”, The Stanford Encyclopaedia of Philosophy, URL: https://plato.stanford.edu/entries/kant-moral/
Gaus, Gerald, 2018, “Liberalism”, The Stanford Encyclopaedia of Philosophy, URL: https://plato.stanford.edu/entries/liberalism/
Hull, Gordon, 2015, “Successful Failure: What Foucault can Teach Us about Privacy Self-Management in a World of Facebook and Big Data” Ethics and Information Technology, URL: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2533057
Kain, Erik, 2012, “The problem with the Girls Around Me app isn’t that women are lazy about privacy”, Forbes, URL: https://www.forbes.com/sites/erikkain/2012/04/06/the-problem-with-the-girls-around-me-app-isnt-that-women-are-lazy-about-privacy/#17f0e967311d
Kuhn, Steven, 2019, “Prisoner’s Dilemma”, The Stanford Encyclopaedia of Philosophy, URL: https://plato.stanford.edu/entries/prisoner-dilemma/
Vallor, Shannon, 2015, “Social Networking and Ethics”, The Stanford Encyclopaedia of Philosophy, URL: https://plato.stanford.edu/entries/ethics-social-networking/