European data protection law has always been infused with ethical considerations around data use. Under the old Data Protection Directive, even where data use had a valid legal ground, the law was still broken unless the proposed use was also fair. But what is fairness when it comes to data? Organisations should only handle personal data in ways that people would reasonably expect, and not use it in ways that have unjustified adverse effects on them. An important ingredient of fairness is transparency: people should know what their data is being used for and be given an opportunity to exercise their rights, if they so choose. Even with a notice, there are still limits to data use; just because you can do something does not necessarily mean that you should.
This infusion of ethics into EU data protection law has become more potent under the new GDPR regime, which includes much greater transparency and accountability measures. Learning from the old regime, the GDPR law-makers redefined the limits of consent by setting a higher bar to clear. They also decided that people should be told what the legal basis of processing is. These were refinements born out of ethical motives: making sure people really understand whether they are consenting to something, and if they are, driving home what they are consenting to.
As a result, the alternative ‘legitimate interests’ ground has become much more prominent in the GDPR era. This is a perfect example of data ethics in operation: to rely on the legitimate interests ground, you need to determine that there is a legitimate interest in the intended data use, and that it is not overridden by any adverse effects on the individuals represented by that data. This involves a balancing test which looks at all aspects of the intended processing, including whether the processing would benefit the individual, or whether it would negatively impact or undermine their rights. This exercise involves making certain value judgments. For example, in the marketing world, advertisers believe that, overall, getting relevant ads is “good” for people. The pursuit of marketing as a business interest is arguably in the interests of the wider community: it stimulates competition, with better-equipped companies competing on price to the benefit of the individual. It also helps keep social media and other platforms free, as they are currently subsidised by marketing revenue, which in turn enables freedom of speech. Whilst some of these points can be debated, and weighted differently, there is no doubt that ethics are in play here.
Once data has been curated, any downstream use of it needs to be carefully considered. What might be perfectly lawful and ethical in one person’s hands might not be in another’s. Data providers should not simply ‘wash their hands’ of what recipients are doing; to do so would be unethical. To counter this, a new set of data use terms is emerging, known as Privacy Enhancing Guardrails, or “PEGs” for short. PEGs are a new type of contract provision designed to ensure that data use is kept in balance. They set out boundaries for downstream data use, such as ensuring there is a separate legal basis to contact people if one is not “included” with the supplied data, or making sure more sensitive “special category” data is not derived from the supply. We predict that different flavours of PEGs will develop over time, depending on the industry, but they are another example of how the GDPR promotes ethical standards of behaviour.
The tensions between consent and legitimate interests as grounds for using consumer data still rumble on. Some think that consent is the more ethical ground because people must give their “free, informed and specific” consent before the processing can proceed. This sounds good in theory. However, it is difficult for people to properly understand use cases, particularly within advanced technological ecosystems, and it does not seem fair that, once someone indicates they are happy, they have waived their right to ethical treatment of their data. With legitimate interests, people still need to be told what is happening with their data, and they can object if they want; but even if they do not, the processing still needs to be kept ethically in balance. For example, if someone consents to their data being used for direct marketing purposes, then using that data with aggressive call scripts remains lawful, whereas under the legitimate interests ground it would not be, as the method of communication would be unethical.
Savvy organisations are taking advantage of the greater role ethics play in the GDPR era by establishing ethical data use programs. Being able to demonstrate such frameworks is exactly what regulators expect when checking accountability measures. As well as supporting compliance, these programs help ensure brand credibility and trustworthiness while enabling innovation, acceleration and marketplace success. Data ethics is strategic, practical and necessary – it’s here to stay.
By Alex Hazell, Head of UK Legal, Acxiom
The inaugural Data Protection World Forum (DPWF) was held on November 20th & 21st 2018 at the ExCeL London and welcomed over 3,000 delegates seeking the very latest insight on data protection and privacy.
Pre-registration for DPWF 2019 will be opening in the coming weeks.