One of the depressingly common questions a data protection professional will hear from organisations is “what’s the bare minimum we have to do to be compliant?”.

🙄🙄🙄

The short answer is: “all of it.” Data protection law isn’t a buffet; organisations don’t get to pick and choose which parts to comply with according to their convenience. Bunging together a policy document, the outputs of a one-off data discovery exercise, a copy-pasted or template privacy notice and some standard contract clauses and calling it ‘done’ is not ‘compliance’ – it’s only the baby steps – and in the absence of strategic direction, those steps are likely to lead straight into either a brick wall or a croc-infested swamp.

The ‘bare minimum’ question signals that an organisation does not ‘take privacy/data protection seriously’ (and any organisation that asks me this goes on my ‘never use their services, they can’t be trusted’ list). It’s also counterproductive to the organisation’s own interests, because this approach results in a false sense of assurance without mitigating the actual risks, whether commercial or human.

Of all the compliance requirements set out by the GDPR, the one that organisations appear to have the least appetite for is Data Protection by Design and by Default (DPbD2) – probably because it can’t be reduced to a set of one-size-fits-all, 2-minute, copy-paste boilerplate forms. It requires understanding, a lot of thinking, and the selection of appropriate technical and organisational measures suitable for mitigating risks to data subjects’ rights and freedoms, in order to deliver and demonstrate compliance with the whole of the GDPR.

The bare minimum is giving a damn about other people, even when doing so doesn’t directly translate into commercial gains.

Lessons from history

In the 1800s, surgery was a dicey proposition. Even if an operation went well, many people died afterwards from sepsis and gangrene. A British surgeon named Joseph Lister became aware of Louis Pasteur’s work on germ theory (the idea that some of the wiggly little blobs one could see under a microscope might have something to do with people getting sick), and formed the hypothesis that these pesky microscopic critters were a large part of the reason that patients would get sick and die after otherwise-successful surgery. Lister set about testing this theory, working on approaches to get his hands, his instruments and his operating theatre thoroughly clean before and after operating, along with various methods for preventing germs from getting into patients. Other surgeons taunted and scoffed at Lister, refusing to believe that something they couldn’t see with their own eyes could possibly be responsible for high rates of post-operative illness. Over time, however, it became evident that Lister was indeed onto something – his patients had much lower rates of infection following surgery than those of his peers. Lister went on to develop the foundation of aseptic surgery; that is, measures and controls for preventing germs from getting anywhere near a patient during an operation.

Now, since Lister was only one man and germs (bacteria, viruses, etc.) are very small and numerous, he couldn’t possibly identify the individual cells that might cause infection in a patient and hunt them down one by one – he had to use blanket protective measures to prevent any micro-organisms from reaching the patient at all. He had no way of knowing which of his measures – wearing gloves, boiling instruments, spraying carbolic solution around – were working at any one time, nor whether he was eliminating bacteria, viruses, fungi or prions. That level of detail didn’t matter, though, because his goal was not to meet germ-killing KPIs, or to produce specimens to justify the expense of all that washing to senior management, but to prevent patients from dying of infection after surgery.

Lister pioneered a protective – as opposed to targeted or defensive – approach to preventing surgical infection – making it very difficult for any germs to reach the patient while they were having holes cut in them.

The parallel

In today’s data-soaked world, potential harms to individuals’ rights and freedoms are like the pathogens that cause infection. There are far too many to count, and they mostly go unnoticed in the vast swirl of data and systems (until after the damage is done to real people). Just as the surgeons who scoffed at Lister were not penalised at the time for losing patients to infection from their unknowingly-grubby practices, so until recently, organisations have not been held to account for the problems they cause by allowing their processing of personal data to be a vector for harms to reach individuals.

Data protection lawmakers are the Joseph Listers of the digital age. Doing stuff with people’s data leaves those data subjects exposed and vulnerable to harms – whether that’s a trivial infringement of unused rights that goes unnoticed, or the more serious effects of discrimination, exclusion, injury, monetary loss and unfair denial of opportunity. Even when the intent of the processing is wholly benign, failure to take appropriate precautions will still result in these adverse consequences for someone, somewhere, at some point. An organisation may never know which data subjects were affected, or how, and the actors involved may never be identified as a cause of, or contributor to, those harms; but those harms happen nonetheless.

We are now seeing the macro-effects that have swelled, like infected flesh, from a general lack of protective measures. Automated discrimination. Reduction in social mobility. Data-fuelled genocides. Fraud on huge scales. Prejudice, hate-mongering, authoritarianism, coercive surveillance. All of these things existed before digital technology, of course, but dense connectivity and cheap data storage have allowed them to explode across the world, presenting risks of harm to hundreds of thousands of people every minute of every day.

And so, data protection by design and by default is the equivalent of an aseptic surgery protocol for the data age. It would be impossible to single out each potential harm and identify how it might reach any particular data subject; there are just too many risks, too many vectors and too many data subjects for this task to be feasible. It would be like trying to hunt down individual microbes in an operating theatre, armed with only a washcloth. Instead, the intent of DPbD2 is to provide a set of protective measures which reduce the opportunities for processing to carry harms to data subjects.

This is why a ‘bare minimum’ approach to data protection compliance misses the point of data protection entirely. The purpose of data protection law is to set out a way for organisations to get stuff done in ways that do not damage the rights, freedoms or welfare of individual living human beings. The goal is not some dry and abstract notion of ‘compliance’; the goal is keeping people safe.

And so the protective measures – minimising the processing of personal data to specified legitimate purposes under valid lawful bases, paying attention to data quality, adequacy and relevance, securing processing, building in the ability to recognise and uphold data subject rights, and fulfilling Controller or Processor obligations – are all required, all the time, if individual human beings are to be shielded from the risks and harms that data can convey to them.
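
To make the ‘by default’ half of that a little more concrete: in software terms, it often boils down to collecting only the fields a declared purpose actually needs, and shipping with the most privacy-protective settings switched on. Here’s a minimal, purely illustrative Python sketch – the purposes, field names and Profile class are invented examples for this post, not anything prescribed by the GDPR or drawn from a real system:

```python
from dataclasses import dataclass

# Illustrative only: each processing purpose declares, up front,
# the minimum set of fields it needs (data minimisation).
ALLOWED_FIELDS = {
    "newsletter_signup": {"email"},
    "order_fulfilment": {"email", "name", "postal_address"},
}

def minimise(purpose: str, submitted: dict) -> dict:
    """Keep only the fields the declared purpose needs; everything
    else is discarded rather than stored 'just in case'."""
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in submitted.items() if k in allowed}

@dataclass
class Profile:
    email: str
    # Data protection by default: the most privacy-protective
    # settings apply unless the person actively opts in later.
    analytics_opt_in: bool = False
    marketing_opt_in: bool = False

# A form might submit more than the purpose needs; the excess is never stored.
raw = {"email": "jo@example.com", "name": "Jo", "date_of_birth": "1990-01-01"}
print(minimise("newsletter_signup", raw))  # {'email': 'jo@example.com'}
print(Profile(**minimise("newsletter_signup", raw)))
```

The point of the sketch is the shape, not the detail: the purpose limitation and the defaults are decided at design time, so nobody has to remember to be protective later – which is exactly the Lister move of preventing exposure rather than hunting down individual germs.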

Producing paperwork without the cultural, systemic and behavioural adjustments required is like faking the cleaning records for a surgical theatre; it might look as though something has been done, but that something is a cosmetic exercise focused on the organisation’s convenience instead of the actual object of the controls: protecting people.

DPbD2 requires a change in culture similar to the medical profession’s turnaround once Lister’s methods were shown to be working. Corporate decision-makers still can’t quite bring themselves to believe in data harms to individuals unless they are shown solid proof that this action here caused this harm there – proof which is as difficult to obtain as evidence that a specific pathogen cell got into a surgical patient at a particular moment and caused an infection. As a result, DPbD2 is viewed as an optional add-on, a nuisance overhead or a series of tickboxes, rather than the end-to-end protection of people’s rights, freedoms and welfare that it is intended to facilitate.

It took over a decade for the British medical profession to admit that Lister was right and to adopt infection-control measures in the operating theatre, instead of mocking, criticising and undermining him. Despite the additional time and expense involved in preventing infections, doctors did not want their patients to die! Here’s where the analogy starts to break down: it was much easier to link a single sick patient back to a specific operation by a particular surgeon, but it is extremely difficult to link a data harm back to a particular organisation’s processing. Data harms are distributed and diffuse, and can manifest in a wide variety of ways, sometimes a long time after the processing that caused them. As a result, most organisations still adopt the ‘if we can’t see a harm and won’t be held accountable for harms, we can assume they aren’t happening and don’t need to bother trying to prevent them’ stance – which conveniently also happens to be the ‘do as little as we can get away with’ stance. From this perspective, DPbD2 looks like more effort and expense than can be justified. Why bother with measures which might only be protecting a small number of people, when ‘move fast and break things’ is an option?

1. Because the things that will get broken are mostly people, and since only the very wealthy and very privileged can insulate themselves from data damage, the rest of us mere mortals need to look after each other if we want others to look after us.

2. It’s the law. Admittedly, the chances of being caught and punished for failing to uphold data protection law are currently remote, but we have this kind of law for very good reasons which are all to do with human quality of life, autonomy and dignity. 

3. Attitudes are changing. Although the average person-in-the-street’s understanding of the data economy and the risks it brings may still be somewhat fuzzy, they know what they don’t like. They don’t like being constantly surveilled and judged. They don’t like high volumes of direct marketing. They don’t like being treated as exploitable corporate assets, and they especially don’t like it when data protection failures result in mistreatment.

4. An organisation whose culture, processes, systems and strategy give human risks equal consideration to corporate risks will be more resilient to future changes in data protection law, more able to justify and defend its actions, and less likely to get into trouble.

Lister had the right idea. Be like Lister.

Background reading on Lister and aseptic surgery:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4530413/

https://royalsocietypublishing.org/doi/10.1098/rsnr.2013.0028

https://daily.jstor.org/joseph-lister-antiseptic-revolution/

And on DPbD2:

https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/accountability-and-governance/data-protection-by-design-and-default/

https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_guidelines_201904_dataprotection_by_design_and_by_default_v2.0_en.pdf

https://www.datatilsynet.no/en/about-privacy/virksomhetenes-plikter/innebygd-personvern/data-protection-by-design-and-by-default/
