
Tag: rant


Unless you’ve been living under a rock, you’ll have noticed that there are lots of people talking about GDPR – which is a good thing.
However, there is lots of nonsense being talked about GDPR – which is a bad thing.
My Twitter timeline, LinkedIn feed and email inbox are being deluged with advertising for GDPR compliance “solutions” and services – which is fine, as long as the product in question is treated as a tool in the toolbox and not a magic fix-in-a-box spell for instant transformation.
Based on some of the twaddle I’ve seen being talked about GDPR lately, and my own experience in supporting data protection within organisations, here is a list of markers which, should they appear in an article, advertisement or slideshow, should be a warning to treat the rest of the content with a hefty pinch of salt.
  1. Banging on about fines. Yes, there is a big maximum fine. No, it’s unlikely to be enforced except in the most egregious cases of reckless negligence. The ICO has never levied the maximum penalty for any breach. Based on the evidence available, fines alone are not a convincing justification for compliance.
  2. Obsessing about consent. Consent is only one of a number of possible legal bases for processing personal data, and it may not be the most appropriate, desirable or “compliant” basis to select. Insisting on consent where there is a statutory or contractual requirement for processing personal data, or where the individual has no real choice about whether to give it, may result in “unfair processing” – which could draw regulatory enforcement or litigation.
  3. Focusing on infosec and infosec tech. Information security (the “confidentiality and integrity” principle) is just 1 of 7 principles and doesn’t even start to fulfil obligations around rights or fairness. While it is important, focusing on infosec to the exclusion of the other principles is just as likely to cause serious problems as forgetting it altogether.
  4. Claiming that encryption is a mandatory requirement. Yes, it is mentioned specifically in a few places (Recital 83, Article 6, Article 32, Article 34), but only as an example of a tool with which to mitigate risk. Whether you need it depends on the “scope, nature and context” of processing. Just having encryption will not make you “compliant”, and not having encryption on ALL TEH THINGS does not mean that data is at risk of exposure.
  5. Making it all about “compliance”. A finding of “compliance” in an audit is merely a snapshot of a point in time, assuming the audit itself was sufficiently robust. A compliance-focused attitude often leads to ‘gaming the system’ (as anyone who has ever had an argument about scoping for PCI-DSS or ISO 2700x can attest). Ticking boxes does not produce the intended outcome on its own – the paperwork must match reality. GDPR requires your reality to uphold principles, obligations and rights. If you’re not doing this in practice, no amount of audit reports, certificates or checklists will save you when it all goes wrong. Think “maturity”, “assurance”, “quality” and “effectiveness” rather than “compliance”.
  6. Insisting that only lawyers can be DPOs. There are some very good data protection lawyers out there in the wild, but the large majority of lawyers know almost nothing about privacy law. There are many experienced and competent data protection professionals who know privacy law inside out but do not have a law degree. The only reason to insist on having a lawyer as a Data Protection Officer or DP Lead is if the lawyer is *already* a DP specialist with business, communications and technical skills. The “lawyer” part is incidental.
  7. Marketing GDPR stuff by breaching other laws (PECR) or the DPA/GDPR itself (were you given a privacy notice about the use of your information for marketing purposes? Is it a fair use of your personal data?).
  8. Calling it the “General Data Protection Regulations”. Seriously, people. It’s Regulation (EU) 2016/679, singular (even though there is a lot of it).
OK, those are the “approach with caution” signs. But how to find good advice on GDPR? Here’s some advice for spotting people who probably know what they’re talking about:
A competent privacy practitioner will tell you:
  • There is no magic spell; time, effort, decision-making and resources will be required to adapt to GDPR requirements
  • There is no single tool, audit framework, self-assessment template, cut-n-paste policy or off-the-shelf training module that will make you “compliant”. You need to address systems, processes AND culture at all layers and in all contexts.
  • Records management is just as significant as infosec (if not more so)
  • It’s not about paperwork – it’s about upholding fundamental human rights and freedoms. (OK, that last one might be a step too far for many DP pros, but it is significant both to the intent and the implementation of GDPR.)
A few more handy tips for your Privacy Team lineup
Domain-specific knowledge is vital and valuable – but remember that specialists specialise, and so it is unlikely that someone who has only ever worked in one area of information governance (e.g. information security, records management) or context (HR, marketing, sales) will be able to address all of your GDPR needs.
The same consideration applies for lawyers – commercial, contract and general counsel-type lawyers are probably not as familiar with privacy law as with their own areas of expertise.
In summary, to find good GDPR advice, you should:
  • Get a rounded view
  • Consider risks to individuals’ privacy, not just organisational impact
  • Instil and maintain privacy-aware culture and practices
  • Be deeply suspicious of any/all claims of one-stop/universal fixes

Human Error

To err is human; to forgive, divine…

…(but to really screw things up, you need a computer!)

One can’t help noticing a recurring theme in the spate of data breach news reports these days. The phrase “human error” is coming up an awful lot. I’d like to take a closer look at just what that phrase means, and whether it is a helpful description at all.

What do you think when you hear that something happened due to a “human error”? Do you think “aww, the poor person who made a mistake; how awful for them, I hope someone gives them a hug, a cup of tea and consolation that humans are fallible, frail creatures who can’t be expected to get stuff right all the time”? Or do you – like me – think to yourself: “h’mm, what this means is that something went wrong and that humans were involved. I wonder whether systems, processes and training were designed to robustly identify and mitigate risks, whether management support and provision of resources were adequate, and whether this is just a case of someone getting unlucky while dodging around policies in a commonly-accepted and laxly-monitored way?”

Premise: I fully believe that the statement “the breach was down to human error” is a total cop-out.


Let’s start with “error”. The dictionary definition says:

  1. A mistake
  2. The state or condition of being wrong in conduct or judgement
  3. A measure of the estimated difference between the observed or calculated value of a quantity and its true value

The first definition is probably the one that is called to mind most often when an occurrence is described as an “error”. Mistakes are common and unavoidable, everyone knows that. I believe that the phrase “human error” is used consciously and cynically to create the perception that information incidents are freak occurrences of nature (rather like hiccups or lightning) about which it would be churlish and unkind to take umbrage; and unreasonable to demand better.

But in my humble and personal opinion (based on nothing more than anecdote and observation), the perception thus created is a false one – in fact, breaches that occur solely as a result of genuine mistakes are rare. Even if an “oops” moment was the tipping point, the circumstances that allowed the breach to take place are just as significant – and usually indicate a wider systemic failure of risk management which could – and should – have been done better.

Risky behaviour that leads to a breach, though, is not usually a sincere mistake – it is either a calculated decision about the odds, a failure to understand the risk, or ignorance of the possibility that a risk exists. Risky behaviour is *not* an unavoidable whim of Mother Universe (setting aside the philosophical implications, otherwise we’ll be here all day), but the output of a deliberate act or decision. We should not regard ‘risky behaviour which led to a realisation of the risk and unwanted consequences’ in the same way that we do ‘inadvertent screwup due to human frailty’; lumping them together under the same heading of “human error” does a disservice to us all, by blurring the line between what is forgivable and what we should be demanding improvements to.

The human bit

Since we’re not yet at the stage of having autonomous, conscious Artificial Intelligence, it must follow that errors arising from any human endeavour will always be “human errors”. Humans design systems, they deploy them, they use (and misuse) them. Humans are firmly in the driving seat (discounting for the moment that, based on the evidence so far, the driver is reckless, probably intoxicated, has no concept of risk management and is probably trying to run over an ex-spouse without making it look obviously like a crime). So whether an information security or privacy breach is intentional, inadvertent or a case of someone getting caught out doing something dodgy, describing the cause as “human error” is rather tautological and – as I’ve noted above – potentially misleading.

I believe that the phrase “human error” is a technically-accurate but wholly uninformative description of what would much more often be better described as human recklessness, human negligence, human short-sightedness, human malice or simple human incompetence. Of course, no organisation is going to hold its hands up in public to any of that, so they deploy meaningless platitudes (such as “we take data protection very seriously” – that’s a diatribe for another day!), of which “the breach occurred due to human error” is one.

Take, for example, the common ‘puts all addresses in the To: field of an email instead of BCC’ screwup, which was the cause of an NHS Trust being issued with a Civil Monetary Penalty after the Dean Street clinic incident in 2015. Maybe the insertion of the email addresses into the wrong field was down to the human operator being distracted, working at breakneck speed to get stuff done, being under stress, or simply being blissfully unaware of the requirements of data protection law and email etiquette. But they should not carry all of the culpability for this incident – where was the training? Where were the adequate resources to do all the work that needs to be done in the time available? Most of all, where the hell was the professional bulk-emailing platform which would have obfuscated all recipient emails by default, and would therefore have been a much more suitable mechanism for sending out a patient newsletter? (Provided, of course, that the supplier was carefully chosen, UK-based, tied to appropriate Data Processor contract clauses and monitored for compliance… etc. etc.) The management would seem to have a lot more to answer for than the individual who sent the email out.
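For the technically curious, the safe pattern here is worth spelling out: the real recipient list should never appear in the message headers at all, but should only be handed to the mail transport. A minimal sketch in Python (the addresses and server name are hypothetical, and this is obviously no substitute for a proper bulk-mailing platform):

```python
from email.message import EmailMessage

# Hypothetical recipient list, for illustration only.
recipients = ["patient1@example.com", "patient2@example.com"]

msg = EmailMessage()
msg["From"] = "newsletter@clinic.example"   # hypothetical sender address
msg["To"] = "newsletter@clinic.example"     # addressed to the sender itself
msg["Subject"] = "Patient newsletter"
msg.set_content("Newsletter body goes here.")

# Crucially, no To:/Cc: header lists the patients. The real recipient list
# is only given to the transport at send time, e.g.:
#   with smtplib.SMTP("mail.clinic.example") as server:
#       server.send_message(msg, to_addrs=recipients)
# so no recipient ever sees another recipient's address.
assert all(addr not in str(msg) for addr in recipients)
```

The point of the final assertion is the whole lesson of the incident: the serialised message that every recipient receives contains no trace of the other recipients, because delivery addressing (the SMTP envelope) was kept separate from the visible headers.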

So the next time you read of a data breach, privacy abuse or, in fact, any other type of incident at all, and see the phrase “human error”, stop and ask yourself: “What was the error?” Was it a lack of appropriate training for staff? Cutting corners to cut costs? Failure to provide the appropriate tools for the job? A mismatch between the outputs demanded and the resources provided to deliver them? None of these are inevitable Acts of Nature, the way that occasional “oops” moments would be.

And as long as organisations are allowed to hide behind the illusion of unavoidability, they are unlikely to tackle the real problems.

How To Not Be An Arse

(a.k.a the futility of compliance-for-the-sake-of-it programmes)

Imagine there was a law* that says “don’t be an arse to other people” which contains a list of 8 general requirements for avoiding arse-ness, including (among others) “be fair”, “be honest”, “don’t be reckless or negligent” and “don’t deny people their rights”.

Then, hundreds of thousands of hours, billions of beer tokens and litres of sweat from the brows of assorted lawyers and auditors later, a number of standards and frameworks, guidance documents and checklists were produced to help everyone ensure that, whatever they’re doing, they’re avoiding being an arse.

At which point, everyone’s efforts get directed towards finding some technical way to acquire a clean, shiny, glowing halo: ticking all of the boxes on the checklists, generating reams of ‘compliance’ paperwork, churning out Arse Avoidance Policies… but actually ending up almost *twice* as much of an arse, because despite all of the shouting and scribbling and hymn-singing, what they are actually doing on a day-to-day basis looks remarkably arse-like (despite being called a “Posterior-Located Seating and Excretion Solution” – not the same thing at all) – since, as it turns out, arsing around is lucrative and being well-behaved is not so much.

And then the question is no longer “how do we avoid being arses?” or even “what do we need to do to make sure we are not accidentally being arses?” but becomes “what is the bare** minimum we have to do in order not to appear to be arses?”

And that becomes the standard that (nearly) everyone decides to work to: writing long, jargon-filled statements explaining “why we are definitely not arses at all”, insisting that you must all complete a mandatory, dry-as-dust, uninformative half-hour “Anti-Arse” e-learning module once a year (and calling it a “training programme” – hah!), hiring armies of lawyers to define the boundaries of “arse”, and generally forgetting what it was that the law was trying to achieve in the first place. All of that costs quite a lot of money and – surprise, surprise – doesn’t actually fulfil the intent of the law.

If you have to hide, obfuscate or misdirect from what you are really doing, then it’s quite likely that you are not achieving compliance with the law, no matter how much paperwork you generate or how shiny your halo looks.

It’s quite simple……just don’t be an arse.


(*in case you didn’t get it; that would be the Data Protection Act…..)

(**yes I had to get a ‘bare’ reference in there somewhere)
