
Tag: rant

Tools

“I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.”

Abraham Maslow

An open letter to the information security profession

Dear infosec people,

You do a tough job in a complex, high-stress and fast-paced environment. I admire the cleverness of your technical capabilities and respect the challenges you face.

Having said that: PLEASE SHUT THE FUCK UP ABOUT THE GDPR UNLESS YOU HAVE REALLY STUDIED DATA PROTECTION.

Seriously. You’re making the lives of privacy professionals really difficult, and that’s not going to lead to collegial and constructive co-operation. You’re also occasionally making yourselves look like right knobbers to those of us who do know what we’re talking about in this area.

I’m generally not a fan of the ‘stay in your lane’ philosophy – breaking down silos and working together is an essential part of being effective these days. However, if you have not learned the rules of the other lanes, then carelessly blundering into them and screwing with the traffic flow is just as bad as – if not worse than – hiding in a silo.

I absolutely welcome infosec people learning more about privacy/data protection – it’s the career path I took myself, and I have flourished in it. What sends me up the bloody wall, though, is the Dunning-Kruger Effect that is evident when infosec people try to tackle data protection without having spent the time and effort to understand privacy law. They get it so very wrong, and they are uncritically parroted by other people who aren’t familiar with either professional knowledge domain, thereby spreading myths, tropes and general #GDPRubbish.

Infosec and privacy are not the same thing at all. There is overlap, but only for a small proportion of both. There is wide divergence and narrow convergence. Information security is about protecting corporate information and systems. Privacy is about protecting individuals and society. Data Protection is privacy applied to information about living people.

[Image: pie chart of the data protection principles]

Data protection requires information security, but only as a small feature in a broad landscape of human rights-based risks, controls, considerations and obligations. There are seven principles in data protection law, and only one of them is ‘process personal data securely’.

There are a whole bunch of individuals’ rights that have nothing to do with the security of their data. There are a pile of obligations that don’t relate to information security in any way. If you didn’t know that, or you don’t know what those principles, rights and obligations are, then please either go and learn, or belt up and refrain from undermining privacy by hijacking GDPR conversations and narrowing them to infosec-centricity.

It’s understandable that when your whole world is one topic, you’ll see everything else within those terms of reference. It’s natural to have a whole bunch of cognitive biases and assumptions. This isn’t a value judgement on your character, it’s just me pointing out an opportunity to integrate rather than colonise.

To assist you with this, here are some nuggets of data protection wisdom for you to take away and keep.

  • Privacy is not equivalent to confidentiality, it is the right to be free from unwarranted or arbitrary interference. This may involve a degree of confidentiality for information, but not necessarily. Data Protection is usually more concerned with why you’re doing stuff with/to people via their data than how secret the data is.
  • Privacy is not the binary opposite of ‘in public’. In fact, ‘in public’ is a spectrum anyway, but even if it were a single environment, it would still not be the opposite of privacy, because being amongst other people does not negate your human rights.
  • ‘Personal data’ is wider than ‘personally-identifiable information’. It’s heavily influenced by context and association, and the same piece of info may be ‘personal data’ in one scenario but not in another. There is no binary always/never threshold. Deal with it.
  • ISO27001 or any other infosec standard will NOT deliver GDPR compliance. Not even close. Not even 50%. Done properly, 2700x can help you adhere to the security and accountability principles, but does nothing to address fairness, rights, transparency, lawfulness and the rest.
  • No system, tool, document set or ‘solution’ can be ‘GDPR-compliant’ in itself. Only when used in accordance with all of the data protection principles, within an organisational culture of respect for privacy, in a privacy risk-managed way, can it play a small part in GDPR ‘compliance’. Which, by the way, requires the org to have integrated strong data protection risk management as business-as-usual into EVERY process, system, activity and decision.
  • The GDPR is principle-based law on purpose. It leaves room for innovation, creativity, risk appetite and context. If you’re looking for a prescriptive checklist of inflexible instructions for which no nuance is required, then stop trying to understand data protection and focus on PCI-DSS instead.
  • The only time ‘encryption’ is the ‘answer’ to data protection, is when the question is ‘what is one way to protect the confidentiality and integrity of data within a particular digital processing environment?’.
  • Security controls themselves must be assessed for privacy risk. User monitoring and profiling, authentication and verification for example, carry inherent privacy risks of their own and the security justification for using them may be negated by the privacy justification for NOT using them. This is not ‘privacy stopping you from doing your job’ but ‘the lesser of two evils’.

I believe that the disciplines of infosec and privacy can and should work collaboratively and constructively. But in order to do so, privacy pros need to be sterner about emphasising the ‘rights and freedoms’ aspect of data protection, and infosec pros need to accept that their security expertise does not equate to competence in the privacy domain.

Thank you for reading

Lots of luv and respect,

Rowenna

(This was originally going to be an exasperated sweary rant, but it turned out quite moderate and civil. Apologies to anyone I have disappointed as a result.)

More about the differences between privacy and security

Discover what the other 90% of the GDPR is all about

10 Legitimate Interests Lessons for Marketers

1. Just because you’re interested, doesn’t make it legitimate.

2. You can’t use LI to avoid getting consent when you suspect the answer will be “No”.

3. Whether LI can be applied depends on your own assessment of what you’re doing, why and how – which you will be expected to justify and defend.

4. LI is not ‘unclear’ or ‘ambiguous’; it requires thinking to be done and a decision to be made.

5. Publish your Legitimate Interests Assessments (LIA) if you anticipate/plan to reject objections to processing.

6. If a law says you have to get consent for a processing activity, then forget about LI. You can’t use it. Move on.

7. LI is only a valid lawful basis for processing personal data if you’re adhering to all of the principles. It’s not a loophole around compliance.

8. If your LIA is post-hoc rationalisation of something you won’t consider ceasing to do even though you suspect it’s a bit dodgy; then you wasted your time. Just make sure you have funds set aside to deal with complaints, regulatory action and reputation damage when you get found out.

9. The ICO is not responsible for your continuing professional development.

10. No-one else can do your thinking for you.

“We take your privacy very seriously”

….says the intrusive ‘cookie consent’ popup which requires me to navigate through various pages, puzzle out the jargonerics and fiddle with settings before I can access the content I actually want to read on the site.

Here’s the thing. If your website is infested with trackers, if you are passing my data on to third parties for profiling and analytics, if your privacy info gets a full Bad Privacy Notice Bingo scorecard, then you DON’T take my privacy seriously at all. You have deliberately chosen to place your commercial interests and convenience over my fundamental rights and freedoms, then place the cognitive and temporal burden on me to protect myself. That’s the opposite of taking privacy very seriously, and the fact that you’re willing to lie about that/don’t understand that is a Big Red Flag for someone like me.

Bad Privacy Notice Bingo!

Snark attack!

Having spent many, many hours reviewing privacy notices lately – both for the day job and for my own personal edification – I’m discouraged to report that most of them have a long way to go before they meet the requirements of Articles 13 and 14 of the GDPR, let alone provide an engaging and informative privacy experience for the data subject.

Because I am a nerd who cares passionately about making data protection effective and accessible, but also a sarcastic know-it-all smartarse, I created this bingo scorecard to illustrate the problems with many privacy notices (or “policies” as some degenerates call them) and splattered it across social media. Hours of fun.

GDPRubbish

Unless you’ve been living under a rock, you’ll have noticed that there are lots of people talking about GDPR – which is a good thing.

However, there is lots of nonsense being talked about GDPR – which is a bad thing.

My Twitter timeline, LinkedIn feed and email inbox are being deluged with advertising for GDPR compliance “solutions” and services – which is fine, as long as the product in question is treated as a tool in the toolbox and not a magic instant-fix-in-a-box spell for instant transformation.

Based on some of the twaddle I’ve seen being talked about GDPR lately, and on my own experience in supporting data protection within organisations, here is a list of markers which, should they appear in an article, advertisement or slideshow, are a warning to treat the rest of the content with a hefty pinch of salt.

Human Error

To err is human… to forgive, divine…

…(but to really screw things up, you need a computer!)

One can’t help noticing a recurring theme in the spate of data breach news reports these days. The phrase “human error” is coming up an awful lot. I’d like to take a closer look at just what that phrase means, and whether it is a helpful description at all.

What do you think when you hear that something happened due to a “human error”? Do you think “aww, the poor person who made a mistake; how awful for them, I hope someone gives them a hug, a cup of tea and the consolation that humans are fallible, frail creatures who can’t be expected to get stuff right all the time”? Or do you – like me – think to yourself: “h’mm, what this means is that something went wrong and that humans were involved. I wonder whether systems, processes and training were designed to robustly identify and mitigate risks, whether management support and provision of resources were adequate, and whether this is just a case of someone getting unlucky while dodging around policies in a commonly-accepted and laxly-monitored way.”

Premise: I fully believe that the statement “the breach was down to human error” is a total cop-out.

Why?

Let’s start with “error”. The dictionary definition says:

  1. A mistake
  2. The state or condition of being wrong in conduct or judgement
  3. A measure of the estimated difference between the observed or calculated value of a quantity and its true value

The first definition is probably the one that is called to mind most often when an occurrence is described as an “error”. Mistakes are common and unavoidable, everyone knows that. I believe that the phrase “human error” is used consciously and cynically to create the perception that information incidents are freak occurrences of nature (rather like hiccups or lightning) about which it would be churlish and unkind to take umbrage; and unreasonable to demand better.

But in my humble and personal opinion (based on nothing more than anecdote and observation), the perception thus created is a false one – in fact, breaches that occur solely as a result of genuine mistakes are rare. Even if an “oops” moment was the tipping-point, the circumstances that allowed the breach to take place are just as significant – and usually indicate a wider systemic failure of risk management, which could – and should – have been done better.

Risky behaviour that leads to a breach, though, is not usually a sincere mistake – it is either a calculated weighing of the odds, a failure to understand the risk, or ignorance of the possibility that a risk exists. Risky behaviour is *not* an unavoidable whim of Mother Universe (setting aside the philosophical implications, otherwise we’ll be here all day), but the output of a deliberate act or decision. We should not regard ‘risky behaviour which led to a realisation of the risk and unwanted consequences’ in the same way that we do ‘inadvertent screw-up due to human frailty’; to lump them together under the same heading of “human error” does a disservice to us all, by blurring the line between what is forgivable and what we should be demanding improvements to.

The human bit

Since we’re not yet at the stage of having autonomous, conscious Artificial Intelligence, it must follow that errors arising from any human endeavour will always be “human errors”. Humans design systems, they deploy them, they use (and misuse) them. Humans are firmly in the driving seat (discounting for the moment that, on the evidence so far, the driver is reckless, probably intoxicated, has no concept of risk management and is probably trying to run over an ex-spouse without making it look obviously like a crime). So, whether an information security or privacy breach is intentional, inadvertent, or a state in which someone got caught out doing something dodgy, describing the cause as “human error” is rather tautological and – as I’ve noted above – potentially misleading.

I believe that the phrase “human error” is a technically-accurate but wholly uninformative description of what is much more likely to be better described as human recklessness, human negligence, human short-sightedness, human malice or simple human incompetence. Of course, no organisation is going to hold their hands up in public to any of that, so they deploy meaningless platitudes (such as “we take data protection very seriously” – but that’s a diatribe for another day!), of which “the breach occurred due to human error” is one.

Take, for example, the common ‘put all addresses in the To: field of an email instead of BCC’ screw-up, which was the cause of an NHS Trust being issued with a Civil Monetary Penalty after the Dean Street clinic incident in 2015. Maybe the insertion of the email addresses into the wrong field was down to the human operator being distracted, working at breakneck speed to get stuff done, being under stress, or simply being blissfully unaware of the requirements of data protection law and email etiquette. But they should not carry all of the culpability for this incident – where was the training? Where were the adequate resources to do all the work that needed to be done in the time available? Most of all, where the hell was the professional bulk-emailing platform which would have obfuscated all recipient emails by default and therefore been a much more suitable mechanism for sending out a patient newsletter? (Provided, of course, that the supplier was carefully chosen, UK-based, tied to appropriate Data Processor contract clauses and monitored for compliance… etc. etc.) The management would seem to have a lot more to answer for than the individual who sent the email out.
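For the technically curious: the safeguard the Dean Street incident lacked can be sketched in a few lines. This is a minimal illustration, not any real mailing system’s code – the function names, addresses and the use of Python’s standard `smtplib`/`email` modules are my own assumptions. The point is structural: subscriber addresses go only into the SMTP envelope at send time, never into the visible To: or CC: headers, so no recipient can see the rest of the list.

```python
import smtplib
from email.message import EmailMessage

def build_newsletter(sender: str, subject: str, body: str) -> EmailMessage:
    """Build a newsletter whose headers never expose subscribers.

    The only address that appears in the visible headers is the
    sender's own. (Illustrative sketch; names are hypothetical.)
    """
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = sender          # visible header: the sender only
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_newsletter(msg: EmailMessage, subscribers: list[str],
                    host: str = "localhost") -> None:
    # Subscribers are passed as envelope recipients to send_message(),
    # not written into the headers -- the server delivers to each
    # address without revealing the list to the other recipients.
    with smtplib.SMTP(host) as smtp:
        smtp.send_message(msg, to_addrs=subscribers)
```

Designing the tool so that the safe behaviour is the default is exactly the kind of systemic control whose absence gets waved away as “human error”.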

So the next time you read of a data breach, privacy abuse or in fact any other type of incident at all, and see the phrase “human error”, stop and ask yourself: “What was the error?” Was it lack of appropriate training for staff? Cutting corners to cut costs? Failure to provide the appropriate tools for the job? Mismatch between the outputs demanded and the resources provided to deliver them? None of these are inevitable Acts of Nature, the way that occasional “oops” moments would be.

And the longer organisations are allowed to hide behind the illusion of unavoidability, the less likely they are to tackle the real problems.

How To Not Be An Arse

(a.k.a the futility of compliance-for-the-sake-of-it programmes)

Imagine there was a law* that says “don’t be an arse to other people” which contains a list of 8 general requirements for avoiding arse-ness, including (among others) “be fair”, “be honest”, “don’t be reckless or negligent” and “don’t deny people their rights”.

Then, hundreds of thousands of hours, billions of beer tokens and litres of sweat from the brows of assorted lawyers and auditors later, a number of standards, frameworks, guidance documents and checklists were produced to help everyone ensure that whatever they’re doing, they’re avoiding being an arse.

At which point, everyone’s efforts get directed towards finding some technical way to acquire a clean, shiny, glowing halo: ticking all of the boxes on the checklists, generating reams of ‘compliance’ paperwork, churning out Arse Avoidance Policies… but actually ending up as almost *twice* as much of an arse, because despite all of the shouting and scribbling and hymn-singing, what they are actually doing on a day-to-day basis looks remarkably arse-like (despite being called a “Posterior-Located Seating and Excretion Solution”; not the same thing at all) – since, as it turns out, arsing around is lucrative and being well-behaved is not so much.

And then the question is no longer “how do we avoid being arses?”, or even “what do we need to do to make sure we are not accidentally arses?”, but becomes “what is the bare** minimum we have to do in order not to appear to be arses?”

And that becomes the standard that (nearly) everyone decides to work to, writing long, jargon-filled statements explaining “why we are definitely not arses at all”, insisting that you must all complete a mandatory, dry-as-dust, uninformative half-hour “Anti Arse” e-learning module once a year (and calling it a “training programme” – hah!), hiring armies of lawyers to define the boundaries of “arse” and generally forgetting what it was that the law was trying to achieve in the first place. All of that costs quite a lot of money and – surprise surprise – doesn’t actually fulfil the intent of the law.

If you have to hide, obfuscate or misdirect from what you are really doing, then it’s quite likely that you are not achieving compliance with the law, no matter how much paperwork you generate or how shiny your halo looks.

It’s quite simple… just don’t be an arse.

 

(*in case you didn’t get it; that would be the Data Protection Act…..)

(**yes I had to get a ‘bare’ reference in there somewhere)
