
Category: Culture

“We take your privacy very seriously”

…says the intrusive ‘cookie consent’ popup which requires me to navigate through various pages, puzzle out the jargon and fiddle with settings before I can access the content I actually want to read on the site.

Here’s the thing. If your website is infested with trackers, if you are passing my data on to third parties for profiling and analytics, if your privacy info gets a full Bad Privacy Notice Bingo scorecard, then you DON’T take my privacy seriously at all. You have deliberately chosen to place your commercial interests and convenience over my fundamental rights and freedoms, then place the cognitive and temporal burden on me to protect myself. That’s the opposite of taking privacy very seriously, and the fact that you’re willing to lie about that/don’t understand that is a Big Red Flag for someone like me.

If you really took my privacy very seriously, you would use an analytics tool that doesn’t feed a huge surveillance behemoth – for example, Matomo instead of Google Analytics or Quantcast. Or just focus on producing high-quality, navigable content that makes me want to interact with you more without any of that stalkertech.

Your approach to consent would be discreet and respectful, allowing me to enable specific functionalities as and when they are needed, rather than demanding my attention immediately and trying to grab consent for everything straight away. Consent has to be obtained before cookies/trackers are placed/read, yes – but that doesn’t mean you should try and set as many of these as possible as soon as I land on your page.
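That ‘as and when needed’ approach isn’t hard to build. Here’s a minimal, hypothetical sketch (the function and purpose names are mine, not any real consent-platform API) of gating a tracker behind explicit, purpose-specific opt-in – the default is always “denied”, so merely landing on the page sets nothing:

```javascript
// Minimal sketch of purpose-specific, opt-in consent gating.
// Nothing runs until the visitor explicitly grants consent for
// that one purpose; the default is always "denied".

function createConsentStore() {
  const granted = new Set();
  return {
    grant(purpose) { granted.add(purpose); },
    revoke(purpose) { granted.delete(purpose); },
    allows(purpose) { return granted.has(purpose); },
  };
}

// A tracker only ever runs inside this guard, so consent
// precedes cookies – not the other way round.
function runIfConsented(store, purpose, action) {
  if (store.allows(purpose)) {
    return action();
  }
  return null; // silently skip: no cookie, no tracker, no nagging
}

const consent = createConsentStore();

// On page load: analytics is skipped because nothing was granted.
const before = runIfConsented(consent, "analytics", () => "analytics-loaded");

// Only after the visitor opts in does the action actually run.
consent.grant("analytics");
const after = runIfConsented(consent, "analytics", () => "analytics-loaded");
```

The design choice worth noting is the default-deny guard: the tracking code simply cannot execute before the grant, which is exactly the opposite of firing everything on page load and asking forgiveness afterwards.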

There are several ‘consent management’ solutions popping up (literally) all over the place – interrupting people’s reading, rendering badly on mobile, requiring a lowering of privacy protections to interact with, some even operating in a way which is contrary to law in the first place (I’m looking at YOU, website operators who remove the ‘Reject All’ button from the Quantcast dialogue).

Everyone moans about cookie banners and consent dialogues, regarding them as an unwanted intrusion and a pain in the butt. They are both. But here’s the thing – the problem isn’t that site operators are required to inform you about tracking/profiling/mucking about with data on your device; the problem is that this is done at all – on such a large scale, by so many, and without accountability. Behavioural advertising, demographically-targeted marketing, personal profiling – all these are, by nature, inimical to fairness, individual rights and freedoms. There’s a huge industry beavering away in the shadows trying to quantify and categorise and manipulate us for profit; and an even vaster network of ‘useful idiots’ capturing and feeding them the data they grow fat upon. Your data. My data. Your website? Your app?

Now, I accept that this is how much of the world works these days, even though I really don’t like it. I continue to campaign for change by supporting organisations such as the Electronic Frontier Foundation, Privacy International, NOYB, Liberty and the Open Rights Group, by giving professional advice based on ethics as well as risk and technicality (and making it clear which are which) and by doing as much work on educating the general public as I can spare time and energy for. I understand market[ing] forces. What I can’t bear is the slimy, self-justifying PR bullshit that’s spread like rancid butter over the surface of ‘compliance’.

Like saying “we take your privacy very seriously” while actively supporting an ecosystem which is privacy-hostile at best and privacy-abusive at worst. Like saying “we take your privacy very seriously” and then using meaningless copypasta template privacy info which bears no relation to the processing at hand. Like saying “we take your privacy very seriously” and not even bothering to take elementary precautions to limit or protect the personal data being snorted up at every turn.

One lesson I learned from my infosec days is one of distrust – the most likely time for you to hear or read “we take the security of your data very seriously” is in panicked press releases after an avoidable breach of that very data has occurred. Anecdotal, of course, but I see a very strong inverse correlation between loud blustering about how seriously security/privacy is taken, and how rigorously this is actually implemented. It’s become a bit of a shortcut to analysis – anyone who feels they have to squawk about it probably shouldn’t be trusted to actually be doing it.


When you don’t “take privacy very seriously”, no amount of gaslighting PR camouflage is going to be a convincing substitute. So maybe just stop saying it eh? No-one believes you anyway.

It’d be so refreshing to see a statement like “There is often a compromise to be made between individual privacy and commercial advantage. We do it like this because it is more [cost]-effective for us to achieve our business objectives, even though it may have an impact on you. Here is all the stuff that the law says we have to tell you:…”. A while back, a bunch of privacy nerds were having fun with the #HonestPrivacyInfo hashtag on Twitter – while amusing, this is also worth a read, because many of the examples are actually much more transparent and accurate than anything you’ll read in a company’s official ‘privacy policy’.

Just be warned….if you’re going to claim you take my privacy seriously, then I will require you to demonstrate that. And I will make a fuss if you don’t.

10 Anger Management Tips for DP Pros

Grrrrr! Gah! Aaarrrggghhhh!

Sometimes it feels like an uphill struggle, bringing data protection good practice to the masses. Sometimes it feels like a vertical climb up a razor-wire-covered fortress turret while hostile archers fire flame-tipped arrows down at you from overhead. I confess that sometimes I am a little short on patience and tolerance (although I try hard not to let it show!) and I do spend quite a lot of my time with gritted teeth and clenched fists. I’m probably not the only one – which is why I wrote this blog post. Despite my naturally sarcastic tone, the sentiment is genuine – and hopefully contains at least one nugget of actual good advice.

Take care of yourselves, don’t be ashamed to reach out for help when things get on top of you, and remember that come the Zombie Apocalypse, your survival will not be based on how successfully you got an organisation to implement data protection!

I present: 10 Anger-Management Tips for DPOs (I’ve said DPOs for brevity, but this applies pretty much to anyone working in any role within privacy and information governance!)

  1. Accept that your colleagues don’t care about your subject as much as you do. If they did, they’d be DPOs too. Not everyone is as enlightened as we higher mortals – feel compassion, not scorn.
  2. Learn the phrase “perfect is the enemy of good enough”. Recite it 100 times a day. Convince yourself you believe it.
  3. Publish useful, informative, entertaining, educational content as often and as prominently as you can. Make sure it is all tagged, indexed, searchable and accessible. Include a liberal sprinkling of amusing gifs, memes and cat pictures. You might be the only person who ever reads it so you may as well make it amusing.
  4. Practise the Serenity Prayer. You’re gonna need it, even if you don’t end up taking to the bottle for comfort.
  5. Remember, it’s not for you to ‘sign off’ on the organisation doing something unlawful. Make sure authorisation and acceptance of the risk is firmly pinned on someone above your pay grade. Get it in writing. Keep a copy.
  6. Make friends with your colleagues in health & safety, safeguarding, and infosec. They have the same problems as you do and you can all cry together in the canteen. Solidarity, comrades.
  7. Maintain your integrity. Admit when you’re wrong, don’t repeat your mistakes, debate in good faith, own, apologise and try to fix things when you screw up. Everyone’s gonna resent you enough already without giving them reasons to disrespect you as well. Plus, it will be less likely you’ll be hunted with pitchforks when you give advice others don’t like.
  8. Don’t take anyone’s word for anything. Chances are they don’t understand what they’re talking about anyway, so you might as well double-check before it becomes a problem landing on your desk with a post-it note saying “this needs fixing urgently”.
  9. Seek out your fellow DPOs and form a support group. There is much to be said for bonding with like-minded fellow warriors over therapeutic bitching sessions and lawful basis debates in the pub.
  10. Remind yourself that you’re one of the Good Folk. You care about rights, freedoms and responsibilities. You are the front line of defence against the dark arts of exploitation, discrimination, victimisation and greed. No-one else might recognise it, but the work you do is essential and worthy. *Fist bump*.

Meme Frenzy

At some point, I’m going to try and make a privacy notice delivered through the medium of internet memes. While playing about with the possibilities of this, I got totally sidetracked and ended up data-protection-ifying a load of popular memes for my own nerdy amusement.

Here are the fruits of my misdirected labour. I think I might need to get out more.

doge: dis policy, many data, such privacy, mor cookies, wow

We take your privacy very seri- Shut up!

One does not simply consent by reading a policy

Not sure if Controller or non-compliant Processor

I don't always need consent, but when I do it's specific, informed, freely-given and unambiguous

If you could actually take my privacy seriously that would be great

I read your privacy policy, it says you're tracking me, ohhhh no, SAR TIME

Brace yourselves - ePrivacy Reg is coming

Y u no tell me legal basis for processing

They said they use my data for advertising purposes. I sent them a SAR

Sells you stuff online - doesn't make you create an account

Just Culture 2: Risky Behaviour

Previously, I’ve introduced the concept of the “just culture” and explained the basic principle. In this blog post I will look at the types of behaviour that give rise to incidents and how, in a just culture, these would be addressed.

Hands up if you’ve ever done any of the following:

  • Politely held the door to your office open for a stranger without asking to see their ID
  • Re-used a password
  • Emailed work to your personal account to work on outside the office
  • Mis-addressed an email, or mistakenly used CC rather than BCC

Did it seem like a good idea at the time? (You can lower your hands now, by the way) Perhaps you were under pressure to get work done to a deadline, or maybe you couldn’t afford the cognitive effort of considering security policies at the time. These types of “incidents” occur every day, all over the place and in most cases they do not result in disaster – but one day, they could…and unfortunately, in most corporate cultures the blame will rest on the person who didn’t follow the policies.

In a just culture, blame is not appropriate and punishment is reserved only for a minority of behaviours – those driven by malicious intent or deliberate, knowing recklessness. None of the activities listed above really fall into that category, and so even if they did result in major data leakage, disruption or loss, they should not be met with punitive action – especially if everyone else is doing the same but getting away with it. The sysadmin who runs a private file-sharing server on the corporate network, or the manager who illegally snoops on their staff’s emails, should be punished – not those who are just trying to get on with their jobs.

Most incidents arise from “risky behaviour” rather than malice or knowing recklessness. Risky behaviour falls into two main categories:

  1. Genuine error (see http://missinfogeek.net/human-error/ for some further thoughts on that) – such as mis-typing a name, confusing two similar-looking people, being taken in by a highly-convincing well-crafted scam site or email or unknowingly putting your security pass in the pocket that has a hole in the bottom
  2. Underestimation or low prioritisation of the risks (perhaps due to conflicting imperatives – e.g. time pressure, budget constraints, performance goals) – this is where most risky behaviour occurs.

These behaviours should not be treated the same way, for that would be unjust.

In the case of 1), the appropriate response is consolation and a review of controls to identify whether there are any areas which could benefit from additional ‘sanity checks’ without making it too difficult for people to get their jobs done. Humans are imperfect and any system or process that relies on 100% human accuracy is doomed to fail – this is a design fault, not the fault of the errant.

The second type of behaviour is more challenging to mitigate, especially since human beings are generally rubbish at assessing risk on the fly. Add in cognitive dissonance, conflicting priorities and ego, and you end up with the greatest challenge of the just culture!

Explaining why the behaviour is risky, pointing out the correct approach and issuing a friendly warning not to do it again (OR ELSE) is the appropriate and fair response.

So how in general should risky behaviour be prevented? Education is the foundation here – not just a single half-hour e-learning module once a year, but frequent and engaging discussion of infosec risks using real-life anecdotes, analogies, humour and encouraging input from all.

On top of the education programme; there needs to be a candid look at business process, systems, procedure and tools – are they set up to make risky behaviour the path of least resistance or do they encourage careful thought and good habits?

Monitoring and correcting behaviour comes next, and it is critical that this be done impartially and with as much vigour at senior levels as for front-line and junior staff. If the C-suite can flout policy with impunity then not only will you struggle to achieve a successful just culture, but you also have a gaping big hole in your security defences.

A just culture relies on robust procedures, a series of corrective nudges and above all, consistency of responses in order to be effective. Far too often, individuals are thrown to the wolves for simply getting unlucky – forced to use non-intuitive or badly-configured systems, under pressure from management above, with inadequate resources and insufficient training, they cut the same corners as they see everyone else doing – and pay the price of the organisation’s failures.

Next time: building a just culture in a pit of snakes*

*something like that, anyway

‘Just Culture’: an introduction

As I noted in last week’s blog post, the phrase “human error” covers a lot of ground and fails to distinguish the causes of errors from one another; as a generic statement that “something happened that wasn’t supposed to”, it isn’t terribly helpful in incident analysis.

The “something” may cover a number of scenarios, behaviours and motivations but to unpick an incident and protect against further occurrences, the conditions and actions do need to be examined, because it is those which determine the appropriate response. This is where a “Just Culture” comes in.

For those of you not familiar with the phrase, the term “Just Culture” arose from the work on aviation safety by Professor James Reason in the late 90s and early 00s. Professor Reason recognised that fear of a punitive reaction to human error is likely to discourage reporting of incidents, whereas it would be more advantageous to foster  “an atmosphere of trust in which those who provide essential safety-related information are encouraged and even rewarded, but in which people are clear about where the line is drawn between acceptable and unacceptable behaviour.”

There is much written about the principles and practices of a Just Culture, which has been adopted in many safety-conscious industries including transport, construction and healthcare, and which I will refrain from regurgitating (if you’re interested, see the links at the end). My purpose here is to have a general moan about how far the information security industry has lagged behind in adopting a similar position – and why, personally, I think it’s time we caught up.

When individuals are afraid to report information security risks and incidents for fear of ‘getting into trouble’, apathetic resignation to broken systems and processes or simply because they don’t recognise a problem when it arises, those risks and incidents will not be managed – increasing the likelihood that they will accumulate to the point of causing serious damage or disruption.

Security policies and procedures are routinely breached for various reasons – they fail to reflect the needs and risk appetite of an organisation, they are difficult to find or to understand, or they demand a higher level of technological capacity than the organisation can muster. If the only time these breaches are identified is when the consequences are adverse – and the outcome is that individuals are punished for being ‘caught out’ doing what they see everyone else doing – then, human nature being what it is, more effort will go into concealing policy breaches than into rectifying the core problems that cause them, and breaches will continue to occur.

However, simply enforcing reporting of breaches and incidents won’t, on its own, result in any meaningful change if the root causes of incidents aren’t analysed and treated. In my next blog post I will look a bit deeper into the analysis of incident causes and the behaviours that contribute to their occurrence.

References:

“Just Culture: A Debrief” https://www.tc.gc.ca/eng/civilaviation/publications/tp185-3-2012-6286.htm

“Just Culture” http://www.eurocontrol.int/articles/just-culture

“Patient Safety and the Just Culture” https://psnet.ahrq.gov/resources/resource/1582

“Just Culture” Sidney Dekker: http://sidneydekker.com/just-culture/
