
Tag: data-protection

10 Legitimate Interests Lessons for Marketers

1. Just because you’re interested, doesn’t make it legitimate.

2. You can’t use LI to avoid getting consent when you suspect the answer will be “No”.

3. Whether LI can be applied depends on your own assessment of what you’re doing, why and how – which you will be expected to justify and defend.

4. LI is not ‘unclear’ or ‘ambiguous’; it requires thinking to be done and a decision to be made.

5. Publish your Legitimate Interests Assessments (LIAs) if you anticipate/plan to reject objections to processing.

6. If a law says you have to get consent for a processing activity, then forget about LI. You can’t use it. Move on.

7. LI is only a valid lawful basis for processing personal data if you’re adhering to all of the principles. It’s not a loophole around compliance.

8. If your LIA is post-hoc rationalisation of something you won’t consider ceasing to do even though you suspect it’s a bit dodgy, then you wasted your time. Just make sure you have funds set aside to deal with complaints, regulatory action and reputation damage when you get found out.

9. The ICO is not responsible for your continuing professional development.

10. No-one else can do your thinking for you.

Privacy vs Security: A pointless false dichotomy?

This is the text of a presentation I gave recently during Infosec18 week. By popular demand (i.e. more than three people asked), I’m re-posting it here for a wider audience. I also intend to record it as a downloadable audio file at some point when I have some free time (hahaha, what’s that???). I took out the specific case studies for the sake of brevity, but I will post those separately as Part 2.

This is how it went


Part 1: The Big Debate

You may have seen the ‘Privacy vs Security’ debate being argued in the news, on forums and at events over the past few years. Having worked in both disciplines, I find this question coming up a lot and I want to unpick it today because I’m not convinced that any of the debates I have seen have really got to the heart of the matter.

In order to answer the question “is privacy vs security a pointless false dichotomy?”, we must first define the terms we are discussing – otherwise we’ll be shouting about tangential irrelevancies at each other all day and not getting anywhere.

What are ‘privacy’ and ‘security’? They are easier to describe in comparison than to define in a vacuum.

Security is a very wide topic, and very context-dependent. There are many flavours of security, for example (nb: these are my own words for the purposes of clarity, please don’t post argumentative comments loaded with dictionary definitions):

  • Physical security – the integrity of person or premises
  • Information security – the Confidentiality/Integrity/Availability triangle model that relates to information and supporting systems
  • National security – the integrity of borders and infrastructure, often closely entangled with physical and economic security. Depending on the nation, there may also be a social and cultural element to how security is viewed.
  • Economic security – the integrity and availability of trade and financial matters.

However, I’m only going to address information security in this talk, because that’s what we’re all here for.

Privacy is the concept of personal autonomy; the integrity of both the tangible and intangible self. It’s solely focused on people (and in data protection law, those people have to be alive for the law to apply. Zombies do not get privacy rights).

Many people working in infosec are predisposed to think of privacy solely in terms of data confidentiality, but in doing so they misunderstand and misapply the concept. This actually leads to degraded privacy, so it’s definitely a bias to be mindful of and adjust for.

There are also different flavours of privacy:

  • Physical – being free from unwanted/unwarranted touching or restriction of movement
  • Data protection – transparency, fairness and control in relation to information about (living) people
  • Social – being able to associate with whomever you wish

These flavours of privacy are mostly defined in law. In the UK, we have the Data Protection Act 2018, the GDPR, the Privacy & Electronic Communications Regulations (soon to be the ePrivacy Regulation) and the Human Rights Act. However, as well as formal codification into law, there are also a variety of cultural expectations and social consensus around privacy.

The ways in which we use the words ‘security’ and ‘privacy’ are varied. We use these terms to describe both the desired position we are trying to achieve and the process of managing factors in order to achieve that position. Security and privacy are not just states of being but also the activities required to bring about and maintain those states.

Which one – the position or the approach – do we actually mean when we ask the question “privacy vs security”? It makes a difference, because the process of working towards one may well undermine the state of the other, if we’re not careful.

Security is not a binary on/off position. The goal is to achieve suitable security to manage risk within tolerances and capability. A regime of absolute security would be pointless; it would prevent everyone from getting stuff done. What you want is enough security. How much is enough? Well, that depends on what you are trying to achieve and how you plan to go about it.

Security is not an end unto itself – you don’t pursue a position of security simply because it brings rainbows and butterflies into your soul. You do it because you need to protect something sufficiently to allow it to function as intended.

Privacy is more of an end unto itself, based on the ideal that people aren’t just units of exploitable animated flesh but that everyone has a unique and valuable contribution to make to the great mosaic of life (even if that contribution is merely to serve as a warning to others), and that they should be allowed a degree of autonomy, freedom and dignity in which to do so.

Your views on whether that’s a good thing may vary but (in theory), this is what civilised democratic society has collectively agreed upon.

Privacy is also not a binary – for example, it is certainly not the opposite state to ‘in public’. I have the same right to be free from unnecessary interference when I walk down a public street as when I am in my home, and so does my data. Neither I nor my data can be grabbed and used however the grabber wishes, no matter how gratifying or lucrative the grab-and-use idea may be.

Privacy rights – i.e. not being subject to unwarranted interference – are qualified rights. This means that there will be circumstances where the good of the collective takes higher priority when in conflict with the rights or preferences of the individual. For example, your right to move about freely stops when you are imprisoned after being convicted of a crime. Your right to control how information about you is used becomes limited when that use is necessary to protect other people.

There are degrees of privacy, just the same as there are degrees of security; and those are also dependent on context and risk tolerance – but additionally, on other factors such as cultural values, moral principles and social norms.

Both words – “security” and “privacy” – relate to a spectrum of desired positions into which a variety of inputs are factored; and to the pursuit of achieving or maintaining those desired positions.


In considering whether security and privacy are really in conflict, it’s helpful to look first at where they align.

They are both intended to protect and defend things we consider to be worth protecting and defending.

The most obvious example of alignment is the principle within data protection (privacy in relation to information about living people), which states that

“personal data must be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical and organisational measures”. [Article 5.1(f) GDPR]

Clearly, unless personal data is protected against unintended or unauthorised uses (by securing it), then privacy will be affected – on both an abstract level (someone’s rights are infringed, although they may not realise it) and potentially on a practical level, resulting in adverse consequences such as inconvenience, harassment, fraud, discrimination or other mistreatment.

Therefore in this specific context, privacy and security are not at odds – rather privacy depends on security.


Privacy and security have a different focus, although context and circumstance can bring them closer together. Just as privacy goes beyond information security into the realms of fairness, lawfulness and transparency; so security also goes beyond privacy – extending outside the context of personal data and into business data: trade secrets, financial details, competitive advantage, regulatory requirements and operational necessities.

Privacy focuses on harm to the individual, whereas security focuses on harm to the organisation.

The question of whether ‘privacy vs security’ is a false dichotomy would require us to look at the areas where the two diverge if we were to consider it seriously. But I don’t think it’s even a question worth asking at all. It’s the wrong question – and usually only deployed to make a rhetorical and ideological point by someone with a vested interest in a particular answer.

Take, for example, the argument that increased mass surveillance of the general population is a necessary measure to keep that population safe. It is presented as a choice between ‘being watched all the time and staying safe’ vs ‘keeping other people’s noses out of your business and getting everyone blown up’. This is definitely a false dichotomy – usually followed by the maddening “nothing to hide = nothing to fear” trope. It is also nonsense, for a number of reasons. More surveillance means more data, but it does not automatically mean better analysis or response, especially when the resources for picking signal from noise are already overstretched. One does not locate more needles by adding more hay to the stack. Also, we already have mechanisms for targeted surveillance of people who the authorities think are up to no good, and this is a necessary control for a free and democratic society. Inevitably, collecting more data leads to more ways to use that data – whether well-intentioned or nefarious.

We simply cannot trust either individuals or groups of individuals to always act rationally, ethically (even if we could agree on what that looks like) and appropriately. Mass surveillance hugely increases both the likelihood and the potential impact of irrational, unethical or inappropriate action – action made possible, or justified, by its uncritically-accepted data – while the benefit to the desired security posture is not in proportion to the damage done to individuals’ rights and freedoms.

What’s the point then?

Actually, the questions we should be asking if we want to get stuff done, stay out of trouble, not be Bad Guys and keep the organisation running are the following:

Is my security posture incurring intolerable privacy risk?

Is my privacy posture incurring intolerable security risk?

Bear in mind here that “intolerable” is not just a reference to what you or your organisation is willing to accept, but also to what other individuals or society as a whole will accept; i.e. you must factor in legal obligations, contractual obligations and public opinion.

Neither of these questions means that one posture invalidates the other. These are commingled analogue spectra, not a binary OR gate.

If the answer to both questions is “no”, then the matter is settled. Keep on doing the good work and make sure you ask the questions again regularly.

If the answer to either question is “yes”, then in order to resolve the issue, you must ask more questions:

  • Can I achieve an equivalent security or privacy posture in another way?
  • If not;
    • Can I terminate or treat the risks without compromising on tolerances?
    • What is the range in cost, effort and feasibility of the options available to me?
  • How do I present this clearly to executive stakeholders?
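To make the sequence concrete, the two posture checks and their follow-ups can be sketched as a trivial routine. This is purely illustrative – the function name and the boolean inputs (the outputs of your own risk assessments) are my own assumptions, not a formal methodology:

```python
# Illustrative sketch only: encodes the posture-check questions above.
# The inputs are assumed to come from your own risk assessments.

def review_postures(security_risks_privacy: bool,
                    privacy_risks_security: bool) -> list[str]:
    """Return the follow-up questions raised by the two posture checks."""
    if not security_risks_privacy and not privacy_risks_security:
        # Both answers are "no": matter settled, re-ask regularly.
        return ["Matter settled - keep doing the good work and re-ask regularly."]
    # Either answer is "yes": resolve the issue with more questions.
    return [
        "Can I achieve an equivalent security or privacy posture another way?",
        "If not: can I terminate or treat the risks without compromising on tolerances?",
        "What is the range in cost, effort and feasibility of the options available?",
        "How do I present this clearly to executive stakeholders?",
    ]

# Example: security posture incurs intolerable privacy risk
for question in review_postures(True, False):
    print(question)
```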


In summary: it’s not “privacy vs security”; it’s “appropriate security AND appropriate privacy”. Managing the risks of both is not just about considering cost and reputation – there are also laws which have already defined the parameters of acceptable risk and these need to be taken into account.

Security is not privacy and privacy is not security. Confusing the two or trying to manage them as a single risk will likely lead to your failure at one or the other, if not both.

Be very suspicious of anyone who says privacy must be ‘sacrificed’ for security. There is already provision in law for balancing these. Nothing is risk-free, and even the complete negation of one would not guarantee the other. Therefore, there is no need to ‘sacrifice’ anything. Ask those people: which of YOUR rights and freedoms are they planning to take from you?

Part 2: Case studies will be posted soon

Whose Decision is it Anyway?

Controller/Processor determinations

(a.k.a how a data protection anorak spends their leisure time)

Update: Sorry that the tool is not currently working – My supposedly ‘unlimited’ free Zingtree account has expired, and they want £984 a year for me to renew it, which I can’t afford. Currently looking for alternatives – if you know of one, hit me up! I’ll post a downloadable text version of the tool very soon.

Following a lot of pre-GDPR kerfuffle online about Data Controller/Data Processor relationships (and the varying degrees to which these are direly misunderstood), I spent a geeky Sunday night putting together a decision tree tool which should – hopefully – help people who are getting confused/panicked/deeply weary of the search for answers.

It’s not intended to be legal advice, it’s not formal advice from me as a consultant and it’s not guaranteed to be absolutely 100% perfect for every possible scenario. It’s designed for the low-hanging fruit, the straightforward relationships (like standard commercial supply chain) rather than the multi-dimensional nightmare data sharing behemoths one tends to find in the public sector.
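Until the downloadable text version is up, here’s a minimal sketch of how a decision tree like this can be encoded in plain Python. The questions and leaf labels below are simplified illustrations of the controller/processor logic, not the tool’s actual content:

```python
# Hedged sketch of a controller/processor decision tree in plain Python.
# The questions and leaf labels are simplified illustrations only.

TREE = {
    # node: (question, branch-if-yes, branch-if-no)
    "start": ("Do you decide WHY the personal data is processed?",
              "controller_check", "processor_check"),
    "controller_check": ("Do you also decide the essential means "
                         "(what data, how long, who sees it)?",
                         "CONTROLLER", "JOINT_OR_SHARED"),
    "processor_check": ("Do you process the data only on another "
                        "organisation's documented instructions?",
                        "PROCESSOR", "SEEK_ADVICE"),
}

def walk(answers):
    """Follow a sequence of yes/no answers through the tree to a leaf label."""
    node = "start"
    for answer in answers:
        question, yes_branch, no_branch = TREE[node]
        node = yes_branch if answer else no_branch
        if node not in TREE:  # reached a leaf
            return node
    return node

print(walk([True, True]))   # decides purpose and means
print(walk([False, True]))  # acts only on documented instructions
```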

Anyway, here it is. Enjoy. If you like it, please tell others where to find it. If you have constructive criticism (that’s not “oh you missed out this incredibly niche complex scenario that would only ever happen every 100 years”) please tell me.

The Tool


Here are also some useful links:

Who’s in Control?

Human Error

To err is human; to forgive, divine…

…(but to really screw things up, you need a computer!)

One can’t help noticing a recurring theme in the spate of data breach news reports these days. The phrase “human error” is coming up an awful lot. I’d like to take a closer look at just what that phrase means, and whether it is a helpful description at all.

What do you think when you hear that something happened due to a “human error”? Do you think “aww, the poor person that made a mistake; how awful for them, I hope someone gives them a hug, a cup of tea and the consolation that humans are fallible, frail creatures who can’t be expected to get stuff right all the time”? Or do you – like me – think to yourself “h’mm, what this means is that something went wrong and that humans were involved. I wonder whether systems, processes and training were designed to robustly identify and mitigate risks, whether management support and provision of resources were adequate, and whether this is just a case of someone getting unlucky while dodging around policies in a commonly-accepted and laxly-monitored way”?

Premise: I fully believe that the statement “the breach was down to human error” is a total copout.


Let’s start with “error”. The dictionary definition says:

  1. A mistake
  2. The state or condition of being wrong in conduct or judgement
  3. A measure of the estimated difference between the observed or calculated value of a quantity and its true value

The first definition is probably the one that is called to mind most often when an occurrence is described as an “error”. Mistakes are common and unavoidable, everyone knows that. I believe that the phrase “human error” is used consciously and cynically to create the perception that information incidents are freak occurrences of nature (rather like hiccups or lightning) about which it would be churlish and unkind to take umbrage, and unreasonable to demand better.

But in my humble and personal opinion (based on nothing more than anecdote and observation), the perception thus created is a false one – in fact, breaches that occur solely as a result of genuine mistakes are rare. Even if an “oops” moment was the tipping-point, the circumstances that allowed the breach to take place are just as significant – and usually indicate a wider systemic failure of risk management which could – and should – have been done better.

Risky behaviour that leads to a breach, though, is not usually a sincere mistake – it is a calculated gamble on the odds, a failure to understand the risk, or ignorance of the possibility that a risk exists. Risky behaviour is *not* an unavoidable whim of Mother Universe (setting aside the philosophical implications, otherwise we’ll be here all day), but the output of a deliberate act or decision. We should not regard ‘risky behaviour which led to a realisation of the risk and unwanted consequences’ in the same way that we do ‘inadvertent screwup due to human frailty’, and to lump them together under the same heading of “human error” does a disservice to us all, by blurring the lines between what is forgivable and what we should be demanding improvements to.

The human bit

Since we’re not yet at the stage of having autonomous, conscious Artificial Intelligence, it must follow that errors arising from any human endeavour are always “human errors”. Humans design systems, they deploy them, they use (and misuse) them. Humans are firmly in the driving seat (discounting for the moment that, based on the evidence so far, the driver is reckless, probably intoxicated, has no concept of risk management and is probably trying to run over an ex-spouse without making it look obviously like a crime). So whether an information security or privacy breach is intentional, inadvertent, or a state in which someone got caught out doing something dodgy, describing the cause as “human error” is rather tautological and – as I’ve noted above – potentially misleading.

I believe that the phrase “human error” is a technically-accurate but wholly uninformative description of what is much more likely to be better described as human recklessness, human negligence, human short-sightedness, human malice or simple human incompetence. Of course, no organisation is going to hold their hands up in public to any of that, so they deploy meaningless platitudes (such as “we take data protection very seriously” – but that’s a diatribe for another day!), of which “the breach occurred due to human error” is one.

Take, for example, the common ‘puts all addresses in the To: field of an email instead of BCC’ screwup, which was the cause of an NHS Trust being issued with a Civil Monetary Penalty after the Dean Street clinic incident in 2015. Maybe the insertion of the email addresses into the wrong field was down to the human operator being distracted, working at breakneck speed to get stuff done, being under stress or simply being blissfully unaware of the requirements of data protection law and email etiquette. But they should not carry all of the culpability for this incident – where was the training? Where were the adequate resources to do all the work that needed to be done in the time available? Most of all, where the hell was the professional bulk-emailing platform which would have obfuscated all recipient emails by default and therefore been a much more suitable mechanism for sending out a patient newsletter? (Provided, of course, that the supplier was carefully chosen, UK-based, tied to appropriate Data Processor contract clauses and monitored for compliance… etc etc). The management would seem to have a lot more to answer for than the individual who sent the email out.
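For illustration, here’s roughly what ‘recipient addresses never appear in the headers’ looks like in code – a hedged Python sketch with made-up host, sender and recipient values (a proper bulk-mailing platform, properly contracted, is still the better answer):

```python
# Hedged sketch: a one-to-many mailout where recipients cannot see each
# other's addresses. All host/sender/recipient values are made up.
import smtplib
from email.message import EmailMessage

def build_newsletter(body):
    """Build a message whose visible headers contain no recipient addresses."""
    msg = EmailMessage()
    msg["From"] = "clinic-news@example.org"
    msg["To"] = "undisclosed-recipients:;"  # placeholder group, no real addresses
    msg["Subject"] = "Patient newsletter"
    msg.set_content(body)
    return msg

def send_newsletter(recipients, body, host="smtp.example.org"):
    msg = build_newsletter(body)
    with smtplib.SMTP(host) as smtp:
        # Addresses are passed only in the SMTP envelope (to_addrs),
        # never in the visible To:/Cc: headers - the effect of BCC.
        smtp.send_message(msg, to_addrs=list(recipients))
```

The point of the design is that the envelope (who actually receives the mail) and the headers (what every recipient can read) are separate things; the To:-field screwup happens when tooling conflates the two.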

So the next time you read of a data breach, privacy abuse or in fact, any other type of incident at all, and see the phrase “human error”, stop and ask yourself: “What was the error?” Was it lack of appropriate training for staff? Cutting corners to cut costs? Failure to provide the appropriate tools for the job? Mismatch between the outputs demanded and the resources provided to deliver them? None of these are inevitable Acts of Nature, the way that occasional “Oops” moments would be.

And as long as organisations are allowed to hide behind the illusion of unavoidability, they are unlikely to tackle the real problems.


This morning, I was spending my leisure time researching options for email newsletters. Just to be clear, this isn’t something I would necessarily choose to do for fun, but is linked to my role as Digital Officer for a certain professional association for information rights professionals.

All of the reviews I read seem to hold MailChimp up as cost-effective, easy to use and feature-rich. “Great”, I thought, and then the privacy nerd in me started muttering… I wasn’t surprised to see that MailChimp are a US company, as their inability to spell common words such as “realise” and “harbour” had already clued me up to this, but that doesn’t necessarily present an insurmountable data protection problem for a UK organisation looking to use their services (setting aside the current kerfuffle about Safe Harbour/Privacy Shield/NSA etc etc). I thought as a prospective customer of their services, I’d check out the privacy policy (nothing more embarrassing than accidentally using personal data unfairly or unlawfully when you’re acting as a professional organisation for privacy enthusiasts…).

And I found this:

(for the record; the annotations are mine).

Which basically translates to:

“We are going to follow you all over the web, conducting surveillance on you without telling you and then use what we have discovered to try and predict the best ways to manipulate you in order to make money for our customers, clients and suppliers.”

Oh yeah, and there’s also this: “As you use our Services, you may import into our system personal information you’ve collected from your Subscribers. We have no direct relationship with your Subscribers, and you’re responsible for making sure you have the appropriate permission for us to collect and process information about those individuals. We may transfer personal information to companies that help us provide our Services (“Service Providers.”) All Service Providers enter into a contract with us that protects personal data and restricts their use of any personal data in line with this policy. As part of our Services, we may use and incorporate into features information you’ve provided or we’ve collected about Subscribers as Aggregate Information. We may share this Aggregate Information, including Subscriber email addresses, with third parties in line with the approved uses in Section 6.[screenshot]”

Now, I have most definitely had emails from businesses that I’ve used in the past, which – upon unsubscribing – I have discovered are using MailChimp. No-one has ever told me that when I gave my email address to them, they would pass it on to a US company who would then use it for stalking and profiling me. Well, hur-hur, it’s the Internet, what did I expect?

Wait. Being “on the internet” does not mean “no laws apply”. And in the UK, for UK-registered organisations, the UK Data Protection Act does most certainly apply. You cannot contract out of your organisation’s responsibilities under DPA. Now, for those of you reading this who aren’t DP geeks (Hi, nice to see you, the party’s just getting started!), here’s a breakdown of why I think using MailChimp might be a problem for UK organisations….

The UK Data Protection Act has 8 Principles, the first of which is that “personal data shall be processed fairly and lawfully”. Part of “fair and lawful” is that you must be transparent about your use of personal data, and you mustn’t breach any of the Principles, commit any of the offences or use the data for activity which is otherwise inherently unlawful (like scams and fraud, for example). One key requirement of being “fair and lawful” is using a Fair Processing Statement (a.k.a. “Privacy Notice”) to tell people what you are doing with their data. This needs to include any activity which they wouldn’t reasonably expect – and I would think that having all of your online activity hoovered up and used to work out how best to manipulate you would fit squarely into that category. Or am I just old-fashioned?

Anyway, using MailChimp for email marketing if you don’t tell people what that implies for their privacy? Fail No.1.

Then there’s the small matter of MailChimp’s role in this relationship. Under DPA, we have Data Controllers and Data Processors. For the sake of user-friendliness, let’s call them respectively “Boss” and “Bitch”. The organisation that is the Boss gets to make the decisions about why and how personal data is used. The organisation that is the Bitch can only do what the Boss tells them. The terms of how the Boss-Bitch relationship works needs to be set out in a contract. If the Bitch screws up and breaches privacy law, the Boss takes the flak, so the Boss should put strict limitations on what the Bitch is allowed to do on their behalf.

Now, I haven’t seen the Ts and Cs that MailChimp are using, or whether there is any mention of Data Controller/Data Processor relationships, but I doubt very much whether they could be considered a proper Bitch, because they use a lot of subscriber data for their own ends, not just those of the organisation on whose behalf they are sending out emails. So if MailChimp aren’t a Bitch, then they are their own Boss – and so giving personal data to them isn’t the equivalent of using an agency for an in-house operation, it’s actually disclosure of the information to a third party to use for their own purposes (which may not be compatible with the purposes you originally gathered the data for). Now, one of the things you’re supposed to tell people in a privacy notice is whether you are going to disclose their data, what for, and to whom. You’re also not supposed to re-purpose it without permission. Oops again. Fail No. 2.

I’m gonna skirt past the 8th Principle (don’t send data overseas without proper protection), because there’s just so much going on at the moment about the implications of sending data to the US, we’ll be here for hours if I get into that. Suffice to say, if the Data Controller (Boss) is a US firm, you have no rights to visibility of your data, control over its accuracy, use, security or anything else (Principles 2-7). None. Kthxbye. That might be fine with you, but unless you are informed upfront, the choice of whether or not to engage with the organisation that’s throwing your data over the pond to be mercilessly exploited, is taken away from you. Not fair. Not lawful. Fail No.3.

Aaaaand finally (for this post, anyway) there’s the PECR problem. Simplified: PECR is the law that regulates email marketing, one of the requirements of which is that marketing by email, SMS and to TPS-registered recipients requires prior consent – i.e., you can’t assume they want to receive it, you must ask permission. It does however contain a kind of loophole where, if you have bought goods or services from an organisation, they are allowed to use email marketing to tell you about similar goods and services that you might be interested in (until you tell them to stop, then they can’t any more). This means that where the soft opt-in applies, you can send people email marketing without their prior consent (it’s a bit more complicated than that, but this isn’t a PECR masterclass – more info here if you’re interested).

However, PECR doesn’t cancel out the DPA, contradict it, or over-ride it. You must comply with both. And this means that any company relying on the soft opt-in to send email marketing via MailChimp is almost certainly in breach of the Data Protection Act unless, at the time they collected your email address, they very clearly a) stated that they would use it for email marketing purposes and b) obtained your permission to pass it to MailChimp to use for a whole bunch of other stuff. Ever seen anything like that? Nope, me neither. Fail No. 4.

So how come this is so widespread and no-one has sounded the alarm? Well, based on my observations, here are some reasons:

  1. No-one reads terms and conditions unless they are corporate lawyers. Even if the Ts and Cs were read and alarm bells were rung, chances are that the Marketing department or CEO will have a different idea of risk appetite and insist on going ahead with the shiny (but potentially unlawful) option anyway.
  2. By and large, very few organisations in the UK actually ‘get’ the Data Protection Act and their responsibilities under it. They also don’t really want to pay for DP expertise either, since it will undoubtedly open a can of worms that will cost money to fix and cause extra work for everyone. Much easier to take the ostrich approach and rely on the fact that….
  3. …the vast majority of UK citizens don’t understand or care about data protection either. Sometimes there is a gleam of interest when the word “compensation” pops up, but mostly they see it as a hurdle to be sneaked around rather than a leash on a snarling mongoose. Every now and again there is a spurt of outrage as another major breach is uncovered, but these are so common that “breach fatigue” has set in.
  4. Data-trading makes money, and ripping off people’s data/spying on them without giving them a choice/share of the cut/chance to behave differently makes more money than acting fairly and ethically.
  5. Fundamental cultural differences between the US and the EU’s approach to privacy. If you read this blog post by MailChimp’s General Counsel/Chief Privacy Officer, the focus is mostly on data security and disclosure to law enforcement. There’s little about the impact on personal autonomy, freedom of action or the principles of fairness that EU privacy law is based on. Perhaps that’s because most of that stuff is in the US Constitution and doesn’t need restating in privacy law. Maybe it’s because the EU has had a different experience of what happens when privacy is eroded. Maybe he ran out of time/steam/coffee before getting into all that.

Anyway, if you got this far, thanks for reading – I hope there’s food for thought there. I’m not advocating that anyone boycott MailChimp or anything like that – but if you’re gonna use them, you should consult a data protection expert to find out how to protect a) your organisation b) your customers and c) the rest of us.

Right, back to web design research it is……

