Press "Enter" to skip to content

Miss Info Geek Posts

Online Harms

Following on from an infuriated storm of tweets in response to the #OnlineHarms consultation, I thought the topic was worth expounding on, because the sheer fucking insanity of proposing that the Government should define ‘acceptable’ legal content and then force tech companies to police it evidently needs to be spelled out.

How did we get here?

Online harms don’t exist in a separate universe to offline harms, yet they invoke the average person’s inner authoritarian far more easily. Perhaps that’s because the ‘online’ aspect enables a scale of volume that only fifty years ago would have been inconceivable, and brings complexities which our squishy little human brains have not yet adapted to process without resorting to TL;DR: emotional knee-jerk reaction. Online interactions lack the physical elements of communication which humans have evolved to rely on – body language, tone of voice, facial expression – leaving room for misinterpretation; and it would appear that in those gaps, defensiveness breeds and contempt flowers.

Empathy can be found in digital interactions, but you have to look pretty damn hard for it. It takes time and effort to absorb the other person’s reasoning and perspective, it takes humility to put oneself in the other’s position, and it takes courage to examine the possibility that an adamantly-held belief may be contradicted by evidence, or that a conflicting belief may have equal merit. If you read an average of a hundred tweets, a dozen Facebook posts or LinkedIn articles, watch twenty YouTube clips and read three blog posts per day, then approaching them all with empathy is probably impossible. It’s exhausting. Ultimately, it’s more rewarding than allowing anger and incredulity to take hold – but since when have human beings been collectively good at delaying unhealthy gratification in the moment for abstract rewards in the future? (A question to which the answer is ‘never’.) So we have a situation where we feel more threatened by other people when we encounter them in a digital space.

Ok, ‘harmful’ content. What does that even mean? Some examples are obvious – direct threats of violence, doxxing, blackmail, sexual contact with children, fraud – and already prohibited by law. The barriers to enforcing those laws arise from the difficulties of investigation across borders (for which we have treaties) and the allocation of resources (for which we have democratic representation), rather than the digital factor itself.

It’s legal, Jim, but not as we like it

The content that falls on and across the blurry edge of ‘legal’ is more problematic. One person’s religious dogma may be another person’s hate speech. One person’s erotica may be another person’s affront to decency. Companies pay silly money to other companies to skirt the edges of falsehood in order to persuade buyers that they can’t live without the thing being sold (we call this advertising, and it’s not uncommon for ‘skirting the edges’ to turn into ‘hopping across the boundary and hoping no-one notices’). Again – already regulated, but ineffectively. Putting the companies that stand to benefit from manipulation of public opinion in charge of gatekeeping ‘correct’ opinions seems… counterproductive at best. A bit like putting the world’s biggest (and disturbingly unethical) porn company in charge of allowing access to other producers’ legal porn.

Haters gonna hate

Hateful content spreads like wildfire – in fact, quicker than actual wildfires. (Maybe we should start saying that wildfires spread like hate content?) Lots of people disapprove of, dislike or even detest other people, and where in the physical world they might be circumspect about letting them know that directly, online, hatred and bullying are unleashed without even knowing the targets personally or ever having spoken directly to them. Troll armies and hate mobs can be marshalled at the click of a button – but from the platforms’ point of view, these are all users whose eyeballs are available for showing adverts to and whose data is harvested for commercial benefit. The equation is simple – get rid of the people who set off the mobs rather than the mobs themselves. It’s the philosophers’ trolley problem writ large (with poor spelling and grammar).

I think Piers Morgan is a dickhead – should the Government be able to prevent me from saying so, simply because I’m saying it with pixels and not in the pub? As the meme goes: haters gonna hate. We already have hate speech laws (whether they are appropriate or effective is a topic for another day, but we have them, and prosecution of these offences requires due process of law). Sometimes a discomforting point of view is necessary, even if it is impolitely delivered. Our collective intolerance of emotional discomfort is self-sustaining, and is eclipsing our ability to analyse the merits and fallacies of opinions which we find disagreeable. The answer to this is education, not software engineering or removing fundamental rights and freedoms.

Fake news!

Propaganda, deliberate lies, wilful ignorance and intentional misinformation are spewed from a firehose of bought-and-paid-for accounts, while battles of ideology rage over the boundaries of “acceptable” opinion, turning mean and spiteful at the drop of a hat. All the while, tech platforms feed off our data, reducing us to datapoints which can be analysed, judged, manipulated and – most importantly – monetised. Children and adults alike are targeted by predators and tricked or coerced into abusive scenarios – sexual, financial, emotional, professional and more. Algorithms pigeonhole us according to how we measure up against the standard-issue Silicon Valley techbro specimen, and direct our behaviour towards the most profitable outcomes regardless of the social or humanitarian cost. Obviously Something Must Be Done, otherwise we might just as well be in The Matrix’s hellish vision of pod-bred batteries powering the Machines.

But even though Something Must Be Done, it does not follow that anything that is done must therefore be the Right Thing To Do. (The legal blogger David Allen Green wrote an absolutely brilliant parody of this phenomenon a while back, in a way that almost perfectly predicts the content of the proposed Online Harms law.) Unfortunately, human nature drives us reflexively towards answers which satisfy our emotional, zero-sum, fight/flight instincts, when our responses really need to be analysed and considered in order to engage our rational selves (as described by Daniel Kahneman in his book ‘Thinking, Fast and Slow’). And we end up with nonsense like this (#onlineharms) and this, and this, and this. All legislating around the edges of problems with complex and multiple social factors, to no great effect.

Who guards the guardians?

The proposal puts enormous power into the hands of those that evidence indicates we should trust with it least of all. The power to curtail freedoms of expression, speech, opinion and association will be bestowed on the basis of personal opinion (and, inevitably, profit motive) rather than due process of law. The power to define content as ‘harmful’ on the basis of no established metrics, no longitudinal studies, no tolerance for dissent and – it would appear – no critical thinking capabilities, is placed in the hands of people whose motivations have evidently diverged greatly from the upholding of liberal, law-based democracy. Some of them may even have noble intentions, but that doesn’t make their stupid, dangerous idea any less stupid or dangerous.

What could possibly go wrong?

Well, for a start, giving the likes of Facebook, Google and Amazon a justification for extending the dystopian degrees of surveillance they already conduct on everyone, rather than reining in their abuses of privacy, is a really bad move.

Giving the green light to suppress marginalised and minority voices when the opinions they express cause discomfort to privileged majorities. Yeah, yeah – appeals processes, review boards, lessons learned, etc. – but those avenues for redress are usually so convoluted and under-resourced that people become resigned to – and then accepting of – their disempowerment, rather than put themselves through the hassle of fighting to win back the rights that were unfairly denied them. Unfair denials that can take place in their thousands every microsecond, with no human intervention.

Enabling the insidious manipulation of public opinion through even more opacity and unaccountability. Far worse than Cambridge Analytica using psychographics and microtargeting to distort people’s view of the world to the benefit of their customers, this is a Government seeking to exercise control over who can say what to whom, and when – and over who else can join that conversation – when the topic is disturbing to some but NOT ACTUALLY ILLEGAL. (Expect all references to the damage and drawbacks of Brexit to disappear from the UK’s online presence.)

Terrorism is bad, mm’kay? We know that. But can speech which doesn’t itself contain threats of violence be terrorism in itself? And how can algorithmic judgement distinguish reporting on terrorism from actual threats? It doesn’t seem to be very good at that yet. Meanwhile, policy-makers’ ignorance of technology means that they are intent on legislating based on the capabilities they saw in an episode of ‘Spooks’, rather than realistic functionality.

If content is bad enough that something needs to be done, then it’s bad enough to make a law specific to that content, get it through Parliament, and enforce it with due process. We already have this for child sexual abuse images, threats, defamation, false advertising, and incitement to violence. If these laws are already not working, then how can adding more laws be expected to succeed? One might suspect that making such laws is nothing more than short-sighted posturing – an act irrelevant to the efficacy of the measures it describes, with no real connection to the publicly-stated desired outcome.

Children are being exploited, manipulated and put on dangerous paths. I’m actually talking about the amount of data collection and profiling that goes on under the bonnet of the homework apps, games and devices pushed at them by tech platforms working through schools and parents – and we propose to put those same actors in charge of those kids’ moral and emotional protection? Why not focus efforts on educating schools and parents to identify and respond to online harms, while educating the kids themselves on how to handle issues of consent, boundary violations, distressing content and spotting nonsense? Because that would take longer than the average political term in power, would not enrich technologists and would require effort to be made by lots of people who’d rather point the finger elsewhere and be spoon-fed their decisions.

Eating disorder forums, self-harm and suicide content – definitely harmful. Not illegal. Who on earth believes that not having access to these online spaces will magically result in happy, healthy humans? Why are ‘solutions’ of silencing and excluding the people who seek this content out through their laptops or smartphones, more palatable than investment in mental health support services, fostering respect for human rights and dignity, condemning body-shaming and sexualisation of pre-adolescents, reducing pressure from schools and employers to conform to ‘productivity’ metrics, discouraging entertainment that is based on tearing other people down, shaming and humiliating them? Same answer as above.

If our Government wants to tackle ‘fake news’, it could start with some of the bare-faced lies, factual inaccuracies and magical thinking that emanates from Westminster before getting involved in what the ordinary citizens say to one another on Twitter. It could implement the Leveson 2 recommendations, scrutinise the interests of the non-dom media owners, update defamation law so that a true statement is not considered defamatory even if it pisses the subject off and costs them money. Why don’t they? I think you have your answer…

You can’t ‘fix’ human nature with technology. We can’t even all agree on what a ‘fix’ looks like, let alone how to apply it in a way that doesn’t cause more, worse, problems. Some people are rude, some obstreperous, some misguided, some deluded, some venal. None of those things are illegal, even if they are fecking annoying. Making laws that don’t fix the problem at hand while creating lots of other problems, is not the answer, and we should stop participating in the collective delusion that it is.

10 Legitimate Interests Lessons for Marketers

1. Just because you’re interested, doesn’t make it legitimate.

2. You can’t use LI to avoid getting consent when you suspect the answer will be “No”

3. Whether LI can be applied depends on your own assessment of what you’re doing, why and how – which you will be expected to justify and defend.

4. LI is not ‘unclear’ or ‘ambiguous’; it requires thinking to be done and a decision to be made.

5. Publish your Legitimate Interests Assessments (LIAs) if you anticipate/plan to reject objections to processing.

6. If a law says you have to get consent for a processing activity, then forget about LI. You can’t use it. Move on.

7. LI is only a valid lawful basis for processing personal data if you’re adhering to all of the principles. It’s not a loophole around compliance.

8. If your LIA is post-hoc rationalisation of something you won’t consider ceasing to do even though you suspect it’s a bit dodgy; then you wasted your time. Just make sure you have funds set aside to deal with complaints, regulatory action and reputation damage when you get found out.

9. The ICO is not responsible for your continuing professional development

10. No-one else can do your thinking for you

“We take your privacy very seriously”

….says the intrusive ‘cookie consent’ popup which requires me to navigate through various pages, puzzle out the jargonerics and fiddle with settings before I can access the content I actually want to read on the site.

Here’s the thing. If your website is infested with trackers, if you are passing my data on to third parties for profiling and analytics, if your privacy info gets a full Bad Privacy Notice Bingo scorecard, then you DON’T take my privacy seriously at all. You have deliberately chosen to place your commercial interests and convenience over my fundamental rights and freedoms, then place the cognitive and temporal burden on me to protect myself. That’s the opposite of taking privacy very seriously, and the fact that you’re willing to lie about that/don’t understand that is a Big Red Flag for someone like me.

If you really took my privacy very seriously, you would use an analytics tool that doesn’t feed a huge surveillance behemoth – for example, Matomo instead of Google Analytics or Quantcast. Or just focus on producing high-quality, navigable content that makes me want to interact with you more without any of that stalkertech.

Your approach to consent would be discreet and respectful, allowing me to enable specific functionalities as and when they are needed, rather than demanding my attention immediately and trying to grab consent for everything straight away. Consent has to be obtained before cookies/trackers are placed/read, yes – but that doesn’t mean you should try and set as many of these as possible as soon as I land on your page.
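To make that concrete, here’s a minimal sketch of the default-deny approach in TypeScript. The names (`parseStoredConsent`, `maybeLoadAnalytics`) and the stored-consent format are my own illustrative assumptions, not any real consent-platform API: nothing is loaded unless the visitor has actively opted in, and a missing or malformed stored choice resolves to ‘no consent’.

```typescript
// Hypothetical sketch: gate tracker/analytics loading behind explicit opt-in.
// All names and the storage format are illustrative assumptions.

type ConsentState = { analytics: boolean; marketing: boolean };

// Default-deny: until the user actively opts in, nothing gets loaded.
const defaultConsent: ConsentState = { analytics: false, marketing: false };

// Parse whatever was persisted (e.g. in localStorage). Anything missing,
// malformed or not explicitly `true` is treated as "no consent given".
function parseStoredConsent(raw: string | null): ConsentState {
  if (raw === null) return defaultConsent;
  try {
    const parsed = JSON.parse(raw);
    return {
      analytics: parsed.analytics === true,
      marketing: parsed.marketing === true,
    };
  } catch {
    return defaultConsent; // corrupt value => deny, don't guess
  }
}

// Inject the tracker script ONLY once consent is confirmed; returns
// whether anything was loaded, so callers can't load-then-ask.
function maybeLoadAnalytics(consent: ConsentState, inject: () => void): boolean {
  if (!consent.analytics) return false;
  inject(); // e.g. append the analytics <script> tag to the DOM here
  return true;
}
```

The design point is the default: absence of a recorded choice means ‘do not load’, which is the opposite of the grab-everything-as-soon-as-you-land pattern the paragraph above complains about.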

There are several ‘consent management’ solutions popping up (literally) all over the place: interrupting people’s reading, rendering badly on mobile, requiring a lowering of privacy protections to interact with, and some even operating in a way which is contrary to law in the first place (I’m looking at YOU, website operators who remove the ‘Reject All’ button from the Quantcast dialogue). Everyone moans about cookie banners and consent dialogues, regarding them as an unwanted intrusion and a pain in the butt. They are both. But here’s the thing – the problem isn’t that site operators are required to inform you about tracking/profiling/mucking about with data on your device; the problem is that this is done at all – on such a large scale, by so many, and without accountability. Behavioural advertising, demographically-targeted marketing, personal profiling – all of these are, by nature, inimical to fairness, individual rights and freedoms. There’s a huge industry beavering away in the shadows trying to quantify and categorise and manipulate us for profit, and an even vaster network of ‘useful idiots’ capturing and feeding them the data they grow fat upon. Your data. My data. Your website? Your app?

Now, I accept that this is how much of the world works these days, even though I really don’t like it. I continue to campaign for change by supporting organisations such as the Electronic Frontier Foundation, Privacy International, NOYB, Liberty and the Open Rights Group, by giving professional advice based on ethics as well as risk and technicality (and making it clear which are which) and by doing as much work on educating the general public as I can spare time and energy for. I understand market[ing] forces. What I can’t bear is the slimy, self-justifying PR bullshit that’s spread like rancid butter over the surface of ‘compliance’.

Like saying “we take your privacy very seriously” while actively supporting an ecosystem which is privacy-hostile at best and privacy-abusive at worst. Like saying “we take your privacy very seriously” and then using meaningless copypasta template privacy info which bears no relation to the processing at hand. Like saying “we take your privacy very seriously” and not even bothering to take elementary precautions to limit or protect the personal data being snorted up at every turn.

One lesson I learned from my infosec days is one of distrust – the most likely time for you to hear or read “we take the security of your data very seriously” is in panicked press releases after an avoidable breach of that very data has occurred. Anecdotal, of course, but I see a very strong inverse correlation between loud blustering about how seriously security/privacy is taken and how rigorously it is actually implemented. It’s become a bit of a shortcut to analysis – anyone who feels they have to squawk about it probably shouldn’t be trusted to be actually doing it.


When you don’t “take privacy very seriously”, no amount of gaslighting PR camouflage is going to be a convincing substitute. So maybe just stop saying it, eh? No-one believes you anyway.

It’d be so refreshing to see a statement like “There is often a compromise to be made between individual privacy and commercial advantage. We do it like this because it is more [cost]-effective for us to achieve our business objectives, even though it may have an impact on you. Here is all the stuff that the law says we have to tell you:…”. A while back, a bunch of privacy nerds were having fun with the #HonestPrivacyInfo hashtag on Twitter – while amusing, it is also worth a read, because many of the examples are actually much more transparent and accurate than anything you’ll read in a company’s official ‘privacy policy’.

Just be warned….if you’re going to claim you take my privacy seriously, then I will require you to demonstrate that. And I will make a fuss if you don’t.

10 Anger Management Tips for DP Pros

Grrrrr! Gah! Aaarrrggghhhh!

Sometimes it feels like an uphill struggle, bringing data protection good practice to the masses. Sometimes it feels like a vertical climb up a razor-wire-covered fortress turret while hostile archers fire flame-tipped arrows down at you from overhead. I confess that sometimes I am a little short on patience and tolerance (although I try hard not to let it show!) and I do spend quite a lot of my time with gritted teeth and clenched fists. I’m probably not the only one – which is why I wrote this blog post. Despite my naturally sarcastic tone, the sentiment is genuine – and hopefully contains at least one nugget of actual good advice.

Take care of yourselves, don’t be ashamed to reach out for help when things get on top of you, and remember that come the Zombie Apocalypse, your survival will not be based on how successfully you got an organisation to implement data protection!

I present: 10 Anger-Management Tips for DPOs (I’ve said DPOs for brevity, but this applies pretty much to anyone working in any role within privacy and information governance!)

10 Anger management tips for DPOs

  1. Accept that your colleagues don’t care about your subject as much as you do. If they did, they’d be DPOs too. Not everyone is as enlightened as we higher mortals – feel compassion, not scorn.
  2. Learn the phrase “perfect is the enemy of good enough”. Recite it 100 times a day. Convince yourself you believe it.
  3. Publish useful, informative, entertaining, educational content as often and as prominently as you can. Make sure it is all tagged, indexed, searchable and accessible. Include a liberal sprinkling of amusing gifs, memes and cat pictures. You might be the only person who ever reads it so you may as well make it amusing.
  4. Practise the Serenity Prayer. You’re gonna need it, even if you don’t end up taking to the bottle for comfort.
  5. Remember, it’s not for you to ‘sign off’ on the organisation doing something unlawful. Make sure authorisation and acceptance of the risk is firmly pinned on someone above your pay grade. Get it in writing. Keep a copy.
  6. Make friends with your colleagues in health & safety, safeguarding, and infosec. They have the same problems as you do and you can all cry together in the canteen. Solidarity, comrades.
  7. Maintain your integrity. Admit when you’re wrong, don’t repeat your mistakes, debate in good faith, own, apologise and try to fix things when you screw up. Everyone’s gonna resent you enough already without giving them reasons to disrespect you as well. Plus, it will be less likely you’ll be hunted with pitchforks when you give advice others don’t like.
  8. Don’t take anyone’s word for anything. Chances are they don’t understand what they’re talking about anyway, so you might as well double-check before it becomes a problem landing on your desk with a post-it note saying “this needs fixing urgently”.
  9. Seek out your fellow DPOs and form a support group. There is much to be said for bonding with like-minded fellow warriors over therapeutic bitching sessions and lawful basis debates in the pub.
  10. Remind yourself that you’re one of the Good Folk. You care about rights, freedoms and responsibilities. You are the front line of defence against the dark arts of exploitation, discrimination, victimisation and greed. No-one else might recognise it, but the work you do is essential and worthy. *Fist bump*.

Meme Frenzy

At some point, I’m going to try and make a privacy notice delivered through the medium of internet memes. While playing about with the possibilities of this, I got totally sidetracked and ended up data-protection-ifying a load of popular memes for my own nerdy amusement.

Here are the fruits of my misdirected labour. I think I might need to get out more.

doge: dis policy, many data, such privacy, mor cookies, wow

We take your privacy very seri- Shut up!

One does not simply consent by reading a policy

Not sure if Controller or non-compliant Processor

I don't always need consent, but when I do it's specific, informed, freely-given and unambiguous

If you could actually take my privacy seriously that would be great

I read your privacy policy, it say's you're tracking me, ohhhh no, SAR TIME

Brace yourselves - ePrivacy Reg is coming

Y u no tell me legal basis for processing

They said they use my data for advertising purposes. I sent them a SAR

Sells you stuff online - doesn't make you create an account

Privacy vs Security: A pointless false dichotomy?

This is the text of a presentation I gave recently during Infosec18 week. By popular demand (i.e. more than three people asked), I’m re-posting it here for a wider audience. I also intend to record it as a downloadable audio file at some point when I have some free time (hahaha, what’s that???). I took out the specific case studies for the sake of brevity, but I will post those separately as Part 2.

This is how it went


Part 1: The Big Debate

You may have seen the ‘Privacy vs Security’ debate being argued in the news, on forums and at events over the past few years. Having worked in both disciplines, I find this question coming up a lot and I want to unpick it today because I’m not convinced that any of the debates I have seen have really got to the heart of the matter.

In order to answer the question “is privacy vs security a pointless false dichotomy?“, we must first define the terms we are discussing – otherwise we’ll be shouting about tangential irrelevancies at each other all day and not getting anywhere.

What are ‘privacy’ and ‘security’? They are easier to describe in comparison than to define in a vacuum.

Security is a very wide topic, and very context-dependent. There are many flavours of security, for example (nb: these are my own words for the purposes of clarity, please don’t post argumentative comments loaded with dictionary definitions)

  • Physical security – the integrity of person or premises
  • Information security – the Confidentiality/Integrity/Availability triangle model that relates to information and supporting systems
  • National security – the integrity of borders and infrastructure, often closely entangled with physical and economic security. Depending on the nation, there may also be a social and cultural element to how security is viewed.
  • Economic security – the integrity and availability of trade and financial matters.

However, I’m only going to address information security in this talk, because that’s what we’re all here for.

Privacy is the concept of personal autonomy; the integrity of both the tangible and intangible self. It’s solely focused on people (and in data protection law, those people have to be alive for the law to apply. Zombies do not get privacy rights).

Many people working in infosec are predisposed to think of privacy solely in terms of data confidentiality, but in doing so they misunderstand and misapply the concept. This actually leads to degraded privacy, so it’s definitely a bias to be mindful of and adjust for.

There are also different flavours of privacy

  • Physical – being free from unwanted/unwarranted touching or restriction of movement
  • Data protection – transparency, fairness and control in relation to information about (living) people
  • Social – being able to associate with whomever you wish

These flavours of privacy are mostly defined in law. In the UK, we have the Data Protection Act 2018, the GDPR, the Privacy & Electronic Communications Regulations (soon to be the ePrivacy Regulation) and the Human Rights Act. However, as well as formal codification into law, there is also a variety of cultural expectations and social consensus around privacy.

The ways in which we use the words ‘security’ and ‘privacy’ are varied. We use these terms to describe not only the desired position we are trying to achieve, but also the process of managing factors in order to achieve that position. Security and privacy are not just states of being but also the activities required to bring about and maintain those states.

Which one – the position or the approach – do we actually mean when we ask the question “privacy vs security”? It makes a difference, because the process of working towards one may well undermine the state of the other, if we’re not careful.

Security is not a binary on/off position. The goal is to achieve suitable security to manage risk within tolerances and capability. A regime of absolute security would be pointless, it would prevent everyone from getting stuff done. What you want is enough security. How much is enough? Well, that depends on what you are trying to achieve and how you plan to go about it.

Security is not an end unto itself – you don’t pursue a position of security simply because it brings rainbows and butterflies into your soul. You do it because you need to protect something sufficiently to allow it to function as intended.

Privacy is more of an end unto itself, based on the ideal that people aren’t just units of exploitable animated flesh but that everyone has a unique and valuable contribution to make to the great mosaic of life (even if that contribution is merely to serve as a warning to others), and that they should be allowed a degree of autonomy, freedom and dignity in which to do so.

Your views on whether that’s a good thing may vary but (in theory), this is what civilised democratic society has collectively agreed upon.

Privacy is also not a binary – for example, it is certainly not the opposite state to ‘in public’. I have the same right to be free from unnecessary interference when I walk down a public street as when I am in my home, and so does my data. Neither I nor my data can be grabbed and used however the grabber wishes, no matter how gratifying or lucrative the grab-and-use idea may be.

Privacy rights – i.e. not being subject to unwarranted interference – are qualified rights. This means that there will be circumstances where the good of the collective takes higher priority when in conflict with the rights or preferences of the individual. For example, your right to move about freely stops when you are imprisoned after being convicted of a crime. Your right to control how information about you is used becomes limited when that use is necessary to protect other people.

There are degrees of privacy, just the same as there are degrees of security; and those are also dependent on context and risk tolerance – but additionally, on other factors such as cultural values, moral principles and social norms.

Both words – “security” and “privacy” relate to a spectrum of desired positions into which a variety of inputs are factored; and to the pursuit of achieving or maintaining those desired positions.


In considering whether security and privacy are really in conflict, it’s helpful to look first at where they align.

They are both intended to protect and defend things we consider to be worth protecting and defending.

The most obvious example of alignment is the principle within data protection (privacy in relation to information about living people), which states that

“personal data must be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical and organisational measures”. [Article 5.1(f) GDPR]

Clearly, unless personal data is protected against unintended or unauthorised uses (by securing it), then privacy will be affected – on both an abstract level (someone’s rights are infringed, although they may not realise it) and potentially on a practical level, resulting in adverse consequences such as inconvenience, harassment, fraud, discrimination or other mistreatment.

Therefore in this specific context, privacy and security are not at odds – rather privacy depends on security.


Privacy and security have a different focus, although context and circumstance can bring them closer together. Just as privacy goes beyond information security into the realms of fairness, lawfulness and transparency; so security also goes beyond privacy – extending outside the context of personal data and into business data: trade secrets, financial details, competitive advantage, regulatory requirements and operational necessities.

Privacy focuses on harm to the individual, whereas security focuses on harm to the organisation.

The question of whether ‘privacy vs security’ is a false dichotomy would require us to look at the areas where the two diverge if we were to consider it seriously. But I don’t think it’s even a question worth asking at all. It’s the wrong question – and usually only deployed to make a rhetorical and ideological point by someone with a vested interest in a particular answer.

Take, for example, the argument that increased mass surveillance of the general population is a necessary measure to keep that population safe. It is presented as a choice between ‘being watched all the time and staying safe’ vs ‘keeping other people’s noses out of your business and getting everyone blown up’. This is definitely a false dichotomy – usually followed by the maddening “nothing to hide = nothing to fear” trope. It is also nonsense, for a number of reasons. More surveillance means more data, but it does not automatically mean better analysis or response, especially when the resources for picking signal from noise are already overstretched. One does not locate more needles by adding more hay to the stack. Also, we already have mechanisms for targeted surveillance of people who the authorities think are up to no good, and this is a necessary control for a free and democratic society. Inevitably, collecting more data leads to more ways to use that data – whether well-intentioned or nefarious.

We simply cannot trust either individuals or groups of individuals to always act rationally, ethically (even if we could agree on what that looks like) and appropriately. Mass surveillance hugely increases both the likelihood and the potential impact of irrational, unethical or inappropriate action – action made possible, or justified, by the uncritically-accepted data it gathers – while not benefiting the desired security posture in proportion to the damage it does to individuals’ rights and freedoms.

What’s the point then?

Actually, the questions we should be asking if we want to get stuff done, stay out of trouble, not be Bad Guys and keep the organisation running are the following:

Is my security posture incurring intolerable privacy risk?

Is my privacy posture incurring intolerable security risk?

Bear in mind here that “intolerable” refers not just to what you or your organisation is willing to accept, but also to what other individuals or society as a whole will accept; i.e. you must factor in legal obligations, contractual obligations and public opinion.

Neither of these questions means that one posture invalidates the other. These are commingled analogue spectrums, not a binary OR gate.

If the answer to both questions is “no”, then the matter is settled. Keep on doing the good work and make sure you ask the questions again regularly.

If the answer to either question is “yes”, then in order to resolve the issue, you must ask more questions:

  • Can I achieve an equivalent security or privacy posture in another way?
  • If not:
    • Can I terminate or treat the risks without compromising on tolerances?
    • What is the range in cost, effort and feasibility of the options available to me?
  • How do I present this clearly to executive stakeholders?


In summary: it’s not “privacy vs security”; it’s “appropriate security AND appropriate privacy”. Managing the risks of both is not just about considering cost and reputation – there are also laws which have already defined the parameters of acceptable risk and these need to be taken into account.

Security is not privacy and privacy is not security. Confusing the two or trying to manage them as a single risk will likely lead to your failure at one or the other, if not both.

Be very suspicious of anyone who says privacy must be ‘sacrificed’ for security. There is already provision in law for balancing these. Nothing is risk-free, and even the complete negation of one would not guarantee the other. Therefore, there is no need to ‘sacrifice’ anything. Ask those people: which of YOUR rights and freedoms are they planning to take from you?

Part 2: Case studies will be posted soon

Bad Privacy Notice Bingo!

Snark attack!

Having spent many, many hours reviewing privacy notices lately – both for the day job and for my own personal edification – I’m discouraged to report that most of them have a long way to go before they meet the requirements of Articles 13 and 14 of the GDPR, let alone provide an engaging and informative privacy experience for the data subject.

Because I am a nerd who cares passionately about making data protection effective and accessible, but also a sarcastic know-it-all smartarse, I created this bingo scorecard to illustrate the problems with many privacy notices (or “policies” as some degenerates call them) and splattered it across social media. Hours of fun.

[Image: Bingo scorecard showing things that don’t belong in a privacy notice]

I am not just about the snark

However, I am also a geek who would much rather there was no need for my hissy fits of piss-taking, and so in that spirit I shall deconstruct here why the items on the bingo scorecard are Bad Things to find in a privacy notice.

Bad Things

“We may….”

A privacy notice is a communication that needs to convey useful information, not a guessing game. If you say you ‘may’ do something, I’m left in the dark as to whether you’re actually doing it to MY data and, if so, when. If you’re going to do something, say you do it. If you’re going to do something but only under particular circumstances, then describe those circumstances. If you’re not going to do it, don’t even mention it.

“Personally Identifiable Information”

This is not the same thing as personal data, it’s a subcategory of personal data. When I see this in a privacy notice, it immediately says to me that either the organisation is oblivious to the premise and requirements of EU privacy law, or they are trying to pull a fast one by doing all kinds of stuff with de-identified personal data that they don’t want me to know about. More about the differences between “PII” and ‘personal data’ here:

“EU citizens”

You will not find the word “citizens” anywhere in the text of the GDPR. Feel free to do a search on the text if you don’t believe me. That’s because data protection rights are human rights, and residency status is not a variable for ascertaining humanity. It’s about data subjects located in the EU, Data Controllers carrying out activities in the EU or Data Controllers who are offering goods and services to people located in the EU, or who are monitoring the activity of people located in the EU. People. Not just citizens. If a citizen of the EU goes to a third country, they lose the protection of EU law.

“by <….>, you consent to this processing”

Consent must be informed, freely-given, specific and unambiguous. That means the data subject needs to take some kind of positive action to indicate their consent to processing which has been described to them, in circumstances where they have a genuine choice and where the consent for processing is not tied to an unrelated activity. By browsing a website and reading the privacy notice, I consent to……nothing at all. By wearing my socks on my ears, I have nice warm ears and look a bit daft but am still not consenting to anything at all.

If I were to provide my email address on a company’s website to enter into a prize draw, I would be consenting to having my email address used to select and notify the winner of the prize, and that’s all. If the company wants to use my email address to send me marketing then they have to get entirely separate consent from me to do so.

More about consent for data processing here:

“General Data Protection Regulations”

Just one Regulation. A big beast, to be sure – but a singular one. If an organisation can’t even get that right, what are the chances that they’ll be paying proper attention to what it actually says? Not great, I reckon.

ICO logo

You’re not allowed to use the ICO’s logo without their permission. If a website owner uses the ICO’s logo without permission then they are acting unlawfully by breaching copyright. If they are willing to act unlawfully in regard to intellectual property, what makes you think they will be any more ethical or diligent about processing your personal data, eh? At best, they are clueless. At worst, they are being deliberately deceptive. Either way, their privacy notice is not to be trusted and neither are they.

Refers to the DPO as the “Data Controller”

A Data Protection Officer is an individual who performs the functions described in Articles 37-39 of the GDPR for an organisation (either in-house or on an outsourced basis). A Data Controller is the organisation which determines the purpose and means of the processing of personal data. Even if the Data Controller is a sole trader, there would probably be a conflict of interest disqualifying them from being the DPO anyway (there’s one for the DP geeks to gnaw on). If an organisation doesn’t even know the difference between DPO and Data Controller, then the chances of them knowing enough about data protection obligations and rights to be able to process your personal data fairly and lawfully, are pretty slim.

“We keep your personal data as long as necessary”

See also; “as long as required by law”. More guessing games. How long is that then? Unless it’s something outrageous, unexpected or high-risk; why even bother to tell me about it? What is “necessary” and how do you justify it?

Oh, and if you’re saying there’s a law that requires you to do something with my personal data, please cite that actual law. Making a statement saying “we comply with the law” gets you no Brownie points – the whole point of the law is that you have to comply with it. You might as well make sure you say “We don’t chop off annoying people’s heads with axes” too.

One loooooong page/doc

The harder it is for me to read your privacy information, the more likely it is that I will suspect you’re up to no good and make the effort to scrutinise it. Now, that’s just me because I’m a suspicious-minded nitpicking smartarse, but even for people who don’t spend their leisure time examining privacy notices, the point of the whole exercise is – as I mentioned above – to effectively communicate information to people about what’s going on in relation to their personal data. The GDPR even says in Recital (39) that “The principle of transparency requires that any information and communication relating to the processing of [..] personal data be easily accessible and easy to understand”. Making me scroll through acres of dense small print until my brain turns to mulch is basically doing the opposite of what the GDPR requires.

(NB: If you want to see an absolutely beautiful privacy notice, have a look at this. Seriously. It’s the best bit of UX I have ever seen. I am a little bit in love……and probably need to get out more)

“From time to time…”

This is a phrase which conveys absolutely nothing in the way of useful information. Which times would those be? 3 times a year? Once a week? Under what circumstances? Every time I [example redacted in the interests of good taste and public decency]?

It reeks of ‘we couldn’t be bothered to think about this too hard’….or even ‘we daren’t tell them what’s really going on’. Either way – not a good look. A waste of pixels/printer ink.

Lists purposes separately to legal basis

This might keep auditors happy when they review your privacy notices so they can tick the “Article 13 requirements” boxes, but unless there is a clear narrative for the data subject to follow in relation to their personal data, it’s not actually going to meet the obligations of transparency. I want to know what’s happening with my data, under which circumstances, and why you think that’s allowed. Separate lists will not allow me to do that. Tell me that you’re going to use my postal address to send me news about your latest offers and that you reckon this is in your legitimate interests. Tell me that you have to keep Gift Aid declarations for 6 years because the Tax and Finance Act (or whatever) says you have to. Don’t tell me that there are a number of potential purposes for processing my personal data and then make me try to figure out which of the potential legal bases you’ve listed somewhere else is being used to justify the processing activities that you’ve described in yet a third separate list. Not transparent. Not helpful.
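For the fellow nerds: the “clear narrative” approach can be sketched as a simple data structure that keeps each purpose together with its data and legal basis, instead of three disconnected lists. Everything here is an invented example for illustration – it’s not a mandated format and the cited bases are hypothetical.

```python
# Hypothetical structure pairing each processing purpose with its data and
# legal basis, so the privacy notice tells one coherent story per purpose.
notice_entries = [
    {
        "purpose": "send you news about our latest offers",
        "data": ["postal address"],
        "legal_basis": "legitimate interests (Article 6(1)(f) GDPR)",
    },
    {
        "purpose": "retain your Gift Aid declaration for 6 years",
        "data": ["name", "address", "donation history"],
        "legal_basis": "legal obligation (Article 6(1)(c) GDPR)",
    },
]

def render_notice(entries):
    """Render one self-contained sentence per purpose."""
    lines = []
    for e in entries:
        lines.append(
            f"We use your {', '.join(e['data'])} to {e['purpose']}; "
            f"our legal basis for this is {e['legal_basis']}."
        )
    return "\n".join(lines)

print(render_notice(notice_entries))
```

The point is structural, not technological: each statement the data subject reads carries its own what, why and on-what-basis, with nothing to cross-reference.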

“administration purposes”

Administration is an activity not a purpose. It is not an end unto itself. No-one gets up in the morning and goes “ohhh, my whole reason for living is to administrate!” What is the administration activity and why is it being carried out? Perhaps you need to make sure my contact details are up to date so that you can chase me for my membership dues, which are a requirement of my agreement with you. Maybe you need to make sure that your event tickets are not sold to more people than the venue can accommodate. Obviously, there are some legal obligations your organisation must fulfil. So please tell me about them rather than skulking behind the diaphanous skirts of “administration”.

“including, but not limited to….”

If it’s worth mentioning, it’s worth telling me all of it. Examples are helpful but they do not replace the legal obligation to describe the processing, the purposes and the legal basis for the processing. If your organisation doesn’t actually know what you’re going to do with my data then I don’t want you to have it. If you know but you’re worried about telling me, then I really don’t want you to have it!

Looks and sounds like a contract.

Privacy information, a privacy notice or privacy policy (if you must) is not a legally-binding agreement. It’s not a deed or a contract. It’s a piece of marketing material that just happens to need to be scrupulously honest as well. A good privacy notice not only has to make you feel OK about how your data is being used (while not obfuscating, concealing or outright lying), it should make you want to read it because it is helpful and engaging! Privacy notices written by lawyers hoping to outsmart other lawyers are easy to spot – they’re the ones you’d rather scoop your eyes out with a spoon than spend any time reading (unless – perhaps – you’re THAT kind of lawyer). And don’t even get me started on the American convention of PUTTING REALLY IMPORTANT STUFF IN CAPITAL LETTERS OSTENSIBLY TO ‘DRAW ATTENTION TO IT’ BUT THEREBY RENDERING IT UTTERLY INCOMPREHENSIBLE TO ANYONE.

“Military-grade encryption”

Oh, do piss off.

Encryption is a tool to mitigate a particular type of risk. It is not always the appropriate tool and like any other tool, is only as good as the implementation and competence of the people using it. You could be using 3DES to protect the negotiation for your public key exchange, with your own CA in a bulletproof box, but if your sysadmin’s password is “Password” or you’ve mixed up your public and private keys, then you wasted a lot of time and money (rather like buying a rocket launcher then using it to bash your own head in).

If you couldn’t make head or tail of that last paragraph, then don’t worry – the people who write “military-grade encryption” into a privacy notice don’t know what any of it means either.
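For anyone who does want to make head or tail of it, the point that algorithm strength is no substitute for competent handling of secrets can be sketched in a few lines. This is a deliberately simplified illustration (an unsalted hash standing in for the whole security stack – not how you should actually store passwords), with all values invented:

```python
import hashlib
import secrets

# The algorithm here (SHA-256) is perfectly sound. The weak link is the
# secret it protects: a guessable sysadmin password falls to a trivial
# three-entry dictionary attack regardless of how "military-grade" the
# maths is.

def protect(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

sysadmin_hash = protect("Password")  # the weak link

dictionary = ["123456", "Password", "letmein"]
cracked = next((p for p in dictionary if protect(p) == sysadmin_hash), None)
print(cracked)  # the strong algorithm did not save us

# The same algorithm protecting a properly random secret resists the
# same attack - implementation and key hygiene are what mattered.
strong_hash = protect(secrets.token_urlsafe(32))
assert all(protect(p) != strong_hash for p in dictionary)
```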

“We take data protection very seriously”

See previous comment on boasting about not axe-murdering people.

In conclusion

A privacy notice isn’t there to cover your arse. Yes, it’s a legal requirement, but the purpose of that is not simply to make you jump through hoops like a Peke at Crufts. The purpose of the legal requirement to provide privacy information is not to give you something to point to in order to tick off the ‘transparency’ principle; it is the transparency principle. The data subject has the right to be informed. If all you’ve done is obfuscate, bore, deceive or puzzle them, then you have achieved the exact opposite of what the GDPR requires and must now go all the way back to the beginning and start redrafting your privacy info.


Whose Decision is it Anyway?

Controller/Processor determinations

(a.k.a how a data protection anorak spends their leisure time)

Update: Sorry that the tool is not currently working – My supposedly ‘unlimited’ free Zingtree account has expired, and they want £984 a year for me to renew it, which I can’t afford. Currently looking for alternatives – if you know of one, hit me up! I’ll post a downloadable text version of the tool very soon.

Following a lot of pre-GDPR kerfuffle online about Data Controller/Data Processor relationships (and the varying degrees to which these are direly misunderstood), I spent a geeky Sunday night putting together a decision tree tool which should – hopefully – help people who are getting confused/panicked/deeply weary of the search for answers.

It’s not intended to be legal advice, it’s not formal advice from me as a consultant and it’s not guaranteed to be absolutely 100% perfect for every possible scenario. It’s designed for the low-hanging fruit, the straightforward relationships (like standard commercial supply chain) rather than the multi-dimensional nightmare data sharing behemoths one tends to find in the public sector.
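Since the tool itself isn’t reproduced here, the core question it asks can be sketched in code. This is a heavily simplified, hypothetical sketch – it mirrors neither the actual decision tree nor formal guidance, and (as above) it is not legal advice:

```python
# The party that determines the *purposes and (essential) means* of the
# processing is the Data Controller; a party that processes personal data
# only on the Controller's documented instructions is a Data Processor.
# Real determinations (joint controllership, shared purposes, etc.) are
# far more nuanced than three booleans.

def classify(determines_purposes: bool,
             determines_essential_means: bool,
             processes_only_on_instruction: bool) -> str:
    if determines_purposes or determines_essential_means:
        return "Data Controller"
    if processes_only_on_instruction:
        return "Data Processor"
    return "Unclear - seek proper advice"

# A payroll bureau running payroll exactly as instructed by its client:
print(classify(False, False, True))
# The employer deciding why and how employee data is used:
print(classify(True, True, False))
```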

Anyway, here it is. Enjoy. If you like it, please tell others where to find it. If you have constructive criticism (that’s not “oh you missed out this incredibly niche complex scenario that would only ever happen every 100 years”) please tell me.

The Tool


Here are also some useful links:

Who’s in Control?

Tea, sex and data

Comparing consent for processing personal data with consent for sexual activity.

Many laws, professional obligations, contracts and standards make reference to “consent” as a basis or requirement for something to be done. As I’ve mentioned before in an earlier post, “consent” is not a tick box or a signature, it is a state of relationship between two (or more) parties.

With this in mind, I’m going to write about something we’re almost all enthusiastic about (sexual activity) and something I’m [also] very enthusiastic about (data protection) in the hope that comparing the two will lead to greater understanding of how to manage consent as a legal basis for processing personal data, while keeping your attention for long enough to explain…

If you haven’t already seen this, it’s an excellent analogy between sexual activity and cups of tea – almost every point made can also be related to processing of data. The main difference here is that a cup of tea is unlikely to have a lasting and damaging effect, whereas both unwanted sexual contact and unfair/unlawful processing of personal data have the potential to cause serious harm to individuals if they occur.*

Before I get into the similarities though, there are two ways in which consent for getting sexy and processing data are totally different.

1. You don’t *have* to get consent for data processing (and shouldn’t try to, if consent is not the appropriate legal basis) but you always need to make sure that your sexual activities are with consenting adults only.

2. Consent for happy fun time can be implied or inferred (carefully). A long-married couple probably don’t need to have a detailed conversation about whether to take advantage of the kids being out that evening – a speculative look in the direction of the bedroom/kitchen/sofa and a twinkle of the eye in response is probably enough to communicate “shall we?” “Yes!” effectively.

No such parallel exists with data processing – either you have an unambiguous and specific response to “can we use your data in this way for this purpose” or you don’t have consent.

Ok, those are the significant differences. So, what are the similarities between consent for sexual activity and consent for data processing?

What it’s for: specifically

Consent is not “one size fits all”, if you consent to A (whether A is a cheeky snog behind the bike sheds, or being profiled on social media in order to be served targeted advertising), that does not mean you have also consented to B (which might be a hand up your shirt – or having your social media data sold to an insurance agency to calculate your risk of having a driving accident). It doesn’t even mean that you have consented to future As (snogs or profiling), especially if those future As might take you by surprise. It certainly doesn’t mean that, having consented to A with one party, anyone else can join in without having to ask permission separately (I’m looking at you, data brokers).

Whether you have it depends on how you get it:

Evidence of consent may be a legal requirement in some scenarios, but that evidence itself is not “consent”, just a record that something was asked for and an affirmation provided.

Obviously, if you have been misled or misinformed as to the activity, not given enough information to make an educated decision or if you don’t really have a choice, then no amount of tickboxes, signatures, “I agree” buttons or recordings will suffice. You have not consented.

Obtaining consent before/during sexual activity doesn’t usually involve either paper or electronic records, although there are apps which purport to fill that……er….niche (I’m in complete agreement with Girl On The Net’s views on these apps, by the way [warning also probably NSFW]). However, asking “would you like me to….” or “how about if we…..” rather than just diving in is the right thing to do and doesn’t have to kill the mood – in fact, that kind of conversation can be quite good fun…..

A positive response is an indication of consent. No response, or a negative response is very very unlikely to be consent. If someone is impaired in some way so they can’t a) understand the decision or b) communicate their decision then they cannot consent. Back off.

Obtaining consent for processing of personal data doesn’t necessarily need to involve tickboxes or signatures although as evidence of consent is a legal requirement, those are some mechanisms you might want to consider using.
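On the data side, a minimal sketch of what a consent record might capture – who consented, to which specific purpose, when, via which mechanism, and whether it has since been withdrawn. This is not a mandated format, just an illustration of the facts you’d want to be able to evidence:

```python
from datetime import datetime, timezone

class ConsentRecord:
    """Hypothetical evidence of one consent: one subject, one purpose."""

    def __init__(self, subject_id: str, purpose: str, mechanism: str):
        self.subject_id = subject_id
        self.purpose = purpose        # one specific purpose per record
        self.mechanism = mechanism    # e.g. "unticked opt-in box on signup form"
        self.given_at = datetime.now(timezone.utc)
        self.withdrawn_at = None

    def withdraw(self):
        # "Yes" can turn to "No" at any time; record when it did.
        self.withdrawn_at = datetime.now(timezone.utc)

    def is_valid(self) -> bool:
        return self.withdrawn_at is None

record = ConsentRecord("subject-42", "monthly newsletter by email",
                       "opt-in tickbox")
print(record.is_valid())   # consent in place
record.withdraw()
print(record.is_valid())   # "yes" has turned to "no" - stop processing
```

Note the one-purpose-per-record shape: it makes “which As did they actually consent to?” answerable without guesswork.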

What’s important in both circumstances is that you get consent before you start getting jiggy/processing data.

It doesn’t last forever:

Once you have consent, you can do whatever it is you have obtained agreement to do, for as long as that consent was agreed to last. “Yes” can turn to “No” at any time. If you don’t give the other party the freedom to change their mind, then you don’t have valid consent.

Regret does not retrospectively turn a ‘yes’ into a ‘no’. While many of us may have woken up and thought “Oops” when recalling the night before; this does not invalidate any informed, freely-given consent that was provided at the time. The past cannot be undone, only learned from. Likewise, if I give an advertising agency permission to use my photo, while I can tell them to stop using it later, I can’t make them recall every copy of the image that they published while my consent was in place.

Withdrawal or refusal is not an invitation to try to continue:

No means no. End of. Once someone has withdrawn their consent you must stop doing whatever it was you obtained their agreement to do. Pleading, bullying, coercing, forcing – these are violations of consent and could be very serious, both for you and for the person whose preferences you have ignored. Emotional blackmail to get sex is a favourite tactic of hormone-crazed teenage boys and has (superficial) parallels with companies that send emails to opted-out addresses offering incentives to resubscribe. Teenage boys might not realise that what they are doing is wrong (educate them please, parents!) but companies have no excuse whatsoever.

It doesn’t automatically repeat:

“Yes” now does not mean “yes” to every future occurrence. “But you liked sucking my toes last week” does not mean that person wants to suck your toes right now, or at any time in the future. Put your socks back on. Similarly, asking an organisation to send you info about a specific event you’re interested in doesn’t mean they can send you messages about any other event they run.

It’s important to be clear:

Keep checking that ‘no objection’ has not turned into “no”. Consent must be informed to be valid, so if the other party has forgotten what they agreed to then you may not still have their consent – whether that’s the prospect of getting the silk scarves out, or tracking every location they take their phone to.

Proportionality is advised:

Signed agreements are not necessarily appropriate for either sexual activity or data processing (although they are relatively common in relationships that incorporate the exotic end of sexual activities [warning possibly NSFW] where the potential for miscommunication could have serious ramifications). Likewise, a signed declaration of consent to data processing is probably overkill for the majority of scenarios and is likely to increase both your administrative overhead and the annoyance you’re going to cause to the people whose data you want to process. However, as with exotic sexual activities; if there is potential for a high impact, especially any kind of harm to the individual from your processing then it’s likely that you will need to make your consent evidence more stringent and robust. (note: if the processing is *required* in order to carry out a contract, then you should not be asking for consent in the first place as it cannot be freely-given separately to the contract agreement itself).

Lastly; don’t be a git:

If you’re looking for ways to evade obtaining proper consent in order to exploit someone then you are a Bad Person. This applies in any context. Even if you don’t see what you’re doing as exploitation, fiddling with either someone’s physical or intangible self has real consequences – it should only happen with care, respect and communication.

So if you are considering processing someone’s personal data, first check the appropriate legal basis. If that’s consent, then ask them for it – being clear about what you want to do and why. Keep a record of their response. Check in with them after a while to make sure it’s still OK. Don’t be sneaky/deceptive/coercive/vague/ask for more than you actually need.

And practise safe sex, mm’kay?

*NB: I am *not* equating data misuse with sexual assault in terms of seriousness! Lives can be ruined by unfair/unlawful/careless data processing (the construction industry blacklist, exposing vulnerable people to their stalkers, medication errors, inaccurate criminal records, credit rating errors….) – these are all Really Bad Things, but nowhere near the horror of being assaulted.

Nothing to see here…

I read today in Infosecurity Magazine that the law firm Appleby, whose tax-sheltering habits are currently splattered all over the news thanks to a massive leak of internal data, has claimed that a) the attack was apparently a sophisticated professional-grade hack and b) there was no evidence of data having left their systems.

I laughed out loud

Apparently, a team of professional computer forensics geeks have been unable to identify how the data was exfiltrated. Fair enough, actually; it’s entirely possible that Appleby had no access controls or security logging in place (this is very common, since such things require time, money, effort and thought to set up, and corporate enthusiasm for that sort of thing is usually pretty scarce) and so there was simply no breadcrumb trail to follow. This has led them to conclude that a devilishly clever outside actor was responsible rather than a leak from some git on the inside. *Sceptical face* – it’s far more likely that an intrusion would leave traces than that an internal misuse of privileged access would. (I guess their insurance covers being hacked but not being stitched up by one’s own workforce #cynicalsmirk)

But wait a minute… no evidence that data was exfiltrated clearly does not mean that no data was exfiltrated… The data has been passed to a variety of media outlets, so it has definitely escaped somehow.

This is an important point – how often, after a reported data leak/loss/hack/etc have we heard a statement from the organisation affected that they have “no evidence” that any data was exposed, misused or extracted? (Rhetorical question; they all say that). The absence of evidence is not evidence of absence, and such claims should be taken to mean only that the organisation has limited information as to what really happened to the data. No-one should take reassurance from an open declaration of cluelessness.

The other point, about the sophistication of the tactics used to nab the data, is that everyone claims that every information security breach is a sophisticated attack – even when most of them turn out to be teenagers operating from their bedrooms, or result from an unwittingly obliging senior exec clicking on the wrong link or email attachment. I’m not saying that this particular depth charge wasn’t a high-tech military-grade IT Ninja attack… only that such things are awfully rare and largely unnecessary, thanks to the laxity of infosec controls in most places.

Anyway, if I were wealthy enough to make using offshore tax avoidance schemes worthwhile, I would probably demand a full infosec audit report from any law firm I was considering handing my data over to…..
