
Miss Info Geek Posts

Nothing to see here…

I read today in Infosecurity Magazine that the law firm Appleby – whose tax-sheltering habits are currently splattered all over the news thanks to a massive leak of internal data – has claimed that a) the attack was a sophisticated, professional-grade hack and b) there is no evidence of data having left their systems.

I laughed out loud 

Apparently, a team of professional computer forensics geeks has been unable to identify how the data was exfiltrated. Fair enough, actually; it’s entirely possible that Appleby had no access controls or security logging in place (this is very common, since such things require time, money, effort and thought to set up, and corporate enthusiasm for that sort of thing is usually pretty scarce) and so there was simply no breadcrumb trail to follow. This has led them to conclude that a devilishly clever outside actor was responsible rather than a leak from some git on the inside. *Sceptical face* – it’s far more likely that an intrusion would leave traces than that an internal misuse of privileged access would. (I guess their insurance covers being hacked but not being stitched up by one’s own workforce #cynicalsmirk)

But wait a minute….. “no evidence that data was exfiltrated” clearly does not mean that no data was exfiltrated…… The data has been passed to a variety of media outlets, so it has definitely escaped somehow.

This is an important point – how often, after a reported data leak/loss/hack/etc, have we heard a statement from the organisation affected that they have “no evidence” that any data was exposed, misused or extracted? (Rhetorical question; they all say that.) The absence of evidence is not evidence of absence, and such claims should be taken to mean only that the organisation has limited information as to what really happened to the data. No-one should take reassurance from an open declaration of cluelessness.

The other point, about the sophistication of the tactics used to nab the data, is that everyone claims that every information security breach is a sophisticated attack – even when most of them turn out to be the work of teenagers operating from their bedrooms, or to result from an unwittingly obliging senior exec clicking on the wrong link or email attachment. I’m not saying that this particular depth charge wasn’t a high-tech military-grade IT Ninja attack…..only that such things are awfully rare and largely unnecessary, thanks to the laxity of infosec controls in most places.

Anyway, if I were wealthy enough to make using offshore tax avoidance schemes worthwhile, I would probably demand a full infosec audit report from any law firm I was considering handing my data over to…..

I’m a muse|amused

Inspired by my #GDPRubbish rantings, the ever-droll Javvad Malik has put together a handy video guide for all those newly-minted “GDPR consultants” that have been mushrooming up, on how to make as much as possible from this shiny new market…..

(NB: this is parody and satire; anyone who actually does the things described herein has no business working in data protection at all and should GTFO ASAP)

Consent or not consent?

Following on from some of the ranting I’ve been doing about the current unhealthy obsession with consent for processing, here’s a funky tool that I have created for determining whether consent is the appropriate legal basis for processing under GDPR.

At the moment, it only covers Article 6 but I’m working on another one that addresses special categories of personal data as well.

Please let me know what you think about this tool in the comments section!

 



Verelox, insider threat and GDPR implications

If you haven’t heard about Verelox, they are a Dutch cloud hosting provider who’ve recently been wiped off the internet (along with all of the customers hosted with them) by what is reported to be an attack by an ex-sysadmin, who deleted customer data and servers.

I’ve been seeing tweets and discussions on tech and infosec forums, some of which have queried whether this circumstance would be a breach under GDPR for which regulatory penalties could be enforced. The answer to whether this incident represents a failure by Verelox to meet the requirements of GDPR is going to depend on many details which are not currently available; however, as a former infosec professional now turned privacy practitioner, I’d be inclined, if asked, to give the standard Data Protection Officer answer: “It depends”. Because it does.

The GDPR requires that organisations take “appropriate technical and organisational measures” to manage risks to the rights and freedoms of individuals whose data is being processed (Article 24.1) and specifically, to protect the confidentiality and integrity of personal data in proportion to the risks to the individual and the capabilities of available technology (Article 32.1).

In this case, it is very likely that Verelox will be a Data Processor rather than a Data Controller for any personal data that was stored/hosted/collected on their cloud platform, since they were providing infrastructure only and not making any decisions about how people’s information would be used. However, GDPR places explicit obligations on Data Processors as well as Data Controllers to “ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services” (Article 32), and it introduces joint liability for personal data breaches (defined as “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed” (Article 4.12)) through the right to compensation in Article 82. Interestingly, Article 82 refers simply to “any person” who has suffered damage, rather than to “natural persons” as the definition of personal data does, which may leave the door open for Verelox’s customers to make claims under GDPR rather than contract law to recover some of their losses arising from the incident. I’m not familiar with Dutch law, so I’ll leave that in the realms of speculation for the moment. What GDPR does appear to say is that Verelox could potentially be jointly liable with their customers for claims for damages from individuals as a result of this incident. Whether they are actually culpable is something that will need careful consideration, and this is where I put my infosec hat back on for a while…

Does the fact that this happened therefore mean Verelox’s measures were not appropriate? Well, again the answer is going to be “It depends”. Based on the information available in news reports at the moment, this seems to be a rare and extreme case of a malicious insider with a grudge acting independently and outside the law. Should the company be held responsible for this?

One of the factors to consider will be whether this damage was done while the individual was still an insider (i.e. employed as a systems administrator) or whether it happened after they left the role. If the attack was carried out post-employment, there is a possibility that Verelox dropped the ball, since the individual should have had their access revoked as soon as their employment came to an end, and in such a way that it would be difficult to trigger this sort of meltdown from the outside – in which case the “technical and organisational measures” Verelox had in place may not have been “appropriate”. Questions that should be asked are:

  • was there a standard procedure for revoking leavers’ access in a timely manner,
  • was that procedure followed in this particular case,
  • was there a culture of adherence to security procedures in general?

If the answer to any of these questions is “no” then Verelox might be in for a difficult time ahead.
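
(For illustration only: here’s a minimal Python sketch of the kind of cross-check an auditor might run between an HR leavers list and the accounts that are still active. The file names and column headings are entirely made up by me – this is not anything Verelox actually runs – but it shows how simple “was access actually revoked?” verification can be.)

```python
import csv
from datetime import date, datetime

# Hypothetical inputs, for illustration only:
LEAVERS_FILE = "hr_leavers.csv"        # columns: username, leaving_date (YYYY-MM-DD)
ACCOUNTS_FILE = "active_accounts.csv"  # columns: username, privileged (yes/no)

def load_leavers(path):
    """Map username -> leaving date for everyone HR says has left."""
    with open(path, newline="") as f:
        return {
            row["username"]: datetime.strptime(row["leaving_date"], "%Y-%m-%d").date()
            for row in csv.DictReader(f)
        }

def find_stale_accounts(leavers, accounts_path, today=None):
    """Return active accounts belonging to people who have already left."""
    today = today or date.today()
    stale = []
    with open(accounts_path, newline="") as f:
        for row in csv.DictReader(f):
            left_on = leavers.get(row["username"])
            if left_on and left_on < today:
                stale.append((row["username"], left_on, row.get("privileged", "unknown")))
    return stale

if __name__ == "__main__":
    for username, left_on, privileged in find_stale_accounts(load_leavers(LEAVERS_FILE), ACCOUNTS_FILE):
        print(f"STALE: {username} left on {left_on} but still has an active account (privileged: {privileged})")
```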

If the attack was planned and set in motion while the individual was an insider, could or should pre-employment vetting or line management support procedures have identified the possibility? This one is tricky, as any predictive measure of human behaviour is never going to be 100% accurate on an individual level. Previous and similar shenanigans carried out by a prospective or current employee could be an indicator of higher risk of future shenanigans occurring, but that really depends on the person and the circumstances. No record of any previous shenanigans may mean that this person has done it before but was never caught, that this person has never been in circumstances where this behaviour could be provoked, or simply that this person just wouldn’t do a thing like this in any circumstances. There’s just no way to tell in advance. Maybe this guy is a nutter who has a tendency to react destructively when upset – but that doesn’t mean we should be advocating for mandatory psychological examinations of all employees who are to be trusted with privileged access, as that would be a grossly disproportionate invasion of privacy (and not necessarily accurate enough to be worth the effort either…)

What about Disaster Recovery and Business Continuity Planning? Should these plans have included mitigation for this level of malicious damage by a privileged insider? Again, maybe – but it depends. Does malicious insider damage happen often enough to justify the expense, protocol and monitoring capability that would be required to prevent and detect this activity while managing both false positives and false negatives? While this sort of grudge-attack is always a possibility, it may make better business sense to develop, manage and support employees so that the chances of behaviour like this are reduced, rather than make the default assumption that everyone is a potential vandal or criminal and treat them accordingly. In any case, what organisation really has the resources and support available to maintain standby equipment and datastores in a way which makes them easy to fail over to in the event of an attack or disaster, but too difficult for an admin with a grudge to take out alongside the live system?

Hindsight is always 20/20-sharp and there are plenty of armchair experts gleefully pontificating about what they think Verelox should have done better or differently. In the current absence of detailed information, though, there’s no reason to pay any attention to any of them. It’s easy to say “well, Verelox should have done x, y, z; they’re idiots for not doing it” but far harder to balance the management approach for predictable but unlikely risks. Paying attention to managing the risks that can be managed, in a proportionate way that doesn’t stop the business operating, is the fine line that infosec teams must walk, often in difficult conditions: mostly unappreciated, frequently facing opposition from people who don’t understand or have different views of the risks and dependencies, probably under-resourced and constantly firefighting – that seems to be the norm for most operational infosec roles. There are cases where all you can do is put quick-recovery plans in place and buy insurance against the things that you really have no control over (like some loony destroying your business operations out of pique). This may well be one of them.

TL;DR version – if Verelox can demonstrate that they took reasonable and appropriate precautions to mitigate the risk of this kind of attack, then they are unlikely to be subject to penalties or remedies under GDPR. However, if they can’t demonstrate that their measures were developed and maintained to be appropriate to the risks, then they may be subject to regulatory enforcement (unlikely) or civil claims (possible). Whether GDPR would be the appropriate instrument under which to bring an action is not something I’m qualified to comment on.

What the GDPR does – and doesn’t – say about consent

Meme courtesy of Jenny Lynn (@JennyL_RM)
You may have noticed that the General Data Protection Regulation is rather in the news lately, and quite right too, considering there is only a year left to prepare for the most stringent and wide-reaching privacy law the EU has yet seen. Unfortunately, in the rush to jump onto the latest marketing bandwagon, a lot of misleading and inaccurate information posing as “advice” in order to promote products and services is flourishing, and it appears to be drowning out more measured and expert commentary. Having seen a worrying number of articles, advertisements, blog posts and comments all giving the same wrong message about GDPR’s “consent” requirements, I was compelled to provide a layperson’s explanation of what GDPR really says on the subject.

So, let me start by saying GDPR DOES NOT MAKE CONSENT A MANDATORY REQUIREMENT FOR ALL PROCESSING OF PERSONAL DATA.

and again, so we’re completely clear – GDPR DOES NOT MAKE CONSENT A MANDATORY REQUIREMENT FOR ALL PROCESSING OF PERSONAL DATA!!!

So what does GDPR say about consent? It says that to be allowed to process (i.e. do anything at all involving a computer or organised manual files) personal data, you must have at least one “legal basis” for doing so. Let’s call the list of legal bases “Good Reasons” for now, to keep the language friendly.

The Good Reasons are:

  • when you have consent to process personal data
  • when there is a contract between you and the individual (“data subject”), or between the individual and someone else, which requires you to process their personal data in order to fulfil its terms. This also applies to any processing that is needed in order to prepare or negotiate entering into a contract. Example: buying a house
  • when there’s a law or legal obligation (not including a contract) that you can only comply with by processing personal data – example: accident reports for health & safety records
  • when someone’s vital interests are at stake unless personal data is processed (usually only applicable to life-or-death situations – e.g. the emergency services having a list of employee names to identify survivors after a building collapse)
  • when you are acting in the public interest or under official public authority – such as political parties being allowed to have a copy of the electoral register (providing they don’t take the mickey in their uses of it)
  • when personal data needs to be processed for an activity which is in the “legitimate interests” of the organisation (“Data Controller”) or the individual.
Now, just because consent is listed first does not mean that it is the most preferable Good Reason, the most important or the default option. It is none of those things – in fact, when considering which Good Reason applies to processing, the other options should be tested first. If you picked consent because it was top of the list and consent was later withdrawn, but you realised there was a legal obligation to continue to process the data, you would be in a pickle – either you’d be in breach of privacy law (continuing to process when consent has been withdrawn) or in breach of the other legal obligation.

Please note that opting for “legitimate interests” as the Good Reason is not a way of dodging around the prospect that consent may be withdrawn or refused: the individual has a right to object to processing carried out on the basis of “legitimate interests” (and where the processing is for direct marketing, that right to object is absolute). All “legitimate interests” does is save you the effort of having to obtain and demonstrate specific, informed and freely-given consent before you can have or start using the data – you still have to weigh your interests against the individual’s rights and be able to show your working.

When it comes to special categories of personal data (formerly known as “sensitive personal data”), there is another set of legal bases (we’ll call these Damn Good Reasons), at least one of which must also be met for the processing to be allowed. In fact, GDPR says that unless one of these Damn Good Reasons is applicable, then you’re not allowed to process special categories of personal data at all.

    The Damn Good Reasons are:

  • When you have explicit consent
  • OR

  • When employment law, social protection law or social security law says you have to do something that requires the processing of special categories of personal data
  • When the processing is required in someone’s vital interests but the individual is incapable of giving consent
  • When the processing is necessary and carried out by a trade union, philosophical or religious non-profit organisation to administer their membership operations
  • When the individual has already and deliberately made the data public
  • When the processing is necessary to defend legal rights, legal claims or for the justice system to function
  • When the processing is necessary in the public interest (just like in the Good Reasons list)
  • When the processing is necessary in order to provide health care, treatment and management of health care services
  • When public health may be at risk if the processing isn’t carried out
  • When the processing is necessary for archiving, historical or scientific research, or statistical analysis
Again, although consent tops the list, it does not mean that it should be the first choice of Damn Good Reason. As with the other list, it is wise to consider first whether there are other Damn Good Reasons that apply and only choose consent where there are no alternatives.
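
(If it helps to see the “test the other options first” ordering written down as logic, here is a minimal sketch of a decision aid. The wording of the questions and their order are my own simplification for illustration – it is emphatically not a substitute for reading the Regulation or getting proper advice.)

```python
# A deliberately simplified sketch of the "test the other Good Reasons before consent"
# ordering described above. The questions and their order are my own illustration,
# not an official checklist.

ARTICLE_6_CHECKS = [
    ("contract", "Is the processing necessary to perform, or prepare for, a contract with the individual?"),
    ("legal obligation", "Is there a law (other than a contract) you can only comply with by processing the data?"),
    ("vital interests", "Is someone's life on the line unless the data is processed?"),
    ("public interest / official authority", "Are you acting in the public interest or under official authority?"),
    ("legitimate interests", "Is the processing necessary for legitimate interests not overridden by the individual's rights?"),
    # Consent comes last on purpose: only pick it if nothing above applies.
    ("consent", "Can you obtain specific, informed, freely-given and unambiguous consent?"),
]

def suggest_basis(answers, special_category=False):
    """answers: dict mapping basis name -> True/False. Returns the first applicable basis."""
    for basis, _question in ARTICLE_6_CHECKS:
        if answers.get(basis):
            if special_category:
                return f"{basis} (plus you ALSO need an Article 9 condition - a 'Damn Good Reason')"
            return basis
    return "no lawful basis identified - don't process the data"

# Example: a statutory reporting duty means consent never comes into it.
print(suggest_basis({"legal obligation": True, "consent": True}))
```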

    There is some confusion at the moment about the difference between “consent” (Good Reasons) and “explicit consent” (Damn Good Reasons), especially as GDPR says that for any consent to be valid, it must be “unambiguous”. I’m going to leave the dissection of that to greater minds than mine (see refs). However, I will say that when in doubt, go for whichever approach gives you the most solid evidence.

    So that’s what GDPR says about whether and when you need consent.

HOWEVER – another law (the Privacy & Electronic Communications Regulations, aka “PECR”) says that you must have prior consent before sending unsolicited direct marketing by email (subject to a limited “soft opt-in” exemption for existing customers). This is not the same as the Good Reason/Damn Good Reason of “[explicit] consent for processing”, but the two separate requirements are often confused. It may be in your organisation’s legitimate interests to collect, store and analyse contact info, but if you are emailing unsolicited direct marketing messages you will also need to have obtained consent for email marketing from the recipient.

    A few words on mechanisms vs outcomes (if you’re still reading, congratulate yourself on your fortitude!)

‘Consent’ is an outcome – you and the individual have achieved a defined, mutually-understood relationship in which you as a Data Controller can process their personal data for a particular purpose and in a particular way. This outcome needs to be an ongoing state of affairs. If the individual later decides to change the relationship and no longer allow you to process their data, then you no longer have consent (and must stop any current or future processing).

    Tickboxes, signatures and “click here” buttons are mechanisms for obtaining consent. However, if the agreement you have obtained using this mechanism is not specific, informed and freely-given then you do not have valid consent under data protection law.

    Transaction logs, screen prints, signed documents and call recordings are evidence for the process of obtaining consent. These are only as good as the outcome that the process supports. If the individual has been misled, or they dispute that the processing you are doing is what they actually agreed to, or the processing purpose + Good/Damn Good Reason was not made clear to them, or they have simply changed their mind then you do not have valid consent even if you have evidence that consent was asked/supplied at one point in time. Consent is not a fire-and-forget activity, and consent obtained once is not set in stone forever.

So, in order to be able to get and keep valid consent, you need to have good processes for obtaining, maintaining and verifying the outcome, i.e. the relationship between you and the individual. This means careful attention to training, customer service and the content of privacy notices.
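
(Purely as an illustration of consent being an ongoing, revocable state rather than a one-off tickbox, here’s a sketch of what a consent record might need to capture. The field names are my own invention, not anything prescribed by GDPR.)

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class ConsentRecord:
    """Illustrative only: the outcome (an ongoing, revocable agreement) plus the
    evidence of how it was obtained. Field names are my own invention."""
    subject_id: str
    purpose: str                      # the specific processing purpose agreed to
    obtained_at: datetime
    mechanism: str                    # e.g. "web form tickbox", "signed paper form"
    evidence_ref: str                 # e.g. a transaction log ID or document reference
    withdrawn_at: Optional[datetime] = None
    history: List[str] = field(default_factory=list)

    def withdraw(self, when: Optional[datetime] = None) -> None:
        """Withdrawal must stop current and future processing for this purpose."""
        self.withdrawn_at = when or datetime.utcnow()
        self.history.append(f"withdrawn at {self.withdrawn_at.isoformat()}")

    def is_valid(self) -> bool:
        """Consent only counts while it has not been withdrawn."""
        return self.withdrawn_at is None
```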

      So, in summary (well done for getting this far!)

GDPR does not say “all processing requires consent” – and anyone who says that it does clearly does not know what they are talking about. Ignore them.
    GDPR says that sometimes you will need to get consent and when that is the case; it sets out the standards that you must meet.
    Consent for unsolicited electronic marketing as required by PECR is not the same thing as consent for processing of data described in GDPR.

    I hope that clears it all up.

    More about consent under GDPR if that is the Good Reason/Damn Good Reason you need to use:

    https://www.twobirds.com/~/media/pdfs/gdpr-pdfs/23–guide-to-the-gdpr–consent.pdf?la=en
    https://www.taylorwessing.com/globaldatahub/article-understanding-consent-under-the-gdpr.html
    http://privacylawblog.fieldfisher.com/2016/the-ambiguity-of-unambiguous-consent-under-the-gdpr/
    https://www.whitecase.com/publications/article/chapter-8-consent-unlocking-eu-general-data-protection-regulation

    GDPRubbish

    Unless you’ve been living under a rock, you’ll have noticed that there are lots of people talking about GDPR – which is a good thing.

     

    However, there is lots of nonsense being talked about GDPR – which is a bad thing.

     

My Twitter timeline, LinkedIn feed and email inbox are being deluged with advertising for GDPR compliance “solutions” and services – which is fine, as long as the product in question is treated as a tool in the toolbox and not a magic fix-in-a-box spell for instant transformation.

     

    Based on some of the twaddle I’ve seen being talked about GDPR lately, and my own experience in supporting data protection within organisations, here is a list of markers which, should they appear in an article, advertisement or slideshow, should be a warning to treat the rest of the content with a hefty pinch of salt.

     

    1. Banging on about fines. Yes; there is a big maximum fine. No, it’s unlikely to be enforced except for the most egregious cases of reckless negligence. The ICO has never levied the maximum penalty for any breach ever. Based on the evidence available, fines alone are not really a convincing justification for compliance.
2. Obsessing about consent. Consent is only one of a number of possible legal bases for the processing of personal data. It may not be the most appropriate, desirable or “compliant” basis to select, and insisting on consent where there is a statutory or contractual requirement for processing personal data, or where the individual has no real choice whether to give consent, may result in “unfair processing” which could draw regulatory enforcement or litigation.
    3. Focusing on infosec and infosec tech. Information security (the “confidentiality and integrity” principle) is just 1 of 7 principles and doesn’t even start to fulfil obligations around rights or fairness. While it is important, focusing on infosec to the exclusion of the other principles is just as likely to cause serious problems as forgetting it altogether.
4. Claiming that encryption is a mandatory requirement. Yes, it is mentioned specifically in a few places (Recital 83, Article 6, Article 32, Article 34), but only as an example of a tool with which to mitigate risk. Whether you need it depends on the “scope, nature and context” of processing. Just having encryption will not make you “compliant”, and not having encryption on ALL TEH THINGS will not mean that data is at risk of exposure.
5. Making it all about “compliance”. A finding of “compliance” in an audit is merely a snapshot of a point in time, assuming that the audit itself was sufficiently robust. A compliance-focused attitude often leads to ‘gaming the system’ (as anyone who has ever had an argument about scoping for PCI-DSS or ISO 2700x can attest). Ticking boxes does not produce the intended outcome on its own – the paperwork must match reality. GDPR requires your reality to uphold principles, obligations and rights. If you’re not doing this in practice, no amount of audit reports, certificates or checklists will save you when it all goes wrong. Think “maturity”, “assurance”, “quality” and “effectiveness” rather than “compliance”.
6. Insisting that only lawyers can be DPOs. There are some very good data protection lawyers out there in the wild, but also an awful lot of lawyers who know almost nothing about privacy law. There are many experienced and competent data protection professionals who know privacy law inside-out but do not have a law degree. The only reason for insisting on having a lawyer as a Data Protection Officer or DP Lead is if the lawyer is *already* a DP specialist with business, communications & technical skills. The “lawyer” part is incidental.
    7. Marketing GDPR stuff by breaching other laws (PECR) or in breach of DPA/GDPR itself (were you given a privacy notice about the use of your information for marketing purposes? Is it a fair use of your personal data?)
    8. Calling it the “General Data Protection Regulations”. Seriously, people. It’s Regulation (EU) 2016/679, singular (even though there is a lot of it).

     

OK, those are the “approach with caution” signs. But how to find good advice on GDPR? Here are some pointers for spotting people who probably know what they’re talking about:
    A competent privacy practitioner will tell you
    • There is no magic spell; time, effort, decision-making and resources will be required to adapt to GDPR requirements
    • There is no single tool, audit framework, self-assessment template, cut-n-paste policy or off-the-shelf training module that will make you “compliant”. You need to address systems, process AND culture at all layers and contexts.
    • Records management is just as significant as infosec (if not more so)
• It’s not about paperwork – it’s about upholding fundamental human rights and freedoms (OK, that last one might be a step too far for many DP pros, but it is significant both to the intent and the implementation of GDPR.)

     

    A few more handy tips for your Privacy Team lineup
    Domain-specific knowledge is vital and valuable – but remember that specialists specialise, and so it is unlikely that someone who has only ever worked in one area of information governance (e.g. information security, records management) or context (HR, marketing, sales) will be able to address all of your GDPR needs.
    The same consideration applies for lawyers – commercial, contract and general counsel-type lawyers are probably not as familiar with privacy law as with their own areas of expertise.

     

    In summary, to find good GDPR advice, you should:
    • Get a rounded view
    • Consider risks to individuals’ privacy not just organisational impact
    • Instil and maintain privacy-aware culture and practices
    • Be deeply suspicious of any/all claims of one-stop/universal fixes

    Just Culture 2: Risky Behaviour

    Previously, I’ve introduced the concept of the “just culture” and explained the basic principle. In this blog post I will look at the types of behaviour that give rise to incidents and how, in a just culture, these would be addressed.

    Hands up if you’ve ever done any of the following:

    • Politely held the door to your office open for a stranger without asking to see their ID
    • Re-used a password
    • Emailed work to your personal account to work on outside the office
    • Mis-addressed an email, or mistakenly used CC rather than BCC

Did it seem like a good idea at the time? (You can lower your hands now, by the way.) Perhaps you were under pressure to get work done to a deadline, or maybe you couldn’t afford the cognitive effort of considering security policies at the time. These types of “incidents” occur every day, all over the place, and in most cases they do not result in disaster – but one day, they could… and unfortunately, in most corporate cultures the blame will rest on the person who didn’t follow the policies.

In a just culture, blame is not appropriate and punishment is reserved only for a minority of behaviours – those driven by malicious intent or deliberate and knowing recklessness. None of the activities listed above really fall into that category, and so even if they did result in major data leakage, disruption or loss, they should not be met with punitive action – especially if everyone else is doing the same but getting away with it. The sysadmin who runs a private file-sharing server on the corporate network, or the manager who illegally snoops on their staff’s emails, should be punished – not those who are just trying to get on with their jobs.

    Most incidents arise from “risky behaviour” rather than malice or knowing recklessness. Risky behaviour falls into two main categories:

    1. Genuine error (see http://missinfogeek.net/human-error/ for some further thoughts on that) – such as mis-typing a name, confusing two similar-looking people, being taken in by a highly-convincing well-crafted scam site or email or unknowingly putting your security pass in the pocket that has a hole in the bottom
    2. Underestimation or low prioritisation of the risks (perhaps due to conflicting imperatives – e.g. time pressure, budget constraints, performance goals) – this is where most risky behaviour occurs.

    These behaviours should not be treated the same way, for that would be unjust.

    In the case of 1), the appropriate response is consolation and a review of controls to identify whether there are any areas which could benefit from additional ‘sanity checks’ without making it too difficult for people to get their jobs done. Humans are imperfect and any system or process that relies on 100% human accuracy is doomed to fail – this is a design fault, not the fault of the errant.

    The second type of behaviour is more challenging to mitigate, especially since human beings are generally rubbish at assessing risk on the fly. Add in cognitive dissonance, conflicting priorities and ego and you end up with the greatest challenge of the just culture!
    Explaining the reason that the behaviour is risky, pointing out the correct approach and issuing a friendly warning not to do it again (OR ELSE) is the appropriate and fair response.

    So how in general should risky behaviour be prevented? Education is the foundation here – not just a single half-hour e-learning module once a year, but frequent and engaging discussion of infosec risks using real-life anecdotes, analogies, humour and encouraging input from all.

    On top of the education programme; there needs to be a candid look at business process, systems, procedure and tools – are they set up to make risky behaviour the path of least resistance or do they encourage careful thought and good habits?

Monitoring and correcting behaviour comes next, and it is critical that this be done impartially and with as much vigour at senior levels as for front-line and junior staff. If the C-suite can flout policy with impunity then not only will you struggle to achieve a successful just culture, but you also have a gaping big hole in your security defences.

    A just culture relies on robust procedures, a series of corrective nudges and above all, consistency of responses in order to be effective. Far too often, individuals are thrown to the wolves for simply getting unlucky – forced to use non-intuitive or badly-configured systems, under pressure from management above, with inadequate resources and insufficient training, they cut the same corners as they see everyone else doing – and pay the price of the organisation’s failures.

    Next time: building a just culture in a pit of snakes*

    *something like that, anyway

    ‘Just Culture’: an introduction

As I noted in last week’s blog post, the phrase “human error” covers a lot of ground and fails to distinguish the causes of errors from each other; it is thus not terribly helpful in incident analysis, being little more than a generic statement that “something happened that wasn’t supposed to”.

    The “something” may cover a number of scenarios, behaviours and motivations but to unpick an incident and protect against further occurrences, the conditions and actions do need to be examined, because it is those which determine the appropriate response. This is where a “Just Culture” comes in.

    For those of you not familiar with the phrase, the term “Just Culture” arose from the work on aviation safety by Professor James Reason in the late 90s and early 00s. Professor Reason recognised that fear of a punitive reaction to human error is likely to discourage reporting of incidents, whereas it would be more advantageous to foster  “an atmosphere of trust in which those who provide essential safety-related information are encouraged and even rewarded, but in which people are clear about where the line is drawn between acceptable and unacceptable behaviour.”

There is much written about the principles and practices of a Just Culture, which has been adopted in many safety-conscious industries including transport, construction and healthcare, and I will refrain from regurgitating it here (if you’re interested, see the links at the end). My purpose here is to have a bit of a moan about how far the information security industry has lagged behind in adopting a similar position, and to say that, personally, I think it’s time we caught up.

When individuals fail to report information security risks and incidents – whether for fear of ‘getting into trouble’, out of apathetic resignation to broken systems and processes, or simply because they don’t recognise a problem when it arises – those risks and incidents will not be managed, increasing the likelihood that they will accumulate to the point of causing serious damage or disruption.

Security policies and procedures are routinely breached for various reasons – they fail to reflect the needs and risk appetite of an organisation, they are difficult to find or to understand, or they demand a higher level of technological capability than the organisation can muster. If these breaches are only ever identified when the consequences are adverse, and the outcome is that individuals are punished for being ‘caught out’ doing what they see everyone else doing, then – human nature being what it is – more effort will go into concealing instances of policy breach than into rectifying the core problems that cause the policy to be breached, and breaches will continue to occur.

    However, simply enforcing reporting of breaches and incidents won’t, on its own, result in any meaningful change if the root causes of incidents aren’t analysed and treated. In my next blog post I will look a bit deeper into the analysis of incident causes and the behaviours that contribute to their occurrence.

    References:

    “Just Culture: A Debrief” https://www.tc.gc.ca/eng/civilaviation/publications/tp185-3-2012-6286.htm

    “Just Culture” http://www.eurocontrol.int/articles/just-culture

    “Patient Safety and the  Just Culture” https://psnet.ahrq.gov/resources/resource/1582

    “Just Culture” Sidney Dekker: http://sidneydekker.com/just-culture/

    Human Error

    To err is human…..to forgive, divine..

    …(but to really screw things up, you need a computer….!)

One can’t help noticing a recurring theme in the spate of data breach news reports these days. The phrase “human error” is coming up an awful lot. I’d like to take a closer look at just what that phrase means, and whether it is a helpful description at all.

    What do you think when you hear that something happened due to a “human error”? Do you think “aww, the poor person that made a mistake; how awful for them, I hope someone gives them a hug, a cup of tea and consolation that humans are fallible frail creatures who can’t be expected to get stuff right all the time” or do you – like me – think to yourself “h’mm, what this means is that something went wrong and that humans were involved. I wonder whether systems, processes and training were designed to robustly identify and mitigate risks, whether management support and provision of resources were adequate and whether this is just a case of someone getting unlucky while dodging around policies in a commonly-accepted and laxly-monitored way”

Premise: I fully believe that the statement “the breach was down to human error” is a total cop-out.

    Why?

    Let’s start with “error”. The dictionary definition says:

    1. A mistake
    2. The state or condition of being wrong in conduct or judgement
    3. A measure of the estimated difference between the observed or calculated value of a quantity and its true value

    The first definition is probably the one that is called to mind most often when an occurrence is described as an “error”. Mistakes are common and unavoidable, everyone knows that. I believe that the phrase “human error” is used consciously and cynically to create the perception that information incidents are freak occurrences of nature (rather like hiccups or lightning) about which it would be churlish and unkind to take umbrage; and unreasonable to demand better.

But in my humble and personal opinion (based on nothing more than anecdote and observation), the perception thus created is a false one – in fact, breaches that occur solely as a result of genuine mistakes are rare. Even if an “oops” moment was the tipping-point, the circumstances that allowed the breach to take place are just as significant – and usually indicate a wider systemic failure of risk management which could – and should – have been done better.

    Risky behaviour that leads to a breach though, is not usually a sincere mistake – it is either a calculated decision of the odds, a failure to understand the risk or ignorance of the possibility that a risk exists. Risky behaviour is *not* an unavoidable whim of Mother Universe (setting aside the philosophical implications, otherwise we’ll be here all day), but the output of a deliberate act or decision. We should not regard ‘risky behaviour which led to a realisation of the risk and unwanted consequences’ in the same way that we do ‘inadvertent screwup due to human frailty’ and to lump them together under the same heading of “human error” does a disservice to us all, by blurring the lines between what is forgivable and what we should be demanding improvements to.

    The human bit

Since we’re not yet at the stage of having autonomous, conscious Artificial Intelligence, it must follow that errors arising from any human endeavour will always be “human errors”. Humans design systems, they deploy them, they use (and misuse) them. Humans are firmly in the driving seat (discounting for the moment that, based on the evidence so far, the driver is reckless, probably intoxicated, has no concept of risk management and is probably trying to run over an ex-spouse without making it look obviously like a crime). So whether an information security or privacy breach is intentional, inadvertent or a state in which someone got caught out doing something dodgy, describing the cause as “human error” is rather tautological and – as I’ve noted above – potentially misleading.

I believe that the phrase “human error” is a technically-accurate but wholly uninformative description of what is much more likely to be better described as human recklessness, human negligence, human short-sightedness, human malice or simple human incompetence. Of course, no organisation is going to hold their hands up in public to any of that, so they deploy meaningless platitudes (such as “we take data protection very seriously” – that’s a diatribe for another day), of which “the breach occurred due to human error” is one.

Take, for example, the common ‘puts all addresses in the To: field of an email instead of BCC’ screwup which was the cause of an NHS Trust being issued with a Civil Monetary Penalty after the Dean Street clinic incident in 2015. Maybe the insertion of the email addresses into the wrong field was down to the human operator being distracted, working at breakneck speed to get stuff done, being under stress or simply being blissfully unaware of the requirements of data protection law and email etiquette. But they should not carry all of the culpability for this incident – where was the training? Where were the adequate resources to do all the work that needs to be done in the time available? Most of all, where the hell was the professional bulk-emailing platform which would have obfuscated all recipient emails by default and would therefore have been a much more suitable mechanism for sending out a patient newsletter? (Provided, of course, that the supplier was carefully chosen, UK-based, tied to appropriate Data Processor contract clauses and monitored for compliance… etc etc.) The management would seem to have a lot more to answer for than the individual who sent the email out.
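
(For the avoidance of doubt about what “obfuscated by default” means in practice, here’s a minimal sketch of sending a newsletter so that each recipient sees only their own address. The host name and addresses are placeholders of my own invention – this has nothing to do with the Trust’s actual systems – but it shows that the safe behaviour is not exactly rocket science.)

```python
import smtplib
from email.message import EmailMessage

# Hypothetical settings, for illustration only.
SMTP_HOST = "smtp.example.org"
SENDER = "newsletter@example.org"
RECIPIENTS = ["alice@example.com", "bob@example.com"]  # in real life, loaded from a managed list

def send_newsletter(subject: str, body: str) -> None:
    """Send one message per recipient so no one ever sees anyone else's address.
    (Putting the whole list in To: or Cc: is exactly the mistake described above.)"""
    with smtplib.SMTP(SMTP_HOST) as smtp:
        for recipient in RECIPIENTS:
            msg = EmailMessage()
            msg["From"] = SENDER
            msg["To"] = recipient           # only this one recipient is visible
            msg["Subject"] = subject
            msg.set_content(body)
            smtp.send_message(msg)

if __name__ == "__main__":
    send_newsletter("Clinic newsletter", "This month's update...")
```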

So the next time you read of a data breach, privacy abuse or, in fact, any other type of incident at all, and see the phrase “human error”, stop and ask yourself: “What was the error?” Was it lack of appropriate training for staff? Cutting corners to cut costs? Failure to provide the appropriate tools for the job? Mismatch between the outputs demanded and the resources provided to deliver them? None of these are inevitable Acts of Nature, the way that occasional “Oops” moments would be.

And as long as organisations are allowed to hide behind the illusion of unavoidability, they are that much less likely to tackle the real problems.

    StalkerChimps

    This morning, I was spending my leisure time researching options for email newsletters. Just to be clear, this isn’t something I would necessarily choose to do for fun, but is linked to my role as Digital Officer for a certain professional association for information rights professionals.

    All of the reviews I read seem to hold MailChimp up as cost-effective, easy to use and feature-rich. “Great”, I thought and then the privacy nerd in me started muttering….I wasn’t surprised to see that MailChimp are a US company, as their inability to spell common words such as “realise” and “harbour” had already clued me up to this, but that doesn’t necessarily present an insurmountable data protection problem for a UK organisation looking to use their services (setting aside the current kerfuffle about Safe Harbour/Privacy Seal/NSA etc etc). I thought as a prospective customer of their services, I’d check out the privacy policy (nothing more embarrassing than accidentally using personal data unfairly or unlawfully when you’re acting as a professional organisation for privacy enthusiasts…..).

    And I found this:

(For the record, the annotations are mine.)

    Which basically translates to:

    “We are going to follow you all over the web, conducting surveillance on you without telling you and then use what we have discovered to try and predict the best ways to manipulate you in order to make money for our customers, clients and suppliers.”

    Oh yeah, and there’s also this: “As you use our Services, you may import into our system personal information you’ve collected from your Subscribers. We have no direct relationship with your Subscribers, and you’re responsible for making sure you have the appropriate permission for us to collect and process information about those individuals. We may transfer personal information to companies that help us provide our Services (“Service Providers.”) All Service Providers enter into a contract with us that protects personal data and restricts their use of any personal data in line with this policy. As part of our Services, we may use and incorporate into features information you’ve provided or we’ve collected about Subscribers as Aggregate Information. We may share this Aggregate Information, including Subscriber email addresses, with third parties in line with the approved uses in Section 6.[screenshot]”

    Now, I have most definitely had emails from businesses that I’ve used in the past, which – upon unsubscribing – I have discovered are using MailChimp. No-one has ever told me that when I gave my email address to them, they would pass it on to a US company who would then use it for stalking and profiling me. Well, hur-hur, it’s the Internet, what did I expect?

    Wait. Being “on the internet” does not mean “no laws apply”. And in the UK, for UK-registered organisations, the UK Data Protection Act does most certainly apply. You cannot contract out of your organisation’s responsibilities under DPA. Now, for those of you reading this who aren’t DP geeks (Hi, nice to see you, the party’s just getting started!), here’s a breakdown of why I think using MailChimp might be a problem for UK organisations….

The UK Data Protection Act has 8 Principles, the first of which is that “personal data shall be processed fairly and lawfully”. Part of “fair and lawful” is that you must be transparent about your use of personal data, and you mustn’t breach any of the Principles, commit any of the offences or use the data for activity which is otherwise inherently unlawful (like scams and fraud, for example). One key requirement of being “fair and lawful” is using a Fair Processing Statement (a.k.a. “Privacy Notice“) to tell people what you are doing with their data. This needs to include any activity which they wouldn’t reasonably expect – and I would think that having all of your online activity hoovered up and used to work out how best to manipulate you would fit squarely into that category. Or am I just old-fashioned?

    Anyway, using MailChimp for email marketing if you don’t tell people what that implies for their privacy? Fail No.1.

    Then there’s the small matter of MailChimp’s role in this relationship. Under DPA, we have Data Controllers and Data Processors. For the sake of user-friendliness, let’s call them respectively “Boss” and “Bitch”. The organisation that is the Boss gets to make the decisions about why and how personal data is used. The organisation that is the Bitch can only do what the Boss tells them. The terms of how the Boss-Bitch relationship works needs to be set out in a contract. If the Bitch screws up and breaches privacy law, the Boss takes the flak, so the Boss should put strict limitations on what the Bitch is allowed to do on their behalf.

Now, I haven’t seen the Ts and Cs that MailChimp are using, or whether there is any mention of Data Controller/Data Processor relationships, but I doubt very much whether they could be considered a proper Bitch, because they use a lot of subscriber data for their own ends, not just those of the organisation on whose behalf they are sending out emails. So if MailChimp aren’t a Bitch, then they are their own Boss – and so giving personal data to them isn’t the equivalent of using an agency for an in-house operation, it’s actually disclosure of the information to a third party to use for their own purposes (which may not be compatible with the purposes you originally gathered the data for). Now, one of the things you’re supposed to tell people in a privacy notice is whether you are going to disclose their data, what for, and to whom. You’re also not supposed to re-purpose it without permission. Oops again (Fail No. 2).

    I’m gonna skirt past the 8th Principle (don’t send data overseas without proper protection), because there’s just so much going on at the moment about the implications of sending data to the US, we’ll be here for hours if I get into that. Suffice to say, if the Data Controller (Boss) is a US firm, you have no rights to visibility of your data, control over its accuracy, use, security or anything else (Principles 2-7). None. Kthxbye. That might be fine with you, but unless you are informed upfront, the choice of whether or not to engage with the organisation that’s throwing your data over the pond to be mercilessly exploited, is taken away from you. Not fair. Not lawful. Fail No.3.

Aaaaand finally (for this post, anyway) there’s the PECR problem. Simplified: PECR is the law that regulates email marketing, one of the requirements of which is that marketing by email, by SMS and to TPS-registered recipients requires prior consent – i.e. you can’t assume they want to receive it, you must ask permission. It does, however, contain a kind of loophole: if you have bought goods or services from an organisation, they are allowed to use email marketing to tell you about similar goods and services that you might be interested in (until you tell them to stop, then they can’t any more). This means that where the soft opt-in applies, you can send people email marketing without their prior consent (it’s a bit more complicated than that, but this isn’t a PECR masterclass – more info here if you’re interested).

However, PECR doesn’t cancel out the DPA, contradict it or over-ride it. You must comply with both. And this means that any company relying on the soft opt-in to send email marketing via MailChimp is almost certainly in breach of the Data Protection Act, unless at the time they collected your email address they very clearly a) stated that they would use it for email marketing purposes and b) obtained your permission to pass it to MailChimp to use for a whole bunch of other stuff. Ever seen anything like that? Nope, me neither. Fail No. 4.

So how come this is so widespread and no-one has sounded the alarm? Well, based on my observations, here are some reasons:

1. No-one reads terms and conditions unless they are corporate lawyers. Even if the Ts and Cs were read and alarm bells were rung, chances are that the Marketing department or CEO will have a different idea of risk appetite and insist on going ahead with the shiny (but potentially unlawful) option anyway.
2. By and large, very few organisations in the UK actually ‘get’ the Data Protection Act and their responsibilities under it. They also don’t really want to pay for DP expertise, since it will undoubtedly open a can of worms that will cost money to fix and cause extra work for everyone. Much easier to take the ostrich approach and rely on the fact that….
    3. …the vast majority of UK citizens don’t understand or care about data protection either. Sometimes there is a gleam of interest when the word “compensation” pops up, but mostly they see it as a hurdle to be sneaked around rather than a leash on a snarling mongoose. Every now and again there is a spurt of outrage as another major breach is uncovered, but these are so common that “breach fatigue” has set in.
    4. Data-trading makes money, and ripping off people’s data/spying on them without giving them a choice/share of the cut/chance to behave differently makes more money than acting fairly and ethically.
5. Fundamental cultural differences between the US and the EU’s approach to privacy. If you read this blog post by MailChimp’s General Counsel/Chief Privacy Officer, the focus is mostly on data security and disclosure to law enforcement. There’s little about the impact on personal autonomy, freedom of action or the principles of fairness that EU privacy law is based on. Perhaps that’s because most of that stuff is in the US Constitution and doesn’t need restating in privacy law. Maybe it’s because the EU has had a different experience of what happens when privacy is eroded. Maybe he ran out of time/steam/coffee before getting into all that.

    Anyway, if you got this far, thanks for reading – I hope there’s food for thought there. I’m not advocating that anyone boycott MailChimp or anything like that – but if you’re gonna use them, you should consult a data protection expert to find out how to protect a) your organisation b) your customers and c) the rest of us.

    Right, back to web design research it is……