Press "Enter" to skip to content


Verelox, insider threat and GDPR implications

If you haven’t heard about Verelox, they are a Dutch cloud hosting provider who’ve recently been wiped off the internet (along with all of the customers hosted with them) by what is reported to be an attack by an ex-sysadmin, who destroyed customer data and servers.

I’ve been seeing tweets and discussions on tech and infosec forums, some of which have queried whether this circumstance would be a breach under GDPR for which regulatory penalties could be enforced. The answer to whether this incident represents a failure of Verelox to meet the requirements of GDPR is going to depend on many details which are not currently available. However, as a former infosec professional now turned to privacy, I’d be inclined, if asked, to give the standard Data Protection Officer answer: “It depends”. Because it does.

The GDPR requires that organisations take “appropriate technical and organisational measures” to manage risks to the rights and freedoms of individuals whose data is being processed (Article 24.1) and specifically, to protect the confidentiality and integrity of personal data in proportion to the risks to the individual and the capabilities of available technology (Article 32.1).

In this case, it is very likely that Verelox will be a Data Processor rather than a Data Controller for any personal data that was stored/hosted/collected on their cloud platform, since they were providing infrastructure only and not making any decisions about how people’s information would be used. However, GDPR does bring in Data Processor joint liability for data breaches (Article 82), defined as “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed” (Article 4.12), and places explicit obligations on Data Processors as well as Data Controllers to “ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services” (Article 32.1). Interestingly, the right to compensation does not specify “natural persons” as the definition of personal data does, which may leave the door open for Verelox’s customers to make claims under GDPR rather than contract law to recover some of their losses arising from the incident. I’m not familiar with Dutch law, so I’ll leave that in the realms of speculation for the moment. What GDPR does appear to say is that Verelox could potentially be jointly liable with their customers for claims for damages from individuals as a result of this incident. Whether they are actually culpable is something that will need careful consideration, and this is where I put my infosec hat back on for a while…

Does the fact that this happened therefore mean Verelox’s measures were not appropriate? Well, again the answer is going to be “It depends”. Based on the information available in news reports at the moment, this seems to be a rare and extreme case of a malicious insider with a grudge acting independently and outside the law. Should the company be held responsible for this?

One of the factors to consider will be whether this damage was done while the individual was still an insider (i.e. employed as a systems administrator) or whether it happened after they had left the role. If the attack was carried out post-employment then there is a possibility that Verelox dropped the ball, since the individual should have had their access revoked as soon as their employment came to an end, and in such a way that it would be difficult to trigger such a meltdown from the outside; in that case, the “technical and organisational measures” Verelox had in place may not have been “appropriate”. Questions that should be asked are:

  • Was there a standard procedure for revoking leavers’ access in a timely manner?
  • Was that procedure followed in this particular case?
  • Was there a culture of adherence to security procedures in general?

If the answer to any of these questions is “no” then Verelox might be in for a difficult time ahead.
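
To make the first of those questions concrete: below is a minimal, purely illustrative sketch of the kind of control that supports timely revocation. It assumes a hypothetical HR leavers export and a dump of active accounts (none of the names, dates or privileges reflect anything known about Verelox); the point is the repeatable, auditable comparison between who should still have access and who actually does.

from datetime import date

# Hypothetical inputs for illustration only: in a real environment these would
# come from the HR system and the directory/IAM platform respectively.
hr_leavers = {
    "jdoe": date(2017, 5, 31),    # username -> last day of employment
    "asmith": date(2017, 6, 2),
}
active_accounts = {
    "jdoe": {"ssh_prod", "hypervisor_admin"},   # username -> privileges still assigned
    "bjones": {"ssh_prod"},
}

def overdue_revocations(leavers, accounts, today=None):
    """Return leavers whose accounts (and privileges) remain active past their leave date."""
    today = today or date.today()
    return {
        user: sorted(privs)
        for user, privs in accounts.items()
        if user in leavers and leavers[user] < today
    }

for user, privs in overdue_revocations(hr_leavers, active_accounts).items():
    print(f"ALERT: {user} has left but still holds: {', '.join(privs)}")

Run on a schedule and fed into alerting, a check like this turns “was the leaver procedure followed?” from a question asked after an incident into one that is answered continuously.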

If the attack was planned and set in motion while the individual was an insider, could or should pre-employment vetting or line management support procedures have identified the possibility? This one is tricky, as any predictive measure of human behaviour is never going to be 100% accurate on an individual level. Previous and similar shenanigans carried out by a prospective or current employee could be an indicator of higher risk of future shenanigans occurring, but that really depends on the person and the circumstances. No record of any previous shenanigans may mean that this person has done it before but was never caught, that this person has never been in circumstances where this behaviour could be provoked, or simply that this person just wouldn’t do a thing like this in any circumstances. There’s just no way to tell in advance. Maybe this guy is a nutter who has a tendency to react destructively when upset – but that doesn’t mean we should be advocating for mandatory psychological examinations of all employees who are to be trusted with privileged access; that would be a grossly disproportionate invasion of privacy (and not necessarily accurate enough to be worth the effort either…).

What about Disaster Recovery and Business Continuity Planning? Should these plans have included mitigation for this level of malicious damage by a privileged insider? Again, maybe – but it depends. Does malicious insider damage happen often enough to justify the expense, protocol and monitoring capability that would be required to prevent and detect this activity while managing both false positives and false negatives? While this sort of grudge-attack is always a possibility, it may make better business sense to develop, manage and support employees so that the chances of behaviour like this are reduced, rather than make the default assumption that everyone is a potential vandal or criminal and treat them accordingly. In any case, what organisation really has the resources and support available to maintain standby equipment and datastores in a way which makes them easy to fail over to in the event of an attack or disaster, but too difficult for an admin with a grudge to take out alongside the live system?
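
One partial answer to that last question is separation of duties: no single account should be able to destroy both the live platform and its backups. As a purely illustrative sketch (the role data below is entirely made up), this check flags any account that holds destructive rights over both.

# Illustrative separation-of-duties check: accounts able to wipe production
# should not also be able to delete or rotate the backup store.
production_admins = {"alice", "bob", "eve"}
backup_admins = {"carol", "eve"}

def single_points_of_destruction(prod_admins, bkp_admins):
    """Accounts that could take out the live system *and* its backups on their own."""
    return prod_admins & bkp_admins

overlap = single_points_of_destruction(production_admins, backup_admins)
if overlap:
    print("Separation-of-duties violation:", ", ".join(sorted(overlap)))
else:
    print("No single account can destroy both live systems and backups.")

It doesn’t make a grudge attack impossible, but it does mean one angry admin can’t take the recovery path down along with the production estate.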

Hindsight is always 20/20, and there are plenty of armchair experts gleefully pontificating about what they think Verelox should have done better or differently. In the current absence of detailed information, though, there’s no reason to pay any attention to any of them. It’s easy to say “well, Verelox should have done x, y, z; they’re idiots for not doing it” but far harder to balance the management approach for predictable but unlikely risks. Paying attention to managing the risks that can be managed, in a proportionate way that doesn’t stop the business operating, is the fine line that infosec teams must walk, often in difficult conditions: mostly unappreciated, frequently facing opposition from people who don’t understand or have different views of the risks and dependencies, probably under-resourced and constantly firefighting. That seems to be the norm for most operational infosec roles. There are cases where all you can do is put quick-recovery plans in place and buy insurance against the things that you really have no control over (like some loony destroying your business operations out of pique). This may well be one of them.

TL;DR version – if Verelox can demonstrate that they took reasonable and appropriate precautions to mitigate the risk of the attack, then they are unlikely to be subject to penalties or remedies under GDPR. However, if they can’t demonstrate that their measures were developed and maintained to be appropriate to the risks, then they may be subject to regulatory enforcement (unlikely) or civil claims (possible). Whether GDPR would be the appropriate instrument under which to bring an action is not something I’m qualified to comment on.

Just Culture 2: Risky Behaviour

Previously, I’ve introduced the concept of the “just culture” and explained the basic principle. In this blog post I will look at the types of behaviour that give rise to incidents and how, in a just culture, these would be addressed.

Hands up if you’ve ever done any of the following:

  • Politely held the door to your office open for a stranger without asking to see their ID
  • Re-used a password
  • Emailed work to your personal account to work on outside the office
  • Mis-addressed an email, or mistakenly used CC rather than BCC

Did it seem like a good idea at the time? (You can lower your hands now, by the way) Perhaps you were under pressure to get work done to a deadline, or maybe you couldn’t afford the cognitive effort of considering security policies at the time. These types of “incidents” occur every day, all over the place and in most cases they do not result in disaster – but one day, they could…and unfortunately, in most corporate cultures the blame will rest on the person who didn’t follow the policies.

In a just culture, blame is not appropriate and punishment is reserved only for a minority of behaviours – those which were driven by malicious intent or deliberate and knowing recklessness. None of the activities listed above really fall into that category, so even if they did result in major data leakage, disruption or loss, they should not be met with punitive action – especially if everyone else is doing the same but getting away with it. The sysadmin who runs a private file-sharing server on the corporate network, or the manager who illegally snoops on their staff’s emails, should be punished – not those who are just trying to get on with their jobs.

Most incidents arise from “risky behaviour” rather than malice or knowing recklessness. Risky behaviour falls into two main categories:

  1. Genuine error (see http://missinfogeek.net/human-error/ for some further thoughts on that) – such as mis-typing a name, confusing two similar-looking people, being taken in by a highly-convincing, well-crafted scam site or email, or unknowingly putting your security pass in the pocket that has a hole in the bottom.
  2. Underestimation or low prioritisation of the risks (perhaps due to conflicting imperatives – e.g. time pressure, budget constraints, performance goals) – this is where most risky behaviour occurs.

These behaviours should not be treated the same way, for that would be unjust.

In the case of 1), the appropriate response is consolation and a review of controls to identify whether there are any areas which could benefit from additional ‘sanity checks’ without making it too difficult for people to get their jobs done. Humans are imperfect and any system or process that relies on 100% human accuracy is doomed to fail – this is a design fault, not the fault of the errant.

The second type of behaviour is more challenging to mitigate, especially since human beings are generally rubbish at assessing risk on the fly. Add in cognitive dissonance, conflicting priorities and ego and you end up with the greatest challenge of the just culture!

Explaining the reason that the behaviour is risky, pointing out the correct approach and issuing a friendly warning not to do it again (OR ELSE) is the appropriate and fair response.

So how in general should risky behaviour be prevented? Education is the foundation here – not just a single half-hour e-learning module once a year, but frequent and engaging discussion of infosec risks using real-life anecdotes, analogies, humour and encouraging input from all.

On top of the education programme, there needs to be a candid look at business processes, systems, procedures and tools – are they set up to make risky behaviour the path of least resistance, or do they encourage careful thought and good habits?

Monitoring and correcting behaviour comes next, and it is critical that this be done impartially and with as much vigour at senior levels as for front-line and junior staff. If the C-suite can flout policy with impunity then not only will you struggle to achieve a successful just culture, but you also have a gaping big hole in your security defences.

A just culture relies on robust procedures, a series of corrective nudges and above all, consistency of responses in order to be effective. Far too often, individuals are thrown to the wolves for simply getting unlucky – forced to use non-intuitive or badly-configured systems, under pressure from management above, with inadequate resources and insufficient training, they cut the same corners as they see everyone else doing – and pay the price of the organisation’s failures.

Next time: building a just culture in a pit of snakes*

*something like that, anyway

‘Just Culture’: an introduction

As I noted in last week’s blog post, the phrase “human error” covers a lot of ground and fails to distinguish the causes of errors from each other; it is thus not terribly helpful in incident analysis, amounting to a generic statement that “something happened that wasn’t supposed to”.

The “something” may cover a number of scenarios, behaviours and motivations but to unpick an incident and protect against further occurrences, the conditions and actions do need to be examined, because it is those which determine the appropriate response. This is where a “Just Culture” comes in.

For those of you not familiar with the phrase, the term “Just Culture” arose from the work on aviation safety by Professor James Reason in the late 90s and early 00s. Professor Reason recognised that fear of a punitive reaction to human error is likely to discourage reporting of incidents, whereas it would be more advantageous to foster “an atmosphere of trust in which those who provide essential safety-related information are encouraged and even rewarded, but in which people are clear about where the line is drawn between acceptable and unacceptable behaviour.”

There is much written about the principles and practices of a Just Culture, which has been adopted in many safety-conscious industries including transport, construction and healthcare, and I will refrain from regurgitating it here (if you’re interested, see the links at the end). My purpose here is to have a bit of a moan about how far the information security industry has lagged behind in adopting a similar position and why, personally, I think it’s time we caught up.

When individuals fail to report information security risks and incidents – whether for fear of ‘getting into trouble’, out of apathetic resignation to broken systems and processes, or simply because they don’t recognise a problem when it arises – those risks and incidents will not be managed, increasing the likelihood that they will accumulate to the point of causing serious damage or disruption.

Security policies and procedures are routinely breached for various reasons – they fail to reflect the needs and risk appetite of an organisation, they are difficult to find or to understand, or they demand a higher level of technological capacity than the organisation can muster. If these breaches are only identified when the consequences are adverse, and the outcome is that individuals are punished for being ‘caught out’ doing what they see everyone else doing, then, human nature being what it is, more effort will go into concealing instances of policy breach than into rectifying the core problems that cause the policy to be breached, and breaches will continue to occur.

However, simply enforcing reporting of breaches and incidents won’t, on its own, result in any meaningful change if the root causes of incidents aren’t analysed and treated. In my next blog post I will look a bit deeper into the analysis of incident causes and the behaviours that contribute to their occurrence.

References:

“Just Culture: A Debrief” https://www.tc.gc.ca/eng/civilaviation/publications/tp185-3-2012-6286.htm

“Just Culture” http://www.eurocontrol.int/articles/just-culture

“Patient Safety and the Just Culture” https://psnet.ahrq.gov/resources/resource/1582

“Just Culture”, Sidney Dekker: http://sidneydekker.com/just-culture/

Human Error

To err is human… to forgive, divine…

…(but to really screw things up, you need a computer…!)

One can’t help noticing a recurring theme in the spate of data breach news reports these days. The phrase “human error” is coming up an awful lot. I’d like to take a closer look at just what that phrase means, and whether it is a helpful description at all.

What do you think when you hear that something happened due to a “human error”? Do you think “aww, the poor person that made a mistake; how awful for them, I hope someone gives them a hug, a cup of tea and consolation that humans are fallible, frail creatures who can’t be expected to get stuff right all the time”? Or do you – like me – think to yourself “h’mm, what this means is that something went wrong and that humans were involved. I wonder whether systems, processes and training were designed to robustly identify and mitigate risks, whether management support and provision of resources were adequate, and whether this is just a case of someone getting unlucky while dodging around policies in a commonly-accepted and laxly-monitored way”?

Premise: I fully believe that the statement “the breach was down to human error” is a total copout.

Why?

Let’s start with “error”. The dictionary definition says:

  1. A mistake
  2. The state or condition of being wrong in conduct or judgement
  3. A measure of the estimated difference between the observed or calculated value of a quantity and its true value

The first definition is probably the one that is called to mind most often when an occurrence is described as an “error”. Mistakes are common and unavoidable, everyone knows that. I believe that the phrase “human error” is used consciously and cynically to create the perception that information incidents are freak occurrences of nature (rather like hiccups or lightning) about which it would be churlish and unkind to take umbrage; and unreasonable to demand better.

But in my humble and personal opinion (based on nothing more than anecdote and observation), the perception thus created is a false one – in fact, breaches that occur solely as a result of genuine mistakes are rare. Even if an “oops” moment was the tipping point, the circumstances that allowed the breach to take place are just as significant – and they usually indicate a wider systemic failure: risk management which could – and should – have been done better.

Risky behaviour that leads to a breach, though, is not usually a genuine mistake – it is either a calculated gamble on the odds, a failure to understand the risk, or ignorance of the possibility that a risk exists. Risky behaviour is *not* an unavoidable whim of Mother Universe (setting aside the philosophical implications, otherwise we’ll be here all day), but the output of a deliberate act or decision. We should not regard ‘risky behaviour which led to a realisation of the risk and unwanted consequences’ in the same way that we do ‘inadvertent screwup due to human frailty’; to lump them together under the same heading of “human error” does a disservice to us all, by blurring the lines between what is forgivable and what we should be demanding improvements to.

The human bit

Since we’re not yet at the stage of having autonomous, conscious Artificial Intelligence, it must follow that errors arising from any human endeavour will always be “human errors”. Humans design systems, they deploy them, they use (and misuse) them. Humans are firmly in the driving seat (discounting for the moment that, based on the evidence so far, the driver is reckless, probably intoxicated, has no concept of risk management and is probably trying to run over an ex-spouse without making it look obviously like a crime). So, whether an information security or privacy breach is intentional, inadvertent or a state in which someone got caught out doing something dodgy, describing the cause as “human error” is rather tautological and – as I’ve noted above – potentially misleading.

I believe that the phrase “human error” is a technically-accurate but wholly uninformative description of what is much more likely to be better described as human recklessness, human negligence, human short-sightedness, human malice or simple human incompetence. Of course, no organisation is going to hold their hands up in public to any of that, so they deploy meaningless platitudes (such as “we take data protection very seriously” – but that’s a diatribe for another day), of which “the breach occurred due to human error” is one.

Take, for example, the common ‘put all the addresses in the To: field of an email instead of BCC’ screwup, which was the cause of an NHS Trust being issued with a Civil Monetary Penalty after the Dean Street clinic incident in 2015. Maybe the insertion of the email addresses into the wrong field was down to the human operator being distracted, working at breakneck speed to get stuff done, being under stress or simply being blissfully unaware of the requirements of data protection law and email etiquette. But they should not carry all of the culpability for this incident – where was the training? Where were the adequate resources to do all the work that needs to be done in the time available? Most of all, where the hell was the professional bulk-emailing platform which would have obfuscated all recipient emails by default and would therefore have been a much more suitable mechanism for sending out a patient newsletter (provided, of course, that the supplier was carefully chosen, UK-based, tied to appropriate Data Processor contract clauses and monitored for compliance… etc etc)? The management would seem to have a lot more to answer for than the individual who sent the email out.
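
For illustration only – the SMTP host and addresses below are made up, and a properly contracted bulk-mailing platform would still be the better answer – the simplest technical mitigation is never to put the recipient list into a visible header at all, by sending each recipient their own copy:

import smtplib
from email.message import EmailMessage

SMTP_HOST = "smtp.example.org"          # hypothetical relay, not a real system
SENDER = "newsletter@example.org"

def send_individually(subject, body, recipients):
    """Send a separate copy to each recipient so nobody sees anyone else's address."""
    with smtplib.SMTP(SMTP_HOST) as smtp:
        for addr in recipients:
            msg = EmailMessage()
            msg["From"] = SENDER
            msg["To"] = addr            # only this one recipient appears in the headers
            msg["Subject"] = subject
            msg.set_content(body)
            smtp.send_message(msg)

# send_individually("Clinic newsletter", "This month's update...", ["a@example.com", "b@example.com"])

With an approach like this there is no To/BCC field to get wrong in the first place – which is exactly the kind of systemic fix that the “human error” framing conveniently obscures.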

So the next time you read of a data breach, privacy abuse or, in fact, any other type of incident at all, and see the phrase “human error”, stop and ask yourself: “What was the error?” Was it lack of appropriate training for staff? Cutting corners to cut costs? Failure to provide the appropriate tools for the job? A mismatch between the outputs demanded and the resources provided to deliver them? None of these are inevitable Acts of Nature, the way that occasional “oops” moments would be.

And the longer organisations are allowed to hide behind the illusion of unavoidability, the less likely they are to tackle the real problems.