Tag: incident-response

Nothing to see here…

I read today in Infosecurity Magazine that the law firm Appleby (whose tax-sheltering habits are currently splattered all over the news, thanks to a massive leak of internal data) has claimed that a) the attack was apparently a sophisticated, professional-grade hack and b) there was no evidence of data having left their systems.

I laughed out loud.

Apparently, a team of professional computer forensics geeks have been unable to identify how the data was exfiltrated. Fair enough, actually; it’s entirely possible that Appleby had no access controls or security logging in place (this is very common, since such things require time, money, effort and thought to set up, and corporate enthusiasm for that sort of thing is usually pretty scarce), and so there was simply no breadcrumb trail to follow. This has led them to conclude that a devilishly clever outside actor was responsible rather than a leak from some git on the inside. *Sceptical face* – it’s far more likely that an intrusion would leave traces than that an internal misuse of privileged access would. (I guess their insurance covers being hacked but not being stitched up by one’s own workforce #cynicalsmirk)

But wait a minute… a lack of evidence that data was exfiltrated clearly does not mean that no data was exfiltrated. The data has been passed to a variety of media outlets; it has definitely escaped somehow.

This is an important point – how often, after a reported data leak/loss/hack/etc, have we heard a statement from the organisation affected that they have “no evidence” that any data was exposed, misused or extracted? (Rhetorical question; they all say that.) The absence of evidence is not evidence of absence, and such claims should be taken to mean only that the organisation has limited information as to what really happened to the data. No-one should take reassurance from an open declaration of cluelessness.

The other point, about the sophistication of the tactics used to nab the data, is that everyone claims that every information security breach is a sophisticated attack – even when most of them turn out to be the work of teenagers operating from their bedrooms, or result from an unwittingly obliging senior exec clicking on the wrong link or email attachment. I’m not saying that this particular depth charge wasn’t a high-tech, military-grade IT Ninja attack… only that such things are awfully rare, and largely unnecessary thanks to the laxity of infosec controls in most places.

Anyway, if I were wealthy enough to make using offshore tax avoidance schemes worthwhile, I would probably demand a full infosec audit report from any law firm I was considering handing my data over to…

Verelox, insider threat and GDPR implications

If you haven’t heard about Verelox, they are a Dutch cloud hosting provider who’ve recently been wiped off the internet (along with all of the customers hosting with them) by what is reported to be an attack by an ex-sysadmin, who wiped customer data and servers.

I’ve been seeing tweets and discussions on tech and infosec forums, some of which have queried whether this circumstance would be a breach under GDPR for which regulatory penalties could be enforced. Whether this incident represents a failure by Verelox to meet the requirements of GDPR is going to depend on many details which are not currently available. However, as a former infosec professional now turned to privacy, I’d be inclined, if asked, to give the standard Data Protection Officer answer: “It depends”. Because it does.

The GDPR requires that organisations take “appropriate technical and organisational measures” to manage risks to the rights and freedoms of individuals whose data is being processed (Article 24.1) and specifically, to protect the confidentiality and integrity of personal data in proportion to the risks to the individual and the capabilities of available technology (Article 32.1).

In this case, it is very likely that Verelox will be a Data Processor rather than a Data Controller for any personal data that was stored, hosted or collected on their cloud platform, since they were providing infrastructure only and not making any decisions about how people’s information would be used. However, GDPR does bring in Data Processor joint liability for data breaches (defined as “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed” (Article 4.12)) and places explicit obligations on Data Processors as well as Data Controllers to “ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services” (Article 32.1(b)). Interestingly, the right to compensation (Article 82) does not specify “natural persons” in the way that the definition of personal data does, which may leave the door open for Verelox’s customers to make claims under GDPR rather than contract law to recover some of their losses arising from the incident. I’m not familiar with Dutch law, so I’ll leave that in the realms of speculation for the moment. What GDPR does appear to say is that Verelox could potentially be jointly liable with their customers for claims for damages from individuals as a result of this incident. Whether they are actually culpable is something that will need careful consideration, and this is where I put my infosec hat back on for a while…

Does the fact that this happened therefore mean Verelox’s measures were not appropriate? Well, again the answer is going to be “It depends”. Based on the information available in news reports at the moment, this seems to be a rare and extreme case of a malicious insider with a grudge acting independently and outside the law. Should the company be held responsible for this?

One of the factors to consider will be whether this damage was done while the individual was still an insider (i.e. employed as a systems administrator) or after they had left the role. If the attack was carried out post-employment, there is a possibility that Verelox dropped the ball: the individual should have had their access revoked as soon as their employment came to an end, and in such a way that it would be difficult to trigger such a meltdown from the outside, so the “technical and organisational measures” Verelox had in place may not have been “appropriate”. Questions that should be asked are:

  • was there a standard procedure for revoking leavers’ access in a timely manner;
  • was that procedure followed in this particular case;
  • was there a culture of adherence to security procedures in general?

If the answer to any of these questions is “no” then Verelox might be in for a difficult time ahead.
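The first of those questions is the kind of thing that can be checked mechanically rather than on trust. As a minimal sketch (every username, date and record format here is hypothetical, purely for illustration), an organisation could periodically cross-check its HR leavers list against the accounts that are still active:

```python
from datetime import date

# Hypothetical HR record of leavers and their employment end dates.
hr_leavers = {
    "j.smith": date(2017, 5, 1),
    "a.jones": date(2017, 6, 2),
}

# Hypothetical snapshot of accounts still active on the systems.
active_accounts = {
    "j.smith": {"privileged": True},
    "b.brown": {"privileged": False},
}

def overdue_revocations(leavers, accounts, today):
    """Return leavers whose accounts are still active after their end date."""
    return sorted(
        user for user, end_date in leavers.items()
        if user in accounts and today > end_date
    )

print(overdue_revocations(hr_leavers, active_accounts, date(2017, 6, 9)))
# prints ['j.smith']
```

A nightly run of something like this, with any non-empty result treated as an incident, is one cheap way to demonstrate that a leaver-revocation procedure is not just written down but actually followed.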

If the attack was planned and set in motion while the individual was an insider, could or should pre-employment vetting or line management support procedures have identified the possibility? This one is tricky, as any predictive measure of human behaviour is never going to be 100% accurate on an individual level. Previous, similar shenanigans carried out by a prospective or current employee could be an indicator of a higher risk of future shenanigans occurring, but that really depends on the person and the circumstances. No record of any previous shenanigans may mean that this person has done it before but was never caught, that this person has never been in circumstances where this behaviour could be provoked, or simply that this person just wouldn’t do a thing like this in any circumstances. There’s just no way to tell in advance. Maybe this guy is a nutter with a tendency to react destructively when upset – but that doesn’t mean we should be advocating for mandatory psychological examinations of all employees who are to be trusted with privileged access, as that would be a grossly disproportionate invasion of privacy (and not necessarily accurate enough to be worth the effort, either…)

What about Disaster Recovery and Business Continuity Planning? Should these plans have included mitigation for this level of malicious damage by a privileged insider? Again, maybe – but it depends. Does malicious insider damage happen often enough to justify the expense, protocols and monitoring capability that would be required to prevent and detect this activity while managing both false positives and false negatives? While this sort of grudge-attack is always a possibility, it may make better business sense to develop, manage and support employees so that the chances of behaviour like this are reduced, rather than make the default assumption that everyone is a potential vandal or criminal and treat them accordingly. In any case, what organisation really has the resources and support available to maintain standby equipment and datastores in a way which makes them easy to fail over to in the event of an attack or disaster, but too difficult for an admin with a grudge to take out alongside the live system?

Hindsight is always 20/20-sharp, and there are plenty of armchair experts gleefully pontificating about what they think Verelox should have done better or differently. In the current absence of detailed information, though, there’s no reason to pay any attention to any of them. It’s easy to say “well, Verelox should have done x, y, z; they’re idiots for not doing it” but far harder to balance the management approach for predictable but unlikely risks. Paying attention to managing the risks that can be managed, in a proportionate way that doesn’t stop the business operating, is the fine line that infosec teams must walk, often in difficult conditions: mostly unappreciated, frequently facing opposition from people who don’t understand or have different views of the risks and dependencies, probably under-resourced and constantly firefighting. That seems to be the norm for most operational infosec roles. There are cases where all you can do is as much as you can: put quick-recovery plans in place and buy insurance against the things that you really have no control over (like some loony destroying your business operations out of pique). This may well be one of them.

TL;DR version – if Verelox can demonstrate that they took reasonable and appropriate precautions to mitigate the risk of this kind of attack, then they are unlikely to be subject to penalties or remedies under GDPR. However, if they can’t demonstrate that their measures were developed and maintained to be appropriate to the risks, then they may be subject to regulatory enforcement (unlikely) or civil claims (possible). Whether GDPR would be the appropriate instrument under which to bring such an action is not something I’m qualified to comment on.

Human Error

To err is human; to forgive, divine…

…(but to really screw things up, you need a computer!)

One can’t help noticing a recurring theme in the spate of data breach news reports these days: the phrase “human error” is coming up an awful lot. I’d like to take a closer look at just what that phrase means, and whether it is a helpful description at all.

What do you think when you hear that something happened due to “human error”? Do you think “aww, the poor person that made a mistake; how awful for them. I hope someone gives them a hug, a cup of tea and the consolation that humans are fallible, frail creatures who can’t be expected to get stuff right all the time”? Or do you – like me – think to yourself “h’mm, what this means is that something went wrong and that humans were involved. I wonder whether systems, processes and training were designed to robustly identify and mitigate risks, whether management support and provision of resources were adequate, and whether this is just a case of someone getting unlucky while dodging around policies in a commonly-accepted and laxly-monitored way”?

Premise: I fully believe that the statement “the breach was down to human error” is a total copout.


Let’s start with “error”. The dictionary definition says:

  1. A mistake
  2. The state or condition of being wrong in conduct or judgement
  3. A measure of the estimated difference between the observed or calculated value of a quantity and its true value

The first definition is probably the one that is called to mind most often when an occurrence is described as an “error”. Mistakes are common and unavoidable; everyone knows that. I believe that the phrase “human error” is used consciously and cynically to create the perception that information incidents are freak occurrences of nature (rather like hiccups or lightning), about which it would be churlish and unkind to take umbrage, and unreasonable to demand better.

But in my humble and personal opinion (based on nothing more than anecdote and observation), the perception thus created is a false one – in fact, breaches that occur solely as a result of genuine mistakes are rare. Even if an “oops” moment was the tipping-point, the circumstances that allowed the breach to take place are just as significant – and usually indicate a wider, systemic failure of risk management, which could – and should – have been done better.

Risky behaviour that leads to a breach, though, is not usually a sincere mistake: it is either a calculated weighing of the odds, a failure to understand the risk, or ignorance of the possibility that a risk exists. Risky behaviour is *not* an unavoidable whim of Mother Universe (setting aside the philosophical implications, otherwise we’ll be here all day), but the output of a deliberate act or decision. We should not regard ‘risky behaviour which led to a realisation of the risk and unwanted consequences’ in the same way that we do ‘inadvertent screwup due to human frailty’; to lump them together under the same heading of “human error” does a disservice to us all, by blurring the line between what is forgivable and what we should be demanding improvements to.

The human bit

Since we’re not yet at the stage of having autonomous, conscious Artificial Intelligence, it must follow that errors arising from any human endeavour will always be “human errors”. Humans design systems, they deploy them, they use (and misuse) them. Humans are firmly in the driving seat (discounting for the moment that, based on the evidence so far, the driver is reckless, probably intoxicated, has no concept of risk management and is probably trying to run over an ex-spouse without making it look obviously like a crime). So whether an information security or privacy breach is intentional, inadvertent or a state in which someone got caught out doing something dodgy, describing the cause as “human error” is rather tautological and – as I’ve noted above – potentially misleading.

I believe that the phrase “human error” is a technically-accurate but wholly uninformative description of what is much more likely to be human recklessness, human negligence, human short-sightedness, human malice or simple human incompetence. Of course, no organisation is going to hold their hands up in public to any of that, so they deploy meaningless platitudes (such as “we take data protection very seriously” – that’s a diatribe for another day!), of which “the breach occurred due to human error” is one.

Take, for example, the common ‘put all the addresses in the To: field of an email instead of BCC’ screwup, which was the cause of an NHS Trust being issued with a Civil Monetary Penalty after the Dean Street clinic incident in 2015. Maybe the insertion of the email addresses into the wrong field was down to the human operator being distracted, working at breakneck speed to get stuff done, being under stress, or simply being blissfully unaware of the requirements of data protection law and email etiquette. But they should not carry all of the culpability for this incident – where was the training? Where were the adequate resources to do all the work that needs to be done in the time available? Most of all, where the hell was the professional bulk-emailing platform which would have obfuscated all recipient emails by default and therefore been a much more suitable mechanism for sending out a patient newsletter? (Provided, of course, that the supplier was carefully chosen, UK-based, tied to appropriate Data Processor contract clauses and monitored for compliance… etc. etc.) The management would seem to have a lot more to answer for than the individual who sent the email out.
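For the technically curious, the reason BCC works is structural: blind-carbon-copy recipients are handed to the mail transport as envelope recipients and never written into the visible message headers. A minimal sketch in Python (the addresses and subject are made up for illustration; a real patient newsletter should still go through a proper bulk-mailing platform):

```python
from email.message import EmailMessage

# Hypothetical recipient list, for illustration only.
recipients = ["patient1@example.com", "patient2@example.com"]

msg = EmailMessage()
msg["From"] = "newsletter@clinic.example"
msg["To"] = "newsletter@clinic.example"  # visible field points back at the sender
msg["Subject"] = "Patient newsletter"
msg.set_content("Newsletter body goes here.")

# The recipient list never appears in the message itself; it is only handed
# to the transport as the envelope recipients, e.g.:
#   smtplib.SMTP("smtp.example").send_message(msg, to_addrs=recipients)

# No recipient address leaks into the headers or body:
assert all(addr not in msg.as_string() for addr in recipients)
```

The point is that a tool designed for the job makes the dangerous mistake structurally impossible, rather than relying on a stressed human picking the right field every single time.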

So the next time you read of a data breach, privacy abuse or, in fact, any other type of incident at all, and see the phrase “human error”, stop and ask yourself: “What was the error?” Was it a lack of appropriate training for staff? Cutting corners to cut costs? Failure to provide the appropriate tools for the job? A mismatch between the outputs demanded and the resources provided to deliver them? None of these are inevitable Acts of Nature, the way that occasional “oops” moments would be.

And as long as organisations are allowed to hide behind the illusion of unavoidability, they are unlikely to tackle the real problems.
