Previously, I introduced the concept of the “just culture” and explained its basic principles. In this blog post I will look at the types of behaviour that give rise to incidents and how, in a just culture, they would be addressed.

Hands up if you’ve ever done any of the following:

  • Politely held the door to your office open for a stranger without asking to see their ID
  • Re-used a password
  • Emailed work to your personal account to work on outside the office
  • Mis-addressed an email, or mistakenly used CC rather than BCC

Did it seem like a good idea at the time? (You can lower your hands now, by the way.) Perhaps you were under pressure to get work done to a deadline, or maybe you couldn’t afford the cognitive effort of considering security policies at the time. These types of “incident” occur every day, all over the place, and in most cases they do not result in disaster – but one day, they could… and unfortunately, in most corporate cultures the blame will rest on the person who didn’t follow the policies.

In a just culture, blame is not appropriate and punishment is reserved for only a minority of behaviours – those driven by malicious intent or deliberate, knowing recklessness. None of the activities listed above really falls into that category, so even if they did result in major data leakage, disruption or loss, they should not be met with punitive action – especially if everyone else is doing the same and getting away with it. The sysadmin who runs a private file-sharing server on the corporate network, or the manager who illegally snoops on their staff’s emails, should be punished – not those who are just trying to get on with their jobs.

Most incidents arise from “risky behaviour” rather than malice or knowing recklessness. Risky behaviour falls into two main categories:

  1. Genuine error (see https://missinfogeek.net/human-error/ for some further thoughts on that) – such as mis-typing a name, confusing two similar-looking people, being taken in by a highly convincing, well-crafted scam site or email, or unknowingly putting your security pass in the pocket that has a hole in the bottom
  2. Underestimation or low prioritisation of the risks (perhaps due to conflicting imperatives – e.g. time pressure, budget constraints, performance goals) – this is where most risky behaviour occurs.

These behaviours should not be treated the same way, for that would be unjust.

In the case of 1), the appropriate response is consolation and a review of controls to identify whether there are any areas which could benefit from additional ‘sanity checks’ without making it too difficult for people to get their jobs done. Humans are imperfect and any system or process that relies on 100% human accuracy is doomed to fail – this is a design fault, not the fault of the errant.

The second type of behaviour is more challenging to mitigate, especially since human beings are generally rubbish at assessing risk on the fly. Add in cognitive dissonance, conflicting priorities and ego and you end up with the greatest challenge of the just culture!
Explaining why the behaviour is risky, pointing out the correct approach and issuing a friendly warning not to do it again (OR ELSE) is the appropriate and fair response.
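The decision logic described above – sanction only for malice or knowing recklessness, consolation and a controls review for genuine error, coaching for under-estimated risk – can be sketched as a simple triage function. This is purely illustrative: the category names and response labels below are my own shorthand, not a formal just-culture standard.

```python
# A sketch of the just-culture response triage described above.
# Category and response names are illustrative, not an official taxonomy.

def just_culture_response(behaviour: str) -> str:
    """Map a behaviour category to the fair response outlined in the post."""
    responses = {
        # Malice or deliberate, knowing recklessness: the only cases for sanction.
        "malicious": "sanction",
        "knowingly_reckless": "sanction",
        # Genuine error: console the person and review controls for design faults.
        "genuine_error": "console and review controls",
        # Under-estimation or low prioritisation of risk: explain, coach, warn.
        "risky_underestimation": "explain the risk and coach",
    }
    # Anything unclassified deserves investigation before any response is chosen.
    return responses.get(behaviour, "investigate further before responding")

print(just_culture_response("genuine_error"))  # console and review controls
```

The point of the sketch is the asymmetry: only two of the branches ever lead to punishment, and the default is to withhold judgement until the behaviour is understood.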

So how, in general, should risky behaviour be prevented? Education is the foundation – not just a single half-hour e-learning module once a year, but frequent, engaging discussion of infosec risks using real-life anecdotes, analogies and humour, and encouraging input from all.

On top of the education programme, there needs to be a candid look at business processes, systems, procedures and tools – are they set up to make risky behaviour the path of least resistance, or do they encourage careful thought and good habits?

Monitoring and correcting behaviour comes next, and it is critical that this be done impartially and with as much vigour at senior levels as for front-line and junior staff. If the C-suite can flout policy with impunity then not only will you struggle to achieve a successful just culture, you will also have a gaping hole in your security defences.

A just culture relies on robust procedures, a series of corrective nudges and above all, consistency of responses in order to be effective. Far too often, individuals are thrown to the wolves for simply getting unlucky – forced to use non-intuitive or badly-configured systems, under pressure from management above, with inadequate resources and insufficient training, they cut the same corners as they see everyone else doing – and pay the price of the organisation’s failures.

Next time: building a just culture in a pit of snakes*

*something like that, anyway