As I noted in last week’s blog post, the phrase “human error” covers a lot of ground and fails to distinguish between the causes of errors; it is not terribly helpful in incident analysis, amounting to little more than a generic statement that “something happened that wasn’t supposed to”.

That “something” may cover a number of scenarios, behaviours and motivations, but to unpick an incident and protect against further occurrences, the conditions and actions do need to be examined, because it is those which determine the appropriate response. This is where a “Just Culture” comes in.

For those of you not familiar with the phrase, the term “Just Culture” arose from the work on aviation safety by Professor James Reason in the late 90s and early 00s. Professor Reason recognised that fear of a punitive reaction to human error is likely to discourage the reporting of incidents, and that it would be more advantageous to foster “an atmosphere of trust in which those who provide essential safety-related information are encouraged and even rewarded, but in which people are clear about where the line is drawn between acceptable and unacceptable behaviour.”

There is much written about the principles and practices of a Just Culture, which has been adopted in many safety-conscious industries, including transport, construction and healthcare, and I will refrain from regurgitating it here (if you’re interested, see the links at the end). My purpose is to have a bit of a moan about how far the information security industry has lagged behind in adopting a similar position, because personally, I think it’s time we caught up.

When individuals fail to report information security risks and incidents, whether for fear of ‘getting into trouble’, out of apathetic resignation to broken systems and processes, or simply because they don’t recognise a problem when it arises, those risks and incidents will not be managed – increasing the likelihood that they will accumulate to the point of causing serious damage or disruption.

Security policies and procedures are routinely breached for various reasons: they fail to reflect the needs and risk appetite of an organisation, they are difficult to find or to understand, or they demand a higher level of technological capacity than the organisation can muster. If these breaches are only identified when the consequences are adverse – and the outcome is that individuals are punished for being ‘caught out’ doing what they see everyone else doing – then, human nature being what it is, more effort will go into concealing instances of policy breach than into rectifying the core problems that cause the policy to be breached, and breaches will continue to occur.

However, simply enforcing reporting of breaches and incidents won’t, on its own, result in any meaningful change if the root causes of incidents aren’t analysed and treated. In my next blog post I will look a bit deeper into the analysis of incident causes and the behaviours that contribute to their occurrence.

References:

“Just Culture: A Debrief”: https://www.tc.gc.ca/eng/civilaviation/publications/tp185-3-2012-6286.htm

“Just Culture”: http://www.eurocontrol.int/articles/just-culture

“Patient Safety and the Just Culture”: https://psnet.ahrq.gov/resources/resource/1582

“Just Culture”, Sidney Dekker: http://sidneydekker.com/just-culture/
