The safety profession has an unhealthy fixation on measuring safety purely through negative values. OSHA recordables and lost-time injuries first spring to mind – both lagging and, I would suggest, negative indicators.
Once they occur, there is nothing that can be done but to investigate and, hopefully, learn enough to avoid similar incidents in the future. Even these metrics are flawed, however, in that a lack of injuries or incidents does not necessarily equate to a safe workplace. It could be a simple matter of being lucky. I will offer an example:
Two drivers are tasked with making a delivery. One drives recklessly, regularly exceeds the speed limit, and generally operates with tremendous risk. This driver arrives well ahead of the deadline and is rewarded both for increased productivity and for being "safe," since no incidents occurred. The second driver follows all of the rules and regulations set by the company and by the traffic jurisdictions along the route. This driver is cautious, drives defensively, and keeps both himself and those around him from harm to the extent that he can control it. He arrives promptly but well after the first driver. His supervisor, in a passing remark, tells the second driver he should be more like the first. As this continues, how do you believe the drivers will begin to respond?
Realizing that simply measuring lagging data in the form of incidents and injuries isn't enough, many companies have begun to adopt leading indicators – measures of the conditions, behaviors, or activities that show how the safety process is actually working. The most common are near miss reports and worksite observations. Near misses, however, are simply incidents that did not reach their full potential. I will offer another example:
Two workers, both working from a ladder, overextend themselves and fall. One worker breaks several bones and suffers a significant injury that keeps him from work for months. The other worker simply stands up, dusts himself off, and walks away with no injury at all.
The only difference is the unpredictable outcome, yet we use this outcome as an evaluation tool constantly! In this example, what was consistent was the set of leading indicators present before the event – indicators that could have been both observed and influenced proactively.
Going further up the leading chain are the observable inputs: the behaviors, conditions, and activities expected from a mature and effective safety process. However, many organizations squander and even misuse this data by relying too heavily on unsafe observations, typically collected through safety inspections or behavior-based systems. Measuring only failures is inherently flawed. Let me give you an example:
Your safety observation process only collects unsafe observations. You dig into your data from the past week and discover that one crew has five unsafe observations in ladder use and another crew has zero. Assuming both did similar work, which one was safer? Your first inclination is to claim the crew with zero unsafe observations. However, that claim rests on a HUGE assumption: that the crew was actually observed. An absence of unsafe observations could just as easily mean that no one was looking at all! OSHA has a philosophy: "If it is not documented, it didn't happen." Safe observations, then, serve as proof that an observation actually took place. We have traditionally found that there are many more safe observations than unsafe observations, with a common ratio of 36:1*.
Besides the obvious advantage of recording who was observed, what was observed, and where, safe observations provide several additional benefits:
- By using representative sampling, collecting both safe and unsafe observations gives you a rate of safe vs. unsafe. For example, would you be more concerned about a 50% unsafe rate in electrical work or a 2% unsafe rate?
- By actually counting a representative sample of safety observations, rather than just checking a box for the entire project, you can determine the context of your findings. For example, you find three unsafe observations for failure to use safety glasses. If only three workers were observed, this is significant. If 300 other workers were wearing safety glasses, the gravity of the finding is diminished, allowing you to focus on more severe findings.
- Safe observations allow positive feedback to be employed. The idea is to coach workers to improve and move away from the safety "cop" mentality of busting workers for violations.
- Only through safe observations can you measure improvement. Let’s say you found a lot of unsafe observations for a certain hazard and implemented an action plan to address it. How would you know it got better? Keep in mind that an absence of unsafe observations could mean nobody is looking! An improved ratio of safe vs. unsafe should support an improvement in the targeted, specific safety process you were concerned about.
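The arithmetic behind these points can be sketched in a few lines of Python. This is a minimal illustration, not a real observation system; the category names and counts below are hypothetical, chosen to mirror the 50%-vs-2% comparison above:

```python
# Sketch: why counting safe AND unsafe observations beats counting
# unsafe observations alone. All data here is hypothetical.

def unsafe_rate(safe, unsafe):
    """Return the fraction of observations that were unsafe,
    or None when nothing was observed (absence of data is not safety)."""
    total = safe + unsafe
    return unsafe / total if total else None

observations = [
    # (category, safe_count, unsafe_count)
    ("electrical", 2, 2),    # 2 unsafe out of 4 observed  -> 50% unsafe
    ("ladder use", 98, 2),   # 2 unsafe out of 100 observed -> 2% unsafe
    ("fall protection", 0, 0),  # nobody observed -> no conclusion possible
]

for category, safe, unsafe in observations:
    rate = unsafe_rate(safe, unsafe)
    if rate is None:
        print(f"{category}: no observations recorded - cannot conclude anything")
    else:
        print(f"{category}: {unsafe} unsafe of {safe + unsafe} observed ({rate:.0%} unsafe)")
```

Note that both categories with data have the same raw unsafe count (two), yet their rates differ by a factor of 25 – and the category with zero unsafe observations yields no conclusion at all, because nobody was observed.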
Keep in mind that an absence of injuries does not necessarily equate to a safe worksite, as the examples above show. So, are you ready to determine whether you are lucky or whether you are good? By documenting both safe and unsafe observations, you can actually measure "what is safe."
* Sample size = 123,612,674 total observations