Indicators Part 1: Where do I start with Human Error?

It has to be addressed: indicators. Three things make up the bulk of what many Human Performance professionals do: coordinate, teach, and track. You need to know where you are before you can tell where you are going. A GPS, for example, needs your current position before it can guide you to a destination. Indicators tell us how well or poorly we are doing, and whether or not we are improving.

Good news! This post explores what seems to be useful and what seems to be a waste of time. I was recently asked, “What constitutes a useful indicator?” In my opinion, performance must be measurable; otherwise you never know where you are, or whether you are getting better or worse, and the only tool at your disposal is something you could call "cognitive assumption," which in practice sounds like this: "I believe we are getting better, but I have no objective evidence to support my assumption. It just feels better." Cognitive analysis is okay, but not entirely scientific. Don't let anyone mislead you: performance analysis IS science.

Remember the five steps of science?

    1. Observing
    2. Scoring
    3. Measuring
    4. Analyzing
    5. Applying

Does it sound like performance indicators are a science yet?

The first place to start is determining what data you already have. How are errors and events currently tracked and processed at your facility? This can be tricky and may involve communicating with people outside your department.

Each indicator should have the following parts:

    • Definition - the concept being measured
    • Parameters - the attributes of the measure and how they actually impact performance
    • Criticality - how important the measure is and why we should care about it; does it relate to the corporate mission?
    • Data Collection - where the data comes from and when it will be provided
    • Metrics - what the visual representation of the data looks like
    • Dependencies - whether this measure correlates with another indicator in some way
    • Analysis - (the most important part!) as performance changes, can you relate it to changes and efforts to improve the measure? What is causing the measure to behave this way, and what does that imply?
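As a sketch, the parts above could be captured in a simple record so that every indicator at a facility is documented the same way. This is only an illustration; the field values below are invented examples, not a real indicator:

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """One performance indicator, with the seven parts listed above."""
    definition: str        # the concept being measured
    parameters: str        # attributes of the measure and how they impact performance
    criticality: str       # why the measure matters; tie to the corporate mission
    data_collection: str   # data source and delivery schedule
    metrics: str           # how the data will be visually represented
    dependencies: list = field(default_factory=list)  # correlated indicators
    analysis_notes: str = ""  # interpretation: what is driving the measure?

# Invented example for illustration only
event_rate = Indicator(
    definition="Monthly rate of coded events, normalized by worker-hours",
    parameters="Normalized by hours worked so months are comparable",
    criticality="Tracks the error trend leadership reviews monthly",
    data_collection="Event codes from the tracking system; hours from payroll",
    metrics="12-month trend line with a rolling average",
    dependencies=["lower-threshold error rate"],
    analysis_notes="Relate changes to specific improvement efforts",
)
```

Writing each indicator down in one consistent shape makes the Analysis part easier later, because nothing is left implicit.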

Todd Conklin weighs in (allow me to paraphrase):

At the HPRCT Conference in June 2014, Todd Conklin gave an amazing keynote speech, and even though I wasn't able to be there this year, I was able to watch it (three times! Mainly because Todd rocks). You can join the Human Performance Association (307-637-0958, *hpaweb.org*); I believe it cost me $279 to become a member for a year. Todd reminded us that you can't get better until you measure, and how important it is to figure out how to measure the things you're doing correctly. I have not seen that done before. It is so much easier to track failure by incident than positive progress by task. We are stuck looking backwards, not even in a present mindset for current performance. Metrics might predict future performance and highlight areas of interest and improvement, but they still don't give a clear measure of what performance actually is; they give a clearer picture of what failure is, and whether it is diminishing or getting worse.

So where are we?

Knowing your worker hours (typically from payroll), you should be able to calculate a monthly event rate for your company, and perhaps even by department. What constitutes an event should not be subjective; it should be as standardized as possible, following a strict library of codes. If you code a lot of issues, you may be able to calculate a lower-threshold error rate as well, but that is more subjective, because not all lower-threshold information is reported or consistently coded. With that in mind, an event rate seems to be the best common denominator between facilities if you want to compare apples to apples.
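As an illustration, here is a minimal sketch of that event-rate calculation in Python. Normalizing per 200,000 worker-hours is a common safety-statistics convention (roughly 100 full-time workers for a year); the event counts and hours below are made up for the example:

```python
def event_rate(events: int, worker_hours: float, per_hours: float = 200_000) -> float:
    """Events per `per_hours` worker-hours for one month.

    Normalizing by hours worked lets you compare months (or departments)
    of different sizes apples-to-apples.
    """
    if worker_hours <= 0:
        raise ValueError("worker_hours must be positive")
    return events * per_hours / worker_hours

# Invented example: 3 coded events in a month with 50,000 worker-hours
rate = event_rate(3, 50_000)
print(round(rate, 1))  # 12.0 events per 200,000 worker-hours
```

The same function works per department: feed it each department's event count and payroll hours, and the rates are directly comparable.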

But what about measuring good performance and not just failure?

Ah yes. This is the golden nugget we hope to find a solution to in the very near future. Do you have a suggestion for measuring positive performance? How many work orders or jobs completed satisfactorily? How many component manipulations performed successfully? How would you effectively measure and track that data set? Who would do it? Can it be automated? Keep in mind that HOW we get results is sometimes more important than the results themselves, so a positive outcome from work that was rushed and poorly performed could show up on this new measure as a good thing. This is why measuring good performance is not simple. Human Performance is about the behaviors of workers and leadership team members, and how hard is it to quantify a behavior?
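To make the difficulty concrete, here is one hypothetical sketch: count a task as "positive" only when both the outcome and the process were good. Everything here is invented for illustration, and judging the process (the `process_ok` flag) is exactly the hard, subjective part:

```python
# Hypothetical records: (task, outcome_ok, process_ok)
# process_ok: no rushing, procedure followed, tools used correctly
records = [
    ("valve lineup", True, True),
    ("valve lineup", True, False),       # good outcome, poor process: not counted
    ("work order close-out", True, True),
    ("work order close-out", False, True),
]

def positive_rate(records) -> float:
    """Fraction of tasks with BOTH a good outcome and a good process."""
    good = sum(1 for _, outcome, process in records if outcome and process)
    return good / len(records)

print(positive_rate(records))  # 0.5
```

Notice the second record: a success-by-outcome measure alone would count it, but requiring the process to be good as well filters it out. Who scores `process_ok`, and how consistently, is the open question.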

On this suggestion I have more questions than answers. I haven't seen this done yet, and I'm still trying to figure out how. Please send suggestions to humanperformancetools@gmail.com

Click these supporting links:

Characteristics of a Good Measure

Effective Performance Metrics

Videos:

Situational Metrics

How to develop KPIs

Tips for making Infographics

Example of a Human Error Infographic

What do we do with the data gathered from indicators?

Look for some answers in an upcoming post! Suggestions for topics? Any performance improvement questions or challenges you want some help solving? Send an email to the site or comment on this post. After a very busy and distracting summer, we are looking to bring new content to this site and take it new places. In case you were wondering, more posts are coming including brand new podcasts very soon, as well. Thanks for stopping by and have an event-free day.
