Are all disasters preventable? (Earthquakes and Positive Train Control)

The Chief of Disaster Risk Reduction at the United Nations Environment Programme, Muralee Thummarukudy, stated, “Earthquakes don’t kill people, buildings do.” Click here for more on an environmental overview of the 2010 earthquake in Haiti – page 17. When the dust finally settled in Haiti, people were able to identify the single most important agent of mass death and destruction: concrete. Thummarukudy expanded on that point in his TEDx talk: poorly designed buildings kill people. Click the picture to the left to see the very impressive 16-minute TEDx talk.

Thummarukudy also said, “We should always look backward before you plan forward.” This rings true of the “Be Prepared” post. Think of how important lessons learned can be when we properly incorporate them into the next performance of a task. Knowledge is transferred.

Thummarukudy has a blog you can find by clicking here.

Click here for an interview of Thummarukudy by a student in Bangalore on what a career in disaster management is like.

A little about Engineered Barriers and PTC

So, why talk about disaster management and PTC in the same post? Engineered safeguards (barriers) are the best defense against human error and natural disaster. It reminds me of the radiation-exposure-reduction concept for nuclear workers – ALARA (As Low As Reasonably Achievable) – and how time, distance, and shielding are the engineering controls designed to protect the worker.

According to a June 19, 2013 YouTube clip, the American railroad infrastructure needs billions of dollars for track improvement and updating. In the wake of a large number of train disasters in 2013, lawmakers are trying to figure out who will pay what, and by when it is needed. One proposal is to install new anti-crash signaling technology called Positive Train Control, but who will pay for it, who needs it, and by when? These are interesting topics to follow as time progresses.

PTC stands for Positive Train Control. According to Wikipedia, it is a system of functional requirements for monitoring and controlling train movements to provide increased safety.

The American Railway Engineering and Maintenance-of-Way Association (AREMA) describes Positive Train Control as having these primary characteristics:

  • Train separation or collision avoidance
  • Line speed enforcement
  • Temporary speed restrictions
  • Rail worker wayside safety

Positive Train Control (PTC)

  • Designed to eliminate the verbal “read and repeat” process
  • Real-time monitoring
  • Manual Switch positions
  • Distribution of speed restriction

PTC with an affective (emotional) message here.

Click here for an older video showing how ETMS (Electronic Train Management System – BNSF’s version of Positive Train Control) works. Since the system is still a work in progress, things in this video are likely to change for the better, but it is still a great demonstration of how the system works. In the end, it will simply make for a safer railway.

More info on PTC can be found at Wikipedia here.

Indicators Part 1: Where do I start with Human Error?

It has to be addressed: Indicators. Three things make up the bulk of what many Human Performance Professionals do: Coordinate, Teach, and Track… You need to know where you are so you can tell where you are going. For example, a GPS needs to know where you are before it can provide guidance to a destination. Indicators help us know how badly or well we are doing, and also, if we are improving or not.

Good news! This post will explore a little of what seems to be useful, and what seems to be a waste of time. I’ve recently been asked, “What constitutes a useful indicator?” In my opinion, performance must have a way to be measured; otherwise, you never know where you are, or whether you are getting worse or better, and the only tool at your disposal is something you could call “Cognitive Assumption,” which in reality sounds something like this: “I believe we are getting better; however, I have no objective evidence to support my assumption. It just feels better.” Cognitive analysis is okay, but not entirely scientific. Don’t let anyone mislead you; performance analysis IS science.

Remember the five steps of science?

  1. Observing
  2. Scoring
  3. Measuring
  4. Analyzing
  5. Applying

Does it sound like performance indicators are a science, yet?

The first place to start is by determining what data you already have. How are errors or events currently tracked or processed at your facility? This can be tricky and may involve communicating with others, even outside your department.

Each indicator should have the following parts:

  • Definition – the concept being measured
  • Parameters – What are the attributes of the measure and how do they actually impact performance?
  • Criticality – How important the measure is and why we should care about it. Does it relate to the corporate mission?
  • Data Collection – Where does the data come from, and when will it be provided?
  • Metrics – What does the visual representation of the data look like?
  • Dependencies – Does this measure correlate with another indicator in some way?
  • Analysis – (The most important part!!!) As performance changes, can you relate it to changes and efforts to improve the measure? What is causing the measure to be this way and what does that imply?
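The parts above could be captured in a simple record so each indicator is defined consistently. Here is a minimal sketch in Python; the class, field names, and the example entry are all illustrative assumptions of mine, not a standard or anything from the post.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """One performance indicator, with the parts described above.

    All field names are illustrative, not from any standard.
    """
    definition: str          # the concept being measured
    parameters: list[str]    # attributes of the measure that impact performance
    criticality: str         # why we should care; link to the corporate mission
    data_source: str         # where the data comes from and when it arrives
    metric: str              # how the data is visualized (trend line, rate, etc.)
    dependencies: list[str] = field(default_factory=list)  # correlated indicators
    analysis: str = ""       # the most important part: what drives the measure?

# Hypothetical example entry
event_rate = Indicator(
    definition="Monthly station event rate",
    parameters=["coded events", "worker-hours"],
    criticality="Directly tied to the event-free-day corporate goal",
    data_source="Condition report system plus payroll hours, monthly",
    metric="12-month rolling trend line",
    dependencies=["lower-threshold error rate"],
    analysis="Rate changes should trace back to specific improvement efforts",
)
```

Writing indicators down this way forces you to answer every part of the checklist before the indicator goes on a dashboard.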

Todd Conklin weighs in (allow me to paraphrase):

At the HPRCT Conference in June 2014, Todd Conklin gave an amazing keynote speech, and even though I wasn’t able to be there this year, I was able to watch it (three times!!! – mainly because Todd rocks). You can click here and join the Human Performance Association (307-637-0958 **). I believe it cost me $279 to become a member for a year. Todd reminded us that you can’t get better until you measure, and how important it is to figure out how to measure the things you’re doing correctly. I had not seen it put that way before. It is so much easier to track failure by incident than positive progress by task. We are stuck looking backward, not even in a present mindset for current performance. Metrics might predict future performance and highlight areas of interest and improvement, but they still do not give a clear measure of what performance actually is – more a clear picture of what failure is, and whether it is diminishing or getting worse.

So where are we?

Knowing your worker hours (typically from payroll), you should be able to calculate a monthly event rate for your company, and perhaps even by department. What constitutes an event should not be subjective; it should be as standardized as possible, following a strict library of codes. If you code a lot of issues, you may be able to calculate a lower-threshold error rate as well, but that gets into more subjective territory, because not all lower-threshold information is reported or consistently coded. With that in mind, an event rate seems to be the best common denominator between facilities if you want to compare apples to apples.
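The event-rate arithmetic above is simple enough to sketch. This minimal example assumes events are normalized per 200,000 worker-hours, the OSHA-style incidence-rate convention; the normalization factor and the sample numbers are my assumptions, not from the post, so substitute whatever your industry benchmarks against.

```python
def monthly_event_rate(events: int, worker_hours: float,
                       per_hours: float = 200_000) -> float:
    """Events per `per_hours` worker-hours for one month.

    200,000 hours is the OSHA-style normalization (roughly 100
    full-time workers for a year); it is an assumed default here.
    """
    if worker_hours <= 0:
        raise ValueError("worker_hours must be positive")
    return events * per_hours / worker_hours

# Hypothetical month: 3 coded events across 150,000 worker-hours
rate = monthly_event_rate(3, 150_000)
print(rate)  # 4.0
```

Because the rate is normalized, a small department and a large site can be compared on the same chart, which is the "apples to apples" point above.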

But what about measuring good performance and not just failure?

Ah yes. This is the golden nugget we are hoping to find a solution to in the very near future. Do you have a suggestion for how to measure positive performance? How many work orders or jobs you’ve completed satisfactorily? How many component manipulations you’ve performed successfully? How would you effectively measure and track that data set? Who would do it? Can it be automated? Keep in mind that HOW we get results is sometimes more important than the results themselves. A positive outcome that was performed rushed and poorly may show up on this new measure as a good thing… This is why measuring good performance is not simple. Human Performance is about the behaviors of workers and leadership team members, and how hard is it to quantify a behavior?

On this suggestion I have more questions than answers. I haven’t seen this done yet, and I’m trying to figure out how to do it. Please send suggestions to the site.

Click these supporting Links:

What makes a good Metric?

Developing Performance Measurements

PDF – Creating and Using Effective Performance Metrics


Situational Metrics

How to develop KPIs

Tips for making Infographics

Example of a Human Error Infographic

What do we do with the data gathered from indicators?

Look for some answers in an upcoming post! Suggestions for topics? Any performance improvement questions or challenges you want some help solving? Send an email to the site or comment on this post. After a very busy and distracting summer, we are looking to bring new content to this site and take it to new places. In case you were wondering, more posts are coming very soon, including brand new podcasts. Thanks for stopping by, and have an event-free day.

Are you going to post more HU-related content? Absolutely!!

Hello, HU colleagues worldwide. This site is still being updated and will continue to be a source of new HU information and links very soon. Over the past couple of months, the main author has been working, spending extra time with his kids, designing training, and starting an amazing new HU job opportunity. A lot of exciting things are going on, and the second half of 2014 is going to be phenomenal.

The HU toolbox community is over 100 members and continuing to grow! It remains free to sign up, and it will ensure you never miss a new blog post or podcast.

Contact us

The emails that have come in have been great. This community is filled with interesting and amazing talent dealing with humans and behavior. Your questions, challenges, wisdom, and insight remain vital for the best quality posts. Keep them coming.

Call to action – new T&D HU network needed

Please contact this site if you are an HU professional in Transmission and Distribution. This site’s main author is building a new network for benchmarking and support purposes. Please let us know what you do, and email us your contact info.

Quick challenge question:

What is your favorite HU Book?

Podcast Episode 6: Error prevention at a Connecticut Hair Salon (Interview)

Nestled in Norwich, Connecticut, you can find a hair salon/studio called “Details.” A few months ago, the owner sat down with me to talk about how our two industries relate when it comes to error prevention.

Heidi Duff is someone who completely understands her “calling.” Even after 26-plus years in the business, she approaches her daily work with high energy, enthusiasm, and a constant desire to be one of the best in her field. I totally respect that type of attitude, and I feel even more energized with every conversation I have with her. On this podcast, we share a candid conversation about what it’s like being the salon owner, and about some common human errors that need to be avoided in this billion-dollar and VERY personal industry. We talk about how important error avoidance is to the reputation of each salon, and how, through timely feedback, employee meetings, and training, her staff not only become top-notch but stay that way, too. I love their slogan: “Inspire. Design. Evolve.”

This was such a fun interview, and not just because home-made shepherd’s pie was involved! Anyone who has ever met Heidi knows that she has something to say AND that it’s worth listening to. I was able to learn the most important step in the salon service process: the consultation with the client.

Common error avoidance, having a good community reputation, worker feedback, training, pre-job briefs…. Get ready to learn how this all relates to Human Performance Tool usage…

Are you afraid of misinterpretations of INPO’s Cumulative Impact Document?

I don’t mean to alarm you, but I have already read that some HU practitioners are excited to get back to a “common sense approach” to error reduction. If you currently believe that, I truly hope you understand things differently by the end of this post. The Cumulative Impact document needs to be understood before drastic measures are taken that counteract the effectiveness of your event-prevention and error-reduction programs. We must always be careful with recommendations that may not suit our organization, even if they suit the average site. Based on reactions to this document, I have also heard some say that it is potentially the most dangerous document ever put out by INPO. This is not to say the document is flawed, but that how an organization responds to it could be. Prior to Human Performance programs, we had a predominant culture that blamed the last person who touched it and just wanted to move on. Since then, this industry has made many amazing strides in the area of performance improvement, and it should be very careful about turning positive improvement gains around. Am I worried about the impact of this document? Yes.

Not that we don’t use it today, but to me the best description of “common sense” is what was used prior to having human performance programs in the nuclear industry. What I see as the catalyst for changing current programs is a comprehensive Effectiveness Follow-Up (EFU) on all the things we’re spending time on that aren’t actually driving performance improvement. Where this gets us in trouble is when we cut things that actually are “moving the needle” in the right direction. This has made ineffective observation programs a target for elimination or severe overhaul.

I’d like to highlight this portion of INPO’s “Industry Cumulative Impact Summary Report” (October 16, 2013), and I am not taking it out of context:

“Section C: Human Performance, Supervisory time with workers will shift from observing and documenting, to engaging and reinforcing expectations. The burden associated with documenting these observations will be reduced to improve the focus on coaching worker behaviors and reducing emphasis on observation documentation. Simple methods to quickly capture key gaps will be developed to allow high-level trending without challenging coaching effectiveness.”

[Author’s note: Apologies, as I could not find a link to the entire document at this time]

It makes me want to ask: what exactly is “high-level trending,” as opposed to other “levels” of trending?

Do not eliminate your observation program; fix it

The number one item that scares me is cutting out Observation programs completely. This document is not an excuse to get rid of a bad observation program, but instead a heads-up that if your program is not effective, you should change it so it will be. It is not supposed to be treated as a permission slip to stop doing something just because it has been ineffective to date. I know of some stations that are completely transforming their Observation programs, even eliminating the formal process. I could appreciate this if the process were truly causing a burden on leadership, but this type of “burden” seems to be a fallacy to me.

“My Observation Program is a leadership burden” – Is this a justifiable statement?

The answer is “No.” Formal Observations are designed to have 4 phases:

  1. Preparation
  2. Performance
  3. Feedback – where engaging and reinforcing should already be happening!
  4. Documentation

The “feedback” phase is for the workers being observed and is the most important part of the process, but the complaining I’ve heard suggests the burden comes from the “documentation” phase. Once familiar with the Observation program software, a user should be able to document an observation in 3-10 minutes. If you cannot do it in that timeframe, you need a better software solution (click here for your best option).

Why should we document Observations?

Observations are documented:

  • for tracking and trending timely positive and negative information;
  • for obtaining, keeping, or re-obtaining training accreditation;
  • as a means of proving leadership engagement for recommendations based on SOER 10-02 (which practically insists you document paired observations to prove observing supervisors are effectively engaging the workers);
  • as a way to document At-Risk practices linked to a condition report system;
  • as a way to prove housekeeping areas were walked down at the appropriate intervals;
  • to discover and document shortfalls in knowledge areas that may need a training intervention; and
  • to have proactive performance data not obtainable or documented by other means.

It is truly a mistake to think that overall performance will improve with fewer documented observations.

Is coaching different than giving feedback?

Yes. This distinction has to be clear, concise, and, most importantly, consistent. In a previous post, I defined coaching as “what someone says to someone else to guide them into correcting an undesired behavior.” A lot of people weighed in on that in LinkedIn forums; some agreed, while others said coaching is also the act of reinforcement. To me, reinforcement is feedback, not coaching, but I see the point, especially as it relates to sports coaching.

Human Performance Tool recommendation or requirement?

The thinking in this type of cumulative impact response reminds me of the industry pullback on the “requirement” to use the circle-and-slash method of placekeeping. Because it is a much more robust tool than simpler versions of placekeeping (signoffs and checkblocks), it became a recommendation rather than a requirement, based on the concern that a human performance tool may actually cause someone to not be engaged or thinking while performing a continuous- or reference-level-of-use procedure, because they are too wrapped up in circling and slashing each step. No human performance tool should be used if it is known to distract workers from the task. It may sound simple, but without practice, the act of circling and slashing each procedure step can actually distract the procedure reader from the task being performed. Reader engagement can be vital to the success of that procedure. Circle-and-slash is an amazing tool, totally based on STAR principles and self-checking each step, but it should be practiced and mastered before it is employed. If you don’t think so, do an observation on someone who has never used it, compared to someone who is very familiar with it.

Cumulative Impact Related Links:
Powerpoint for how cumulative burden is being addressed

NEI’s version called, “Cumulative Impact of Industry and NRC Actions”

NEI Nuclear notes: Regulation, Nuclear Energy and the Cafeteria

U.S. Nuclear Power Survival Part 2 (I really appreciate this article and I highly recommend reading it.)

Check out how DevonWay is starting to help the Cumulative Impact effort (YouTube video)


NEI’s November 7, 2013 presentation on “Cumulative Impacts”

From Slide 18:
“•Changing cultures -What is perceived as important to [a] specialist may be of low relative safety significance”
I’m having a really hard time believing that a specialist’s (or Subject Matter Expert’s) experience has been disregarded in such a manner. More practitioners need to pay attention to this particular bullet point. I interpret it as “someone who is not a specialist thinking they know better and trying to change ‘culture’ based on their own expertise versus actual experts.” After doing some of my own research, I now understand why some practitioners are saying this document could be “dangerous.” Will this be a return to the 1990s culture where HU Practitioners had to convince executive management that event-elimination and error-reduction programs (including observations) were necessary and within the realm of possibility? If so, practitioners are going to have more work in front of them than just managing a program. I hoped this was behind us for good.
There still is a balance that needs to be struck between production versus protection (click here and go to page 15).
I want to make something absolutely crystal clear: I personally view INPO (and NEI) and the bulk of their products as excellent. I am constantly striving to improve what I do and how I do it, and they have a similar mission; that is something I can appreciate. I also freely admit that I am not collectively smarter than all of the people who have done the hard work to implement cumulative impact reduction. I am cautiously optimistic that when sites evaluate how this will be translated into their leadership cultures, they will still use conservative decision making and a graded approach. If you are a practitioner, my advice is to not ignore how this is being implemented at your facility.