I don’t mean to alarm you, but I have already read that some HU practitioners are excited to get back to a “common sense approach” to error reduction. If you currently believe that, I truly hope you see things differently by the end of this post. The Cumulative Impact document needs to be understood before drastic measures are taken that counteract the effectiveness of your event-prevention and error-reduction programs. We must always be careful with recommendations that may suit the average site but not our own organization. Based on reactions to this document, I have also heard some say it is potentially the most dangerous document ever put out by INPO. That is not to say the document is flawed; how an organization responds to it, however, could be. Prior to Human Performance programs, the predominant culture blamed the last person who touched the equipment and just wanted to move on. Since then, this industry has made many amazing strides in performance improvement, and it should be very careful about turning those gains around. Am I worried about the impact of this document? Yes.
Not to say we don’t use it today, but to me the best description of “common sense” is whatever we used before the nuclear industry had human performance programs. What I see as the catalyst for changing current programs is a comprehensive Effectiveness Follow-Up (EFU) on all the things we’re spending time on that aren’t actually driving performance improvement. Where this gets us in trouble is when we cut the things that actually are “moving the needle” in the right direction. This has made ineffective observation programs a target for elimination or severe overhaul.
I’d like to highlight this portion of INPO’s “Industry Cumulative Impact Summary Report” (October 16, 2013), and I am not taking it out of context:
“Section C: Human Performance, Supervisory time with workers will shift from observing and documenting, to engaging and reinforcing expectations. The burden associated with documenting these observations will be reduced to improve the focus on coaching worker behaviors and reducing emphasis on observation documentation. Simple methods to quickly capture key gaps will be developed to allow high-level trending without challenging coaching effectiveness.”
[Author's note: Apologies, as I could not find a link to the entire document at this time]
It makes me want to ask: what exactly is “high-level trending,” as opposed to other “levels” of trending?
Do not eliminate your observation program; fix it
The number one item that scares me is cutting out observation programs completely. This document is not an excuse to get rid of a bad observation program; it is a heads-up that if your program is not effective, you should change it so that it is. It is not a permission slip to stop doing something just because it has been ineffective to date. I know of some stations that are completely transforming their observation programs, even eliminating the formal process. I could appreciate this if the process were truly burdening leadership, but that type of “burden” seems to be a fallacy to me.
“My Observation Program is a leadership burden” – Is this a justifiable statement?
The answer is “No.” Formal observations are designed to have 4 phases (the feedback and documentation phases are the two discussed below):
- Preparation
- Observation
- Feedback – where engaging and reinforcing should already be happening!
- Documentation
The “feedback” phase is for the workers being observed and is the most important part of the process, but the complaining I’ve heard says the burden comes from the “documentation” phase. Once familiar with the observation program software, a user should be able to document an observation in 3-10 minutes. If you cannot do it in that timeframe, you need a better software solution (click here for your best option).
Why should we document Observations?
Observations are documented:
- for tracking and trending timely positive and negative information;
- for obtaining, keeping, or re-obtaining training accreditation;
- as a means of proving leadership engagement for recommendations based on SOER 10-02 (which practically insists you document paired observations to prove observing supervisors are effectively engaging the workers);
- as a way to document at-risk practices linked to a condition report system;
- as a way to prove housekeeping areas were walked down at the appropriate intervals;
- to discover and document shortfalls in knowledge areas that may need a training intervention; and
- to have proactive performance data not obtainable or documented by other means.
It is truly a mistake to think that overall performance will improve with fewer documented observations.
Is coaching different than giving feedback?
Yes. This distinction has to be clear, concise, and most importantly, consistent. In a previous post, I defined coaching as “what someone says to someone else to guide them into correcting an undesired behavior.” A lot of people weighed in on that in LinkedIn forums; some agreed, while others said coaching is also the act of reinforcement. To me, reinforcement is feedback, not coaching, but I see their point, especially as it relates to sports coaching.
Human Performance Tool recommendation or requirement?
The thinking in this type of cumulative impact response reminds me of the industry pullback on the “requirement” to use the circle-and-slash method of placekeeping. Although it is a much more robust tool than simpler versions of placekeeping (signoffs and checkblocks), it became a recommendation rather than a requirement, based on the concern that a human performance tool may actually cause someone to disengage while performing a continuous-use or reference-use procedure, because they are too wrapped up in circling and slashing each step. No human performance tool should be used if it is known to distract workers from the task. It may sound simple, but without practice the act of circling and slashing each procedure step can distract the procedure reader from the actual task being performed, and reader engagement can be vital to the success of that procedure. Circle and slash is an amazing tool, totally based on STAR principles and self-checking each step, but it should be practiced and mastered before it is employed. If you don’t think so, do an observation on someone who has never used it, compared to someone who is very familiar with it.
Cumulative Impact Related Links:
PowerPoint on how cumulative burden is being addressed
NEI’s version called, “Cumulative Impact of Industry and NRC Actions”
NEI Nuclear notes: Regulation, Nuclear Energy and the Cafeteria
U.S. Nuclear Power Survival Part 2 (I really appreciate this article and I highly recommend reading it.)
Check out how DevonWay is starting to help the Cumulative Impact effort (YouTube video)
NEI’s November 7, 2013 presentation on “Cumulative Impacts”
From Slide 18:
“• Changing cultures - What is perceived as important to [a] specialist may be of low relative safety significance”
I’m having a hard time accepting that a specialist’s (or subject matter expert’s) experience has been disregarded in such a manner. More practitioners need to pay attention to this particular bullet point. I interpret it as “someone else (who is not a specialist) thinking they know better, and trying to change ‘culture’ based on their own expertise versus that of actual experts.” After doing some of my own research, I now understand why some practitioners are saying this document could be “dangerous.” Will this be a return to the 1990s culture, when HU practitioners had to convince executive management that event-elimination and error-reduction programs (including observations) were necessary and within the realm of possibility? If so, practitioners are going to have more work in front of them than just managing a program. I hoped this was behind us for good.
There is still a balance that needs to be struck between production and protection (go to page 15).
I want to make something absolutely crystal clear: I personally view INPO (and NEI) and the bulk of their products as excellent. I am constantly striving to improve what I do and how I do it, and they have a similar mission; that is something I can appreciate. I also freely admit that I am not collectively smarter than all of the people who have done the hard work to implement cumulative impact reduction. I am cautiously optimistic that when sites evaluate how this will be translated into their leadership cultures, they will still use conservative decision making and a graded approach. If you are a practitioner, my advice is: do not ignore how this is being implemented at your facility.