Monday, January 22, 2007

Why we often do not evaluate training

Why is training so often not evaluated?
Richard J Wagner and Robert J Weigand

Contact:
Dr. Richard J Wagner
Professor of Management
University of Wisconsin – Whitewater
Whitewater, WI 53190
(262) 472-5478
wagnerr@uww.edu

Welcome back to the assessment and evaluation ‘community’ of Talent Management. Our first contribution looked at why it is important to evaluate the effectiveness of training and performance improvement programs. In the last issue we looked at how we can evaluate training effectiveness, using one ‘model’ for evaluating programs: Donald Kirkpatrick’s four levels. So now we have some idea of why and how we should evaluate these programs. This issue looks at the next question: do we actually evaluate training and performance improvement programs or not, and if not, why not?
The first question – are programs evaluated or not? – can best be answered by looking at data from the American Society for Training & Development (ASTD.ORG). In its annual survey of members, ASTD used Kirkpatrick’s four levels to ascertain whether and how organizations evaluated their efforts. A summary of the results of the most recent (2005) survey is found in the table below:


Level 1 (Reaction): organizations that evaluate their programs based on the reaction of the participants – 91%
Level 2 (Learning): organizations that evaluate their programs based on what participants learn from the program – 54%
Level 3 (Behavior): organizations that evaluate their programs using participant behavior change – 23%
Level 4 (Results): organizations that base their evaluations on the results obtained from these programs – 8%

The good news from the above data is that 91% of organizations evaluate their programs in at least some way. The bad news is that so few programs are evaluated in a meaningful way. A lot of organizations find out whether the participants liked the program (for example, “I enjoyed the day session at the luxury hotel instead of working all day in the hot sun”), but very few organizations assess the real organizational issues of employee behavior change (e.g., improved safety practices) or organizational results (increased sales, increased quality, reduced absenteeism). This trend has not changed much over the years, and we often ask ourselves: given the compelling and growing need to meaningfully evaluate all performance improvement efforts, why don’t more organizations do it?
Our years of work in this field have yielded some answers to this question, including the following:
1. There is no time to really assess these programs – the time is best spent developing programs
2. It is too expensive to do
3. We do not have the expertise to do a good evaluation
4. There are many factors beyond the program that determine success or failure
5. Maybe we don’t want to know (while our assessment could confirm it is a good program, it could also suggest it is not)
Time is always a precious and scarce commodity, but perhaps the emphasis on offering more programs would be better shifted to offering better programs. Aligning programs with the organization’s strategic goals would help managers decide where to best invest their efforts.
Doing an evaluation can be expensive, but so can not doing one. In working with programs we have often found that about 5% of the cost of the program should be allocated to the evaluation. Obviously that number will vary a lot, and one of the things we try to focus on is doing an efficient evaluation. That means that if the program is costly, ongoing and really critical to organizational success, we should evaluate it accordingly. If the program is short-lived and of importance to only a small part of the organization, then the evaluation should reflect that value. One method we use to help deal with this is to look at the evaluation in two ways: short-term and long-term. Short-term evaluation usually provides some immediate feedback to everyone and often focuses on issues such as reaction and learning. Long-term evaluation (often 3 months or more after the program) may focus more on issues such as behavior and organizational results.
The third excuse is that we do not have the expertise to do a good evaluation. Our first foray into the world of measuring the results of training led us to the concept of utility analysis and the rather imposing formula shown below:
ΔU = (N)(T)(d_t)(SD_y) − C
ΔU is the expected gain (in dollars) to the organization from a training intervention (the utility of the program)
N is the number of employees trained
T is the expected duration of the benefits of training
d_t is the true difference in job performance between the trained group and the untrained group, in standard deviation units
SD_y is the dollar value of one standard deviation of performance
C is the cost of the training intervention
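To make the arithmetic concrete, here is a minimal sketch in Python of how the formula could be applied. The function name and every number in the example are hypothetical, chosen only to illustrate how the pieces fit together, not drawn from any actual program.

    # Minimal sketch of the utility analysis formula above.
    # All names and numbers here are hypothetical illustrations.

    def training_utility(n_trained, duration, d_t, sd_y, cost):
        """Expected dollar gain (delta-U) from a training intervention."""
        return n_trained * duration * d_t * sd_y - cost

    # Hypothetical example: 50 employees trained, benefits assumed to last
    # 2 years, a true performance difference of 0.4 standard deviations,
    # one standard deviation of performance worth $10,000, and a total
    # program cost of $75,000.
    delta_u = training_utility(n_trained=50, duration=2, d_t=0.4,
                               sd_y=10_000, cost=75_000)
    print(f"Expected utility: ${delta_u:,.0f}")  # Expected utility: $325,000

The arithmetic itself is simple; the hard part in practice is defending the estimates that go into it, especially d_t and SD_y.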

It did not take us long to realize we were in way over our heads in trying to use this to really evaluate the results of our programs. Fortunately, we have been able to move beyond this and greatly ‘simplify’ the concept of evaluation. More on this in future issues.
The fourth excuse is that many factors beyond the program determine success or failure. Concluding that a one-day training program increased company profits by 82% that year is a little hard to grasp – and probably not a valid conclusion. But to use that as the basis for not evaluating a program at all is equally silly. What we have worked on (and will discuss in future articles) are ways to link behaviors and results so that we neither conclude that a program changed everything nor use that complexity as an excuse not to evaluate at all.
The final excuse is that maybe we do not want to know what is going on, because it may be that the program does not accomplish its objectives. The answer revolves around a simple question: do you want a good program or not? We leave that to each manager’s own judgment, but suggest that the time for that kind of thinking is in the past.
Next month we begin to look at how we are actually going to evaluate programs.
Join us then.







Dick Wagner is a Professor of Human Resource Management at the University of Wisconsin – Whitewater in Whitewater, Wisconsin.

Bob Weigand is the Director of Management Training and Development at St. Luke's Hospital and Health Network in Bethlehem, Pennsylvania.

Tuesday, October 31, 2006

Information About the Articles

On the left-hand side of the screen there are new links. These links go to articles that Richard and Robert have published or are getting published. New links will be added as more articles become available. Enjoy the articles and check back for more later.