Learning Evaluation: Oh Please No! Don’t Make Me Think of It!

We are learning professionals! We take pride in our work. We are passionate in our efforts to build effective learning programs. Today, more than ever before, we see amazing potential in elearning.

Yes, we know we’re not perfect. Who is? We know that we should probably be doing more upfront analysis of work context and learning needs. We know that we’d be even more effective if we were allowed to provide more after-learning follow-up. But the thing that really leaves us cold is LEARNING EVALUATION. We know we must evaluate, but we also know that most of our evals just aren’t worth the paper they used to be printed on. And our organizations aren’t interested in investing in better evaluations. Too often, when it comes to evaluation, we feel paralyzed.

Sometimes we get hope. A vendor shows us a beautiful dashboard. Another talks about artificial intelligence. Another gushes about predictive analytics. But when we look closely, we see that their data is still built on the same backbone—poorly constructed surveys of learners.

What is a learning professional supposed to do?

My Continuing Journey in Learning

Hello! I’m Will Thalheimer! I’ve been an L&D professional for decades. I’ve been an instructional designer, trainer, elearning developer, project manager, research translator, and consultant. About 15 years ago, I began thinking seriously about learning measurement. My thinking about evaluation actually started when I was looking at the learning research. I noticed that we tended to measure learning in ways that were biased. Measuring knowledge just after people learned hid the fact that they might forget almost everything within a week or so. Measuring competence in the training room, where the context reminds learners of what they learned, gave the false impression that remembering would be strong everywhere, even without that contextual support.

I started asking myself tough questions about my own learning evaluations. I was not pleased with my practices. Looking back, I see that I used happy sheets that gave me and my team false hope that we’d done well. Looking back, I see that I had failed to measure what was important to measure—instead measuring what was easy to measure.

To make a long story short: I’ve been on a journey to build more effective learning evaluations, and I’m still on that journey. I’ve tried to innovate, looking to create improvements that are workable. I haven’t achieved perfection, and I still have a lot to learn, but judging by the reactions of the people and companies adopting these new methods, real improvements have clearly been made.

Learning Surveys

Outside of attendance and course completions, the most common way organizations measure learning is through learner surveys. Unfortunately—as scientific research attests—traditional happy sheets are virtually uncorrelated with learning results. The data we get tell us nothing about effectiveness! There are several problems with our happy sheets. I’ll go into detail in my upcoming session, but in short, the way we ask questions is problematic. In my 2016 award-winning book, Performance-Focused Smile Sheets, I developed a radically improved method for asking questions. I’m now calling the process “precision questioning,” because we give learners more precise question-answering guidance and so we get more precise data.


LTEM

In addition to learner surveys, there is LTEM, a new learning evaluation model that is sweeping the world. FYI, LTEM is pronounced “L-tem” and stands for the Learning-Transfer Evaluation Model. It has several advantages over other frameworks; most notably, it is inspired by the science of learning and acknowledges that learning itself can be measured at different potencies. Specifically, LTEM has separate tiers for knowledge, decision-making competence, and task competence, instead of cramming all learning into one bucket. Achieving an improvement in knowledge is not as important as achieving improvement in decision-making and task competence.

LTEM went through 12 iterations before it was ready for prime time, based on feedback from learning professionals with expertise in many areas, including learning science and learning evaluation. LTEM is now being used to help companies plan their learning-evaluation strategies, look for measurement opportunities, credential learning, and support the learning development process. In my upcoming session, I will provide details about LTEM and describe how you can use it in your organization.

My Invitation to You

Learning evaluation has been mostly stuck in the same orbit for half a century. With recent innovations, we are unfreezing our practices and drastically improving the opportunities we have in learning evaluation.

Please come join me in my session on October 26th. We as learning professionals are all in this learning-evaluation thing together!
