5 Common Myths About Learning Measurement
Why the trainer has a critical role to play.
Five common myths surround learning measurement. Trainers and L&D professionals who deliver training according to these misconceptions deprive their participants, and themselves, of the opportunity to provide considered training where everyone leaves confident that the learning can be applied back at the workplace.
Learning Measurement Demystified
Here, we’ll explore these five common myths and highlight the trainer’s critical role in ensuring training is delivered with care, so that participants are given every opportunity to learn, build confidence, and succeed.
MYTH 1: Learning Measurement = an Evaluation Form
Let’s get back to basics
How do you measure using a ruler? Only at the end? Of course not: you have a starting point and an ending point, and you measure the gap between them.
What’s happened in L&D?
In learning and development, the term ‘Measurement’ has morphed into an activity conducted at the end of a learning program. If that is the case, it isn’t Measurement at all. In its simplest form, an evaluation conducted via a paper form or an online survey tool is essentially a qualitative indicator of what participants did or didn’t like about the program. At best, it informs the L&D team what participants say they have learned and intend to apply as a result of the learning program.
Overcoming Myth 1
Let’s retire the term ‘learning measurement’ unless we are talking about true Measurement: measuring skills and knowledge before participants start their learning program, during the program, and after the program. That’s Measurement!
MYTH 2: Learning can’t be measured during a training course… it can only be successfully measured back at the workplace
If this were true…
…how do we know whether participants can apply new skills and ideas in their jobs?
Too often, trainers guess at participants’ abilities after a course, saying things like: ‘Sally will go okay…’ or ‘Juan is going really well; I wish there were more like him…’
Learning measures must be taken during the training course if you want to say you know they know.
Overcoming Myth 2
The following questions help to uncover your current perspective on Myth 2 and suggest ways to overcome it:
- What’s the difference between ‘care’ and ‘genuine care’?
  - Do you genuinely care if your participants learn, retain, and are ready to apply their learning?
  - If so, what specific things are you doing in your training courses to show this?
- How much time do I take to measure learning?
  - Do you check and measure the progress of each individual throughout the course?
  - If so, what is the balance of time between learning new content and checking that participants truly know that content?
- How can I create a state of knowing they know?
  - Do you know the exact level each of your participants is at throughout the training course?
  - If so, how do you know?
- How do I create a strategy for participants who don’t measure up?
  - Do you take responsibility for filling gaps in participants’ learning during a course?
  - Do you have a strategy for participants who need further help?
  - If you do not have a formal strategy, what do you do when participants require assistance?
- Who can help me?
  - Do you actively engage participants’ managers to help ensure learnings are applied?
Final Note on Myth 2
If you ensure that strategies and work practices are in place during the training course, the odds of successful learning transfer are greater. Furthermore, post-course evaluation and Measurement become richer and more valuable.
Relationships between the trainer, the participant, and their manager will be greatly enhanced as you all strive to get the maximum outcome from the course content and apply it back in the workplace.
MYTH 3: A written test or exam measures learning
In the past few years, many books and papers have been written about how adults learn, covering subject domains such as adult emotions and emotional responses to learning and learning environments.
Summary of Exam-Style Environment
Professional trainers know that some adults do well at exams and tests, and others do not. The results an adult gets may or may not be aligned to the amount of learning that has taken place. Rarely does the exam environment mirror the environment in which the learning will be applied. On the job, the participant has all sorts of formal ways to find answers to their problems – reference material, help desks, training aids, and more. Significantly, many adults are skilled at building informal networks of people from whom they can seek help.
Overcoming Myth 3
Avoiding the exam-style environment should be a top-of-the-list priority for all professional trainers. Learning can be successfully measured without this pressure, using review activities and post-course learning measures. The results of these measures will not only be better, but they will also be more accurate.
Given the chance, we as trainers should relay to a participant’s manager results that are better and more accurate, backed by factual data, rather than the often-inaccurate results of a closed-book examination.
What About Industry-Set Exams & Compliance Regulations?
If training must include a formal assessment, take a proactive approach to ensure that participants are confident, skilled, and ready to sit the exam. If they undertake regular review activities throughout the course and sit preparatory exams, most participants will approach even the most rigorous exam with confidence.
MYTH 4: An evaluation form is a measure of learning
When we were children, our parents taught us how to behave and how to treat our teachers at school. The teachers reinforced these lessons, shaping how adults tend to act in a learning environment.
A lesson from children
The myth of evaluation forms being a measure of learning can be illustrated by just some of these childhood lessons. Conditioned to be polite to their teachers, participants rarely write anything other than good things about the trainer and the course content on the evaluation form; when they have nothing flattering to write, they leave the section blank. It is because of these childhood lessons that evaluation forms are often, accurately, called ‘smile sheets.’
What Are Evaluation Forms Expected to Measure?
In organizations across the world, evaluation forms are used to measure the trainer’s competence, the course content, and how much learning has taken place. In reality, evaluation forms generally evaluate the training course’s execution, administration, trainer, and content presented, but they do not and cannot measure learning.
If Evaluation Forms Don’t Measure Learning, What Does?
Only review activities, run at regular intervals throughout the training course, combined with post-course measures, have a chance of measuring what participants have learned.
Review activities can measure what participants know at the time of learning. If there are gaps in the knowledge, the participant and trainer can work together to fill those gaps. Reviews also measure the readiness of the participant to apply their learning.
The outcome of review activities can provide valuable data against which participants and trainers can be measured. The data they provide needs to be combined with other information to compile a complete picture of competence.
A Place for Evaluation Forms
Evaluation forms are evidence of attendance and a first-line general opinion of how the course went for participants. Simply combining the results of review activities with those of evaluation forms is a step in the right direction for creating valuable information for participants, trainers, and the organization. However, nothing takes the place of gathering accurate learning analytics when measuring before, during, and after the learning program.
MYTH 5: There’s no time left for reviews
A time-bound trainer who feels the need to get through lots of content and finish on time will be tempted to drop reviews from a course.
As far as time goes, review activities are self-accommodating. Time taken to run the review leads to a faster pace of learning. It does not add to the training day but is rather absorbed into the day. The learning pace quickens because of the social proof operating in the training room, the high level of confidence in the participants, and the usefulness or relevance of the material being learned.
As professional trainers, our goal is that participants know 100%, or close to it, of all the topics covered by the end of a training course. The only way we know this is to see evidence of it for ourselves, and the outcomes of review activities help us to know they know.
However, there comes a time when we have to choose between covering content and knowing they know. We have a certain amount of training content to cover, and the learning objectives reflect this expectation.
What’s essential is for the trainer to ensure the learner is confident and able to apply the material taught, rather than cramming in more content just to say everything was covered.
What if you get behind schedule?
If you are ever behind time, drop content before dropping a review activity. It is better for participants to be confident and ready to apply everything you have taught them than to be confused and overwhelmed by a mass of content with no opportunity to check understanding. Leaving training confused could undermine their ability to apply even the learning they were once confident with.
A Final Note on Myth 5
The bottom line is – you don’t have time not to have time for review activities!
If your purpose is to have confident participants ready to apply their learning, you should create time. If you are conducting a content-rich course that doesn’t have review activities, you can add them to your course without scheduling additional time.
Measuring learning is not a ‘smile sheet’ distributed to participants at the end of a learning program. Undertaking analytics on a participant’s learning progression looks simple; however, there are myriad underlying complexities.
As learning and development professionals, let’s collectively squash these five myths and focus on developing learning programs that deliver results and providing key stakeholders with metrics that matter.
ID9 Intelligent Design powers learning programs with pre-, during and post-course activities that form the basis of learning analytics. When partnered with the AI-driven platform ID9 IMPACT, organizations can confidently and accurately measure learning for individual participants, groups, teams, or an entire learning curriculum.