In a post-training debrief with an intercultures customer and consultants this financial quarter, the group discussed how best to interpret basic evaluation data from a recent training. One intercultures consultant shared her personal observation that professionals of her Millennial generation often expect quick wins and swift progress up the professional ladder. The same consultant said she sees some Millennial participants become disappointed when they encounter training that is process-rich and product-poor. She, too, feels inclined to push her career forward faster, and is sometimes impatient with the time it takes to grow competence and clientele. As a trainer, this consultant is also witness to the fact that (adult) learning is a process, one about which countless theories and models have been designed. Still, she steps aside for participants who feel the need to hurry up and wait for the learning.
Can training evaluation be scary?
Offering an evaluation to training participants can feel like a risky proposition for a professional trainer when expectations are mismatched and unmet. There’s a service-oriented desire to please the customer by delivering particularly positive evaluations at the end of a training day, and to be invited back for repeat business. Maybe you have firsthand experience collecting participants’ evaluations only to discover critical scores and/or narrative comments. Receiving “middle of the line” results on a numerical scale, or comments like, “I feel like two days of my life have been wasted,” can feel less than rewarding for trainer and participant alike.
But so-called “strong” training varies. Business-critical learning, behavioral change and results are not always accompanied by favorable evaluation responses. It’s time to talk more about what our training evaluation data is actually communicating.
The four levels of training evaluation
intercultures distributes a one-page, paper, “smiley face” evaluation to participants at the conclusion of each training. It classifies as a first-level evaluation in Kirkpatrick’s Four-Level Training Evaluation Model. Level one captures respondents’ reactions; levels two through four are categorized as learning, behavior and results, respectively.
Deeper-level evaluations are often not viable. Customers commonly lack the budget or interest to invest in a more profound measure of impact. Even with engaging training, participants often rush to complete and hand in evaluations in order to return to work or home.
How can we improve the quality of our evaluation processes?
The quality of the conversation may matter more than the depth of the evaluation. A 30-minute debrief about what influenced the content of evaluations can greatly inform the learning of organizations, trainers and training participants.
Consider how the following three categories of culture affect participants’ evaluations:
- Generational Culture. How do the workplace expectations and experiences of different generations affect participants’ purpose in attending training, and what they find relevant?
- National Culture. In country-specific school systems and professional development programs, how are thoughts organized, what is the mode of learning, and who learns from whom?
- Organizational Culture. How does the modus operandi of the organization influence participants’ belief that they will be able to find success by applying lessons learned?
Worst-case scenario: Your participant is simply having a bad day!
In building cultural competence, sorting out our culture-based expectations is both the process by which global skills are learned and the learning product itself. It’s an experience not to be limited to the training room, but applied to how we work daily. Ahead of your next training, schedule a short debrief as one step toward making the workplace a learning organization.
The above article was included in our Sept. 2016 intercultures e-newsletter.
Photo credit: Getty Images.