‘Happy sheets’ are so named because, by the end of a training session (if it’s been done right), participants are feeling good about things. When people have been asked to comment on what they enjoyed about the event, just before they leave the room, it’s no surprise to see praise and enthusiastic comments.
‘I learned sooo much!’
‘It was such fun!’
(What if they hated it? More on that later …)
Theories, frameworks and texts on training, learning and professional development refer to the feedback taken at the end of a training course as sitting at the reaction level – hence ‘reactionnaires’ – because it’s well and truly in the moment. And that’s about the only thing that’s great about feedback given at this point. It’s fresh – but does it satisfy?
No, according to Dr Jim Kirkpatrick, whose father Donald first came up with a model for evaluating training effectiveness in the 1950s. Dr Jim, whilst bringing things up to date, points out that too many happy sheets put the participant in the role of ‘armchair critic’. A barrage of questions seeks to find out what they liked (or disliked) about the trainer, content, materials, venue and catering. Of course it’s sensible to take some form of quality check at this stage – but there’s more information that can and should be obtained. I was fortunate to meet Dr Jim at a conference round table, where a few of us discussed how we could improve evaluation (including happy sheets), and what we could do beyond them to track the effectiveness of our efforts. I came away with ideas about what needs to stay the same, and what needs to change.
What needs to stay the same?
1. Get feedback before learners leave the room
Anyone who’s had the thankless task of sidling up to a colleague and whining ‘Can you please complete my online post-training survey?’ will know all too well how fed up everyone gets at the very idea. The moment has passed. Yet online survey tools are great at organising the data. Try this British Psychological Society tactic: at one of their member events, we were all emailed the link to complete a feedback survey during the session – and were asked by the organiser to complete it before we left the room, which most of us did. If it’s a short session with a large group, try a real-time interaction tool, such as Padlet. Prefer to stick with pen and paper? By all means do this – but ditch the long forms.
2. Elicit learning needs
When people are in learning mode, it’s a great opportunity to find out what else they need to learn. You could offer a menu for them to tick (quick and simple – we do this in Zoomly’s bite-sized workshops), or ask ‘What will help you perform better in your job?’ If you opt for the latter, brace yourself for a few choice comments – but hey, it’s all feedback!
3. Check for relevance
Dr Jim was very keen on this point; it seemed a bit beyond basic to me, but he reminded us that far too many people show up for training with far too little sense of how the learning is relevant to their job. (Blimey.) If a participant rates the training’s relevance at 1–5 rather than 6–10 on a ten-point scale, that’s worth following up. Did the training stray from the stated learning objectives? Did the participant misread the programme, or simply book in because ‘it looked interesting’?
What needs to change?
4. Get beyond guessing
First up, we need to ditch happy sheet questions about how learners will apply what they’ve learned. At best, they’re being optimistic and at worst they’re just guessing. The ways in which learning can be applied back at the job should be identified before the training event, preferably in conversation between the participant and their manager, and supported afterwards. The end of the session is a little late to be figuring out the context for what’s been covered. Dr Jim advised us to instead find out the extent to which participants are committed to, and confident about, applying what they’ve learned. I’ve found this to be revealing since incorporating it into Zoomly’s feedback – and so have clients.
5. Keep it short and sweet
Long forms with heaps of questions and a dozen free text boxes can take ages to complete – and sap the learner’s enthusiasm. Mix up the format: Likert scales can be marked, words can be circled, lists can be ticked and small boxes can accommodate free text.
6. Monitor the metrics that matter
A speaker at an L&D conference memorably taunted the audience about ‘measuring the meaningless’: attendee numbers, how highly rated the venue was, and whether or not participants enjoyed the training. Brave or bad? You decide. Confession time: before my encounter with Dr Jim, Zoomly’s feedback cards would ask for a rating. Since then, they’ve been updated in line with Dr Jim’s advice; though we’ve kept one free text box for ‘anything else?’ just in case we have a would-be armchair critic. But initial reactions are just that; we need to take a wider view to get the metrics that matter – for a provocation, see this Human Capital Institute piece. In a previous role, a training event I’d organised came in for criticism on many (un)happy sheets. However, the senior sponsor had set a clear business metric: a simple, measurable expectation that they announced before the event. This was surpassed within a few months, more than repaying the initial investment. The ‘reactionnaire’ captured only in-the-moment reactions – the metrics that mattered told the more meaningful story.
You may find this blog post useful: ‘ROI for training – it’s not rocket science’