I was asked today the following question from a learning professional in a large company:
It will come as no surprise that we create a great deal of mandatory/regulatory-required eLearning here. All of these eLearning interventions have a final assessment that the learner must pass at 80% to be marked as complete, in addition to viewing all the course content. The question is about feedback on those assessment questions.
- One faction says no feedback at all, just a score at the end and the opportunity to revisit any section of the course before retaking the assessment.
- Another faction says to tell them correct or incorrect after they submit their answer for each question.
- And a third faction argues that we should give them detailed feedback beyond just correct/incorrect for each question.
Which approach do you recommend?
Here is what I wrote in response:
It all depends on what you’re trying to accomplish…
If this is a high-stakes assessment, you may want to protect the integrity of your questions. In that case, you'd have a large pool of questions and you'd protect the answer choices by not divulging them. You might even proctor the assessments -- for example, by having respondents turn on their web cameras and submit their video along with the test results. You also wouldn't give feedback, because you'd be concerned that test-takers would share the questions and answers.
If this is largely a test to give feedback to the learners -- and to support their remembering and performance -- you'd not only give them detailed feedback, but you'd also retest them after a few days or more to reinforce their learning. You might even follow up to see how well they've been able to apply what they've learned on the job.
We can imagine a continuum between these two points where you might seek a balance between a focus on learning and a focus on assessment.
This may be a question for the lawyers, not just for us as learning professionals. If these courses are provided to meet legal requirements, what holds up in the legal domain may matter most. Personally, I think the law lags behind learning science. In my conversations with clients over many years, I've found that lawyers and regulators often recommend learning designs and assessments that do NOT make sense from a learning standpoint. For example, lawyers tell companies that teaching a compliance topic once a year is sufficient -- when we know that people forget and may need to be reminded.
In the learning-assessment domain, lawyers and regulators may say that it is acceptable to provide a quiz with no feedback. Their focus is on having a defensible assessment, and this may be the advice you should follow given current laws and regulations. From a learning standpoint, however, it seems ultimately indefensible. Couldn't a litigant argue that the organization did NOT do everything it could to support the employee in learning -- precisely because it withheld feedback on quiz questions? That seems a pretty straightforward argument -- and one I would testify to in a court of law (if I were asked).
By the way, how do you know 80% is the right cutoff? Most organizations pick an arbitrary cutoff point, and an arbitrary cutoff means you don't really know what passing -- or failing -- signifies.
Also, are your questions good questions? Do they ask people to make decisions set in realistic scenarios? Do they provide plausible answer choices (even for incorrect choices)? Are they focused on high-priority information?
Do the questions and the cutoff point truly differentiate between competence and lack of competence?
Are the questions asked after a substantial delay -- so that you know you are measuring the learners' ability to remember?
Bottom line: Decision-making around learning assessments is more complicated than it looks.
Note: I am available to help organizations sort this out... though, as you can probably tell from my answer here, there are no clear recipes. It comes down to judgment and goals.
If your goal is learning, you probably should provide feedback and provide a delayed follow-up test. You should also use realistic scenario-based questions, not low-level knowledge questions.
If your goal is assessment, you probably should create a large pool of questions, proctor the testing, and withhold feedback.