Discussions around rubrics are not new. Over the past few years, the pedagogy of rubrics has enjoyed acclaim and notoriety in equal measure. A paper by Sadler (2009) is cogent in its criticism and clear in its warning to higher education assessment designers, and makes a strong case for using comparative judgement as a marking/grading approach instead. Advocates of comparative judgement argue that it increases marking reliability because ‘people are far more reliable when comparing one thing with another than when making absolute judgements’ (Jones and Alcock, 2012).
But in this post, I argue that having higher education academics design rubrics has other benefits that potentially outweigh the reliability issues that plague outcome assessment. Two are of particular note:
- the first is the way rubric pedagogy facilitates the conversation between course learning outcomes and assessment outcomes, a conversation that enables even the most inexperienced educators to arrive at a sound standard of learning design.
- the second is that comparative judgement offers no feedback to students, whereas the rubric at least provides a student with an indication of why they have been graded as they have. In the video I also advocate an innovative approach to significantly reducing the amount of written comments (and thus time spent marking) in an average response: the use of a coded rubric.
Thanks to the fabulous Natalia Zarina from Adelaide University for assisting in this thinking.
Sadler, D.R. (2009). Indeterminacy in the use of preset criteria for assessment and grading. Assessment & Evaluation in Higher Education, 34(2), 159–179.
I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow me on Twitter @edmerger