Previously, I have written about the ethics of using peer assessment summatively, asserting that the pedagogy is inherently unfair to students whose peer-assigned grades place them on the wrong side of a grade boundary. Such an outcome can be life-altering, and affected students would have just cause to challenge a peer-assessed grade if they chose. Since then, however, I have been thinking a great deal about the issue, and I now believe the argument should focus more on the opportunity summative peer assessment affords than on its cost.
Standing tall and proud among the 'opportunities' of summative peer assessment is its potential to reduce overall marking load. For all the rhetoric and evidence in education about the importance of quality assessment and quality feedback, the reality is that the funds essential to facilitate quality marking are rarely allotted to the cause. There simply isn't enough money to buy teachers/tutors/academics the time to mark and give feedback as diligently as the rhetoric demands. Assigning a relatively small percentage of a course's assessment regime to peer assessment would mean less marking for the teacher/tutor/academic. The question is whether this opportunity to save valuable time is worth the cost in fairness for those competing for grades.
Let's look more closely at the cost. Yes, the unfairness is a very real dilemma, but perhaps not for the majority of learners. Most students' scores won't be hanging on the precipice of a grade boundary, so the peer assessment won't in fact be a deciding factor in their overall grade. For the majority of students, then, there won't be any challenges that take up valuable time.
This means that the cost can actually be absorbed by the assessor, who will have time to re-mark the work of the few who challenge the validity of their peer grade, which assuages the unfairness argument. Even when we include the necessary moderation of a sample of peer judgements, teachers/tutors/academics still come out well in front in terms of time saved by this pedagogy. The saving is of course relative to the number of students to be marked, and for very large classes of 200+ learners it would be significant.
It's worth noting, too, that there are likely to be fewer challenges to the validity of peer judgement if the assessment has been set up correctly. This means providing learners with sufficient training and practice opportunities to become competent at grading and giving feedback to their peers, rather than simply expecting them to engage successfully in the pedagogy without acquiring the necessary knowledge. Trained learners will be content that their peers have done a good enough job of marking their work, more so if markers are assigned randomly and can grade the work anonymously, all crucial aspects of successful peer assessment (Wanner & Palmer 2018).
Wanner and Palmer add to a large body of evidence showing that peer assessment can only be successful when learners are explicitly trained in the pedagogy and have practised it. A likely reason is that only then do they understand its benefit, and therefore utilise it in its most powerful context: as a formative learning tool. Achieving such an outcome, however, requires expert planning and curriculum design, one that affords time not only for the teaching, practising and reinforcement of the skill, but also for training teachers/tutors/academics in applying the pedagogy. It also means incorporating the associated knowledge into retrieval practice, so that when students are asked to peer assess they can draw on their knowledge of the processes involved quickly and automatically.

The reality, however, is that this level of planning and design is not a standard requirement for including peer assessment in a course's assessment strategy. Ordinarily, then, for peer assessment to work, buy-in is not intrinsically motivated but has to be induced in a reward/punishment context: unschooled learners won't see the point in taking the time to read their peers' work and give feedback on it if they know their investment of effort won't affect their final grade. Making it summative improves learner buy-in, but it is a bitterly ironic advantage.
It may not seem palatable that a justification for using peer assessment is to save workload, but the pragmatics of higher education almost demand it.
Thomas Wanner & Edward Palmer (2018) Formative self- and peer assessment for improved student learning: the crucial factors of design, teacher participation and feedback. Assessment & Evaluation in Higher Education, 43:7, 1032-1047. DOI: 10.1080/02602938.2018.1427698
I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow me on Twitter @edmerger