In a previous post, I talked about the advantages and disadvantages of peer assessment. In this post I want to broach the idea that using peer assessment as a summative tool may not be viable.
In a competitive academic landscape, where even half a percentage point can separate one grade boundary from the next, is it fair to let peers determine which side of a boundary a student ultimately falls on?
In a context where final grades can determine access to certain second- and third-year undergraduate courses, or to postgraduate courses, the difference between getting in and missing out can come down to less than half a percentage point. If untrained assessors contribute to the final summative grade, would a student who falls short of the desired boundary have a case against the institution for unfair practice? Consider what’s at stake: denial of access to a course could have serious ramifications for future life outcomes.
Summative assessment that includes a peer contribution usually allocates it only a minor weighting in the overall assessment regime. However, even a lowly 5% has consequences when the competition is tight.
The argument for peer summative assessment under these conditions may be that it mirrors the real world, where students will encounter times when peers grade the quality of their work. However, in the real world those peers will be experts in their respective fields, and so will be able to grade the work with validity. They will be as expert as the lecturer who is traditionally tasked with grading student work.
So does this mean that peer assessment just doesn’t work?
The answer is no. It can work, but I propose that it should only be used formatively. Peer assessment should take advantage of two facts: reading and observing peers’ work assists in the assimilation or accommodation of knowledge, and peers can help others by explaining concepts or advising on how to improve areas of work, sometimes with more concision than the expert. This is because the peer is closer to the student in terms of schema development, and may therefore not omit steps or sections of an explanation that the expert assumes are easily understood. The video below highlights this phenomenon.
The disadvantage of removing peer assessment from the summative arsenal, however, is that students may not take it seriously, or at least may not participate as vigorously as they would if they knew there were consequences for their grade.
But this problem is not unique to this particular pedagogy. Getting students to learn for the sake of learning is a perennial challenge, ubiquitous across all sectors of education worldwide. The answer lies in convincing students of the merits of the formative process: that participation is the key to learning, and that by involving themselves in peer assessment in the right spirit, their learning can improve significantly. The first port of call in this strategy is improving student metacognition about the process; the second is showing them data and evidence that it works.
I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow me on Twitter @edmerger