Making feedback worth everyone’s time

Tutor feedback and student learning should be inseparable. If they become uncoupled, the formative aspect of assessment is lost.

(Orsmond et al., 2000)

Well-timed and well-directed feedback is one of the most effective teaching and learning strategies (Hattie and Timperley, 2007). Sadler (1989) argues that the power of feedback lies in closing the gap between where students are in their learning and where they are aiming to be. Hattie and Timperley suggest that feedback is directed at four levels, and that of the four, feedback is most effective when it gives students information about how to improve the processes of learning involved in a task: for example, detailing how a learner could improve their writing by using more adjectives, or by drawing on more case studies to support a thesis. This contrasts with feedback on the accuracy of the answer, or feedback so specific to the task that it does not generalise to other contexts, and may go some way to explaining Black and Wiliam's (1998) finding that written comments are significantly more effective than grades as feedback.

However, a raft of research cited by Gilbert and McNeill (2013) suggests that learners see little value in summative assessment feedback and rarely engage with it, making the effort spent by tutors and markers redundant: ‘It’s too late to do anything about it now anyway so why bother’.

Solutions that don’t cost the earth

Carless et al. (2011) suggest that students must become more involved if feedback practices are to be more sustainable and effective. A traditional strategy such as a tutor writing comments on final versions of assignments misses an opportunity to encourage a dialogic feedback cycle, in which students use feedback to drive self-regulatory behaviours. Engaging learners in peer feedback, as well as in dialogue with tutors about what constitutes quality performance, increases the likelihood that feedback will be acted on.

Here are some strategies to consider, some more pragmatic than others.

  • Two-stage assessment design

Constructive alignment ensures that all course activities, including assessments, are formative in nature. The final assessment task should include criteria that demand mastery of skills and understandings introduced in earlier assessments. The feedback provided on these introductory assessments is therefore vital for learners, and only the foolish would ignore suggestions for improving their understanding of the processes required. The advantage of this approach is not only consistent alignment but that it requires no extra marking, of drafts for example.

Another possible application of this is redeemable assessment, where students can forgo earlier assessment grades and have the final assessment carry the greater weighting. This gives students a tremendous incentive to use feedback to improve their overall grade. Again, no extra marking is required. Some courses at ANU use this approach.

  • Withholding grades

Klenowski (1995) warns that a summative assessment grade diverts attention from the fundamental judgements and the criteria for making them, and may as a result actually be counterproductive for formative purposes. Taras (2001) proposes withholding grades from students until the feedback has been read and absorbed. The practical issue with this approach, however, is that the marker needs more time to ensure the feedback has actually been read. This begins to move into the ideal but impractical territory of marking drafts.

  • Dialogic discussion

‘I videotape each student for five minutes … We show the video right after the presentation … Usually, I get them to reflect first, ‘How do you think you did?’ And then we give them feedback. I think they find it phenomenally useful because hardly anybody does this … They are able to give insightful analysis on their own performance because pictures don’t lie … At first, they get a bit embarrassed but I find they are objective. That’s why I think it’s very effective, because they see the truth.’ (Carless et al., 2011).

In this approach, a dedicated time, possibly replacing one or two of the weekly tutorials and run in a parent-teacher interview format, lets students become involved in the marking process. They self-mark their work against a rubric before the tutorial, then explain their decisions in a face-to-face discussion with the tutor. The tutor then reveals their own marks for the work, and a conversation follows. An extra rubric criterion covers the reflection and the learner’s level of engagement during the discussion, which incentivises learners to develop their own feedback before evaluating that of the markers. The other large advantage is that, in the age of AI and academic misconduct, this conversation serves to triangulate evidence of the authenticity of each learner’s assessment.

  • Exemplars

‘In short, when teachers and students analyse exemplars together a number of important goals can be achieved: clarifying expectations and standards; enabling students to develop an evolving sense of what good work looks like; enhancing their capacities to make sound evaluative judgments; and potentially improving learning outcomes.’ (Carless et al., 2018). Whilst exemplars effectively serve as analogies, helping learners develop the schema required to produce the assessment, the timing of providing them matters, so as to avoid flagrant copying from the model. As with the dialogic discussion strategy, exemplars could be provided once the learner has submitted the assessment. The learner’s evaluation of why the exemplar achieved the grade it did, compared with their own work, would form a second component of the assessment. This component is again carried out in conversation with the tutor to ensure authenticity, with the student presented with an exemplar one grade above theirs. This stipulation is important: it keeps the cognitive load of identifying improvements manageable. Their response forms their grade on an additional rubric criterion.

References

Black, P. & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7–74.

Carless, D., Salter, D., Yang, M. & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36(4), 395–407.

Carless, D., Chan, K. K. H., To, J., Lo, M. & Barrett, E. (2018). Developing students’ capacities for evaluative judgement through analysing exemplars. In D. Boud, R. Ajjawi, P. Dawson & J. Tai (Eds), Developing Evaluative Judgement in Higher Education: Assessment for knowing and producing quality work. London: Routledge.

Hattie, J. & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487

Gilbert, S. & McNeill, L. (2013). [online] Country Health South Australia – Clinical Development. Available at: https://ojs.unisa.edu.au/index.php/ergo/article/view/938/667

Orsmond, P., Merry, S. & Reiling, K. (2000). The use of student derived marking criteria in peer and self-assessment. Assessment & Evaluation in Higher Education, 25(1), 21–38.

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119–144.

I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow me on Twitter @edmerger
