Doing less to do more
Key takeaways

  • Add an oral assessment component to written tasks
  • Don’t spend time marking the written component, only the Q&A

In all my time as a teacher and learning designer, the resounding reason educators give for not improving their courses is that they don't have time. But with AI now entering the education space as a major disruptor, teachers and academics MUST redesign assessments to avoid academic misconduct. Not only does the redesign itself take time, but one of the most widely advocated changes is to include interactive oral assessments, or Vivas, which add significantly to the marking workload. To the already frustratingly time-poor educator, this news is a thorn in the side.

It is simply untenable to expect educators to take on more workload. Something has to give.

I recently attended a wonderfully useful workshop on Assessment and AI, run by Edward Palmer from the University of Adelaide and Jason Lodge from the University of Queensland. Participants were tasked with redesigning assessments to prevent academic misconduct, with the strong caveat that any changes had to be practical, including in their workload and consequent financial implications. Our group was asked to rework a group presentation in which students presented the findings of a written report. We decided that adding a Q&A section to the presentation would help us gauge whether the students could speak to what they had presented in the written report (and even in the presentation itself, which could also have been produced using AI). To compensate for the extra time the Q&A would take for each group, we decided to reduce the report's word count to save time marking. But then Ed said something that changed my whole way of thinking.

Changing mindsets

One of the issues we have with generative AI is that we can't be sure a student has written what is submitted. The ease of access to generative AI, and the speed at which its sophistication is improving, will make it practically impossible to detect misconduct by a student who is savvy in its use. So Ed asked: rather than spend enormous amounts of time trying to mitigate the use of AI, why not simply accept its use in generating the report, make the report a formative component of the assessment, and dedicate all of the grade weighting to the Q&A session?

The enormous amount of time spent marking the written report would instead be transferred to the more authentic assessment, the Viva. It is a solution that achieves workload balance: doing less to do more.

It is a really interesting position. Yes, it raises questions about the validity of the Q&A, in terms of how well students are trained for such a task, and of course it is not an ideal solution for assessments whose outcomes depend on strong writing ability, such as essays in the humanities. Other considerations would be ensuring equity in marking through a well-designed rubric, training markers to ask the right types of questions to tease out understanding, and providing appropriate adjustments for neurodiverse students and others eligible for an alternative to this type of assessment.

Nevertheless, it is a provocative idea, and certainly a way to provide greater assurance of the authenticity of a student's work. It also goes some way towards helping students develop the much-needed graduate attribute of communication.

In future posts, I will investigate some of these challenges.

I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow me on Twitter @edmerger
