Musical chairs

Wasted rubric real estate

Imagine this fairly typical scenario: A course has four assessments. Three of them are written assessments. All three have marks awarded for general writing skills centered around the quality of the response. The skills might include structure, grammatical accuracy, well-written conclusions, spelling, use of in-text citations, correct referencing, inclusive language, logical planning, concision, etc. Writing contributes 10/50, 10/50 and 15/50 of the marks in the three written assessments respectively, as indicated in the marking rubrics. The three assessments are weighted 20%, 30% and 40% respectively, and the fourth assessment, a quiz, is worth the remaining 10%. I won’t bore you with the maths, but this effectively means that 22% of the course is dedicated to writing skills. That is quite a substantial weight.
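For those who do want the maths, here is a minimal sketch of the calculation, using the numbers from the scenario above: multiply each assessment’s writing fraction by its course weighting, then sum.

```python
# Writing marks as a fraction of each written assessment, paired with
# that assessment's weighting in the course (figures from the scenario above).
written_assessments = [
    (10 / 50, 0.20),  # written assessment 1: 10 of 50 marks for writing, worth 20%
    (10 / 50, 0.30),  # written assessment 2: 10 of 50 marks for writing, worth 30%
    (15 / 50, 0.40),  # written assessment 3: 15 of 50 marks for writing, worth 40%
]

# The quiz (the remaining 10%) awards no writing marks, so it contributes nothing.
writing_share = sum(fraction * weight for fraction, weight in written_assessments)

print(f"{writing_share:.0%}")  # -> 22%
```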

This weight then raises the question: is that what is actually intended? Is it stated in the intended learning outcomes?

A game of musical chairs

If the answer is no, then the focus needs some reconsideration. The percentage of marks awarded to writing skills in the rubrics will need to be redistributed to accurately reflect the intended outcomes of the course, and therefore the skills being assessed and graded. If the skills are meant to be included, then a new intended learning outcome should be constructed to ensure there is alignment between what is assessed and what is intended to be learnt – in other words, that the course components are constructively aligned.

If the answer is yes, then is the weighting accurate? Is it, as is likely the case, higher than anticipated? If so, the outcomes may need to be reconsidered, or the rubrics revised so that the intended outcome(s) are better represented in the assessments.

Identifying the more accurate weighting also has large implications for how much attention is given to the outcome’s development in the teaching and learning sequence. We have argued before that if a skill is a stated outcome, then it needs to be taught. And when the number is as high as 22%, the skill surely needs to be consciously developed if we are to validly claim that students have attained the outcome. If it hasn’t been taught and students don’t meet it, there is no way of knowing why.

AI and the redundancy of assessing writing

The other elephant in the room here is the influence of AI and the increasing redundancy of assessing writing skills. Using AI to streamline editing is already a common feature of working on a computer, and it is likely to become even more a part of how things are done digitally, whether we like it or not. Few could now argue that using AI for editing is a clear-cut academic integrity issue; the lines are increasingly blurred, mainly because we ourselves use it every day, and it will (and mostly already has) become an integrated tool in all word-processing software. Attributing any percentage of a grade to something that is almost entirely outsourced to gen-AI wastes valuable real estate on marking rubrics.

Using AI to edit and develop writing is surely never going to be an intended learning outcome, which means it shouldn’t be taking up assessment space. When 22% of a course is dedicated to it, the assurance of measuring the attainment of the other outcomes suffers as a result. We can safely say that there has been a constructed misalignment.

I’m Paul Moss. I’m a learning designer at The University of Adelaide. I’m on Twitter too.
