Can teaching to the test be good for learning?

Yes!

But only with one very large caveat: constructive alignment must be correctly employed!

This is the 4th post in a series on constructive alignment. The first is here.

Teaching to the test

It all has to do with what is called 'backwash'. Backwash is a term coined by Elton (1992, as cited in Biggs and Tang, 2011) to refer to how assessment affects student learning. In short, the assessment drives how students learn. Despite its negative connotation, backwash is, in and of itself, neutral. 'Positive' backwash occurs when teachers embed the outcomes within the test, whereas 'negative' backwash occurs when the learner can game the test, or inadvertently pass it, without addressing the outcomes.

Positive backwash

Students learn from what they think they will be tested on, which drives and determines ‘their’ curriculum (Ramsden, 1992: 187). When assessments are purposefully aligned to the outcomes, learners will engage in positive backwash to prepare and succeed in the test. In this context, teaching to the things that are going to be in the assessment makes total sense.

Biggs and Tang (2011: 198) illustrate in the diagram below that it doesn't matter what the starting point is for students – as long as the outcomes are in their trajectory.

In other words, learners reverse-engineer their assessments to work out what they need to know. If the assessments insist on demonstrations of the outcomes, and learners can prepare for the test by drawing on teaching and learning activities tied to those outcomes, they will be able to achieve the outcomes.

But this ostensibly simple alignment sequence can be compromised quite easily.

Negative backwash

Negative backwash occurs when the design and grading of assessments result in students engaging with the intended learning outcomes only superficially, if at all. This manifests in two contexts:

Assessment types

If the assessment type becomes a barrier to demonstrating outcomes, misalignment results. For example, MCQs are often used to assess higher-order outcomes, but in reality it is very difficult (though still possible) to design questions that sufficiently test those outcomes. Assessments that test communication, group work or other desirable graduate qualities can easily be misaligned if these skills are treated as assumed knowledge, without clarity on where those skills are actually taught in the course or the program, if at all.

The MCQ test is also open to being gamed. If an answer is not known, students will try to eliminate one or more of the response alternatives and then guess. This is exacerbated when distractors are poorly written and allow students to guess with a high probability of being correct. Sometimes the answer is even obvious because of its length. Perhaps more worryingly, having the options in front of the student can trigger recognition rather than recall, and recognition is a poor proxy for outcome attainment: it does not necessarily indicate the deep learning that would allow the learner to use the knowledge in future contexts.

Grading approaches

A more subtle negative backwash occurs when assessment grading is designed in a way that allows a learner to aggregate enough points to pass despite clearly not being able to engage with the central tenets of the assessment.

Sometimes the grading scheme inadvertently facilitates the law of diminishing returns, where more marks are awarded for the first half of a question than the second half. This signals to students that the questions can be gamed: by attempting every question to a certain depth, they will secure more points than by fully completing only some questions. If the learner can also follow through on the questions they genuinely do know well, they can potentially pass overall.
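A minimal numeric sketch of this gaming strategy (the question counts, mark splits and pass threshold here are all hypothetical, chosen only to illustrate the arithmetic):

```python
# Hypothetical exam: 5 questions, each worth 10 marks, but the easier
# first half of each question carries 7 marks and the harder second
# half only 3. Pass mark is 50% of the 50 available marks.

FIRST_HALF, SECOND_HALF = 7, 3
NUM_QUESTIONS = 5
PASS_MARK = 25

# Strategy A: attempt only the easy first half of every question.
strategy_a = NUM_QUESTIONS * FIRST_HALF

# Strategy B: fully complete 3 of the 5 questions, skip the rest.
strategy_b = 3 * (FIRST_HALF + SECOND_HALF)

print(strategy_a)  # 35 – passes comfortably
print(strategy_b)  # 30 – also passes, with more demonstrated depth
```

Strategy A passes with a higher score than Strategy B, even though the student never demonstrates the harder (often higher-order) second halves of any question.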

Not all outcomes are likely to be equal, so weighting rubric criteria precisely is an important skill. The overall outcome weightings should be reflected in the rubric criteria aligned with those outcomes. For example, if 30% of an assessment rubric is awarded to writing and research skills, that weight MUST be in proportion to how much of the overall course is dedicated to those outcomes. When this weighting isn't precise, students can accumulate enough marks across the rubric despite not passing the main outcomes of the assessment. And if test scores become proxies for overall course outcome attainment, students could be 'successfully' moving through a program without actually attaining the outcomes at all.
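The failure mode above can be shown with a small weighted-rubric calculation (the criteria, weights, scores and pass threshold are hypothetical, purely for illustration):

```python
# Hypothetical rubric: 70% of marks for the main disciplinary outcomes,
# 30% for writing and research skills. Each criterion scored out of 100.

rubric = {
    "main_outcomes":    {"weight": 0.70, "score": 40},  # fails the core outcome
    "writing_research": {"weight": 0.30, "score": 95},  # very polished writing
}

total = sum(c["weight"] * c["score"] for c in rubric.values())
print(total)  # 0.70*40 + 0.30*95 = 56.5

PASS_MARK = 50
print(total >= PASS_MARK)  # True – the student passes overall
```

The student scores only 40/100 on the outcomes the assessment was meant to certify, yet a strong performance on a secondary criterion carries them over the pass line.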

Precision pays off

Great design almost always possesses a fascinating irony: it appears effortlessly simple on the surface, yet is often the result of considerable effort and creativity beneath. The simplicity of constructive alignment fits this archetype, and it duly suffers when that simplicity is mistaken for a licence for casual implementation. Designing courses and programs of study is an extremely complex endeavour; the logic of constructive alignment provides us with a framework to succeed, but great care must be taken in its execution. This series of blogs aims to give you the confidence to do so.

The next post explores further how a higher-order outcome such as critical thinking can be aligned with assessment and teaching activities.

References

Biggs, J. B., & Tang, C. S. (2011). Teaching for quality learning at university: What the student does (4th ed.). McGraw-Hill/Society for Research into Higher Education.

Ramsden, P. (1992). Learning to teach in higher education. London: Routledge.

Cover image: Watterson, B. (n.d.). Calvin and Hobbes [Comic strip]. Retrieved from https://www.gocomics.com/calvinandhobbes/
