Poor old summative assessment. For so long now you’ve been the whipping boy of education, the outlet for frustrated teachers, the cause of ire for students. You’ve been misunderstood, had poor instructional design conflated with you, been accused of destroying lives. But you’ve survived, recently gained in stature, and are slowly, but surely, beginning to be perceived in your rightful place as the only true way of assessing what has been taught.
It’s only anecdotal, but it’s probably not a wild assertion that summative assessment has acquired a bad name. To wave the summative assessment banner nowadays you’d have to be rather brave, or mad. In an age of ‘enlightened’ pedagogy, summative assessment is all too easily equated with draconian, out-of-date practice, and deemed antithetical to modern teaching and learning. To extol its benefits is tantamount to poking the bear. Well, maybe I am mad, but I currently believe summative assessment to be an important tool in gauging how my students can piece together everything I’ve been teaching them.
I understand teachers’ objections. Ostensibly, summative assessment equals teaching to the test. Cue abhorrence, and quite rightly, for it is the devil’s work indeed, culminating in shallow and narrow learning, robotic, lifeless engagement, and factory-like production. Add to this the consequent, inexorable data accumulation, leading to lurid league tables, and it’s easy to see why teachers, and a society interested in equality, have turned their backs on standardised testing. But the conflation, whilst pervasive, is inaccurate, because, to put it plainly, the worst way to succeed in summative testing is to teach to the test.
This statement seems somewhat provocative. How can teaching to the test be an inefficient way of learning to succeed in the test? It’s counter-intuitive, surely? Well, let’s turn to some analogies, a highly effective teaching technique, to explain the apparent anomaly.
If your ultimate goal is to lift a rather heavy object, simply and repeatedly trying to lift it will not result in success, as your muscles are not sufficiently developed to lift such a weight. Learning to play Mozart in front of a packed auditorium is not best achieved by playing Mozart to a packed auditorium, unless both you and the crowd are into some form of sadism, as your fingers couldn’t possibly move as required. Running a marathon is not best trained for by running marathons, as your lack of stamina would repeatedly result in exhaustion and non-completion. Metaphorically (and literally), you can’t jump in at the deep end and learn to swim. Of course, you may eventually, and some do, including some very successful humans, but it is a highly inefficient strategy, and most will metaphorically drown. The more effective approach is to isolate the processes that lead to the overall performance, practise each repeatedly, and incrementally make the tasks more difficult once each is sufficiently mastered. This will serve to slowly but surely build the necessary strength, skill, and knowledge required to perform the summative act.
Learning is no different.
The whole is to the sum of its parts as summative assessment is to … can you guess what? To achieve success in a summative test, like writing a sustained analysis of a text, or building a brick wall (insert a summative assessment from your subject), students need to work deliberately and discretely on the smaller skills that go towards making up that final skill. They need to develop the parts that make the whole. Trying to improve students’ performance on summative tests by simply giving them those tests to practise is like asking them to run before they can walk. And this is why teaching to the test is a waste of time, and the reason why teachers must free summative testing from such shackles. Without such extrication, its power and potential as a learning tool will be forever lost.
Falling into the ‘teaching to the test’ trap is made all the easier by the seeming lack of interconnectedness between formative tasks and their summative parents. Often, the formative tasks don’t resemble the exam, and focusing on them can feel like a missed opportunity to make progress. When we consider how much pressure there is on teachers to achieve targets, the fear is very understandable. However, the only way to alleviate fear is through greater understanding, and because all good teachers are good learners, the adjustment becomes less a leap of faith and more an assertion of sensible pedagogy.
What it looks like in action
Take as an example trying to improve the speed of student writing in a timed assessment. The seemingly correct strategy would be to provide the students with a task that looks similar to the desired final product, say a 45-minute written analysis. The problem is that students who can’t sustain writing for 45 minutes, when given the 45-minute task, capitulate at the same point in the task every time. They get exhausted. The better approach is to build their stamina via significantly shorter tasks. For example, I provide my students with mini extracts and set a 7-minute time limit on the writing. Initially, the expectation is that they will discuss one technique used by the author in that time. After several such tasks, once the students are comfortable with that amount of writing, I expect two techniques. I then increase the time to 12 minutes, but require three techniques to be discussed, and so on. As the time slowly increases, students are comfortably able to sustain their writing: their concentration has been trained; their reading skills have been trained; even their hand muscles have been trained.
Of course, the pedagogy is not restricted to English or writing tasks. It doesn’t matter what your subject is: building the component skills is essential if you want students to succeed in a summative assessment of your course.
More support for summative
Another reason why summative assessment is king compared to its lowly relative is that formative assessment can often become simply a proxy for learning, with teachers and students believing they are learning because they are doing work and answering questions. Carl Hendrick, Adam Boxer, and Dawn Cox suggest that if students aren’t able to reproduce the learning at a later date (a process that would be summative in nature), then it could be argued that they haven’t really learnt the material. This has enormous implications for single-lesson observations carried out in the hope of assessing student progress. It’s really only in the summative test that real progress can be measured, as students are tested on a range of skills taken from the domain of learning undertaken in the course, and have to have committed those skills to the level of automaticity. This is why Ofsted is so reliant on exam results above all else, and why it can claim, without conflict of interest, that how schools achieve those results is irrelevant to it.
The above insight renders delusory the common insistence on using a summative test at the beginning of a course to establish some baseline data. Often presented in the guise of ‘seeing where the students are at’, the results tell the teacher nothing at all about what the students know, because the students couldn’t possibly produce what the summative assessment is seeking: they haven’t taken the course yet. The result is an uninformed teacher, who can’t use the test formatively because the reasons for the lack of performance could be numerous and are indistinct, and a student deflated and demoralised before you’ve even taught them anything. Those old days of shooting myself in the foot! What you are better off doing is issuing multiple smaller tests. These would seek current student understanding of the key skills you know will be needed to build overall competency in your course, but in isolation. You can then use the results formatively, and develop a scheme of work to build the skills necessary to move towards the exam at the end, leaving the summative assessment to, well, the end.
The incredible Making Good Progress by Daisy Christodoulou articulates the argument far more succinctly than I do, and, without any shadow of a doubt, is a must-read for all involved in the assessment of students.
A final thought is that the quality of summative tests can have a large bearing on the claim made in the title of this article, but that is best left for another post. For now, I’ll leave you with the repeated utterance: Summative assessment is king!