This is the second post in a series on constructed misalignment and an approach to mitigating it. The first post highlighted the problem of using an aggregated assessment score on a 100-point scale as a proxy for a learner attaining the stated outcomes of a course. This post explores the six questions raised there, characterising some of the problems and detailing why they weaken confidence in current practice for aligning outcomes with assessments.
1. Are the outcomes equally important? If yes, are they equally weighted?
If the answer is yes, then it should be explicit how these outcomes are spread across the assessment design. Naturally, an equal proportion of time should also be allotted to teaching and learning each outcome. I touch on the challenges of this here, and we'll explore them further in the post on the design of outcomes later in this series. But consider: if a course has 4 or more outcomes, as is typically the case, and 3 or 4 assessments whose weightings likely increase towards the back half of the course, then planning an equal distribution of outcomes is more complex than it appears on the surface. We have never seen this explicitly done, written about, or promoted.
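To illustrate how non-trivial this bookkeeping is, here is a minimal sketch. The weightings and outcome shares are entirely hypothetical (they are not from this post or any real course); the point is that once assessment weightings are back-loaded, the per-assessment share of each outcome needed to keep all four outcomes equally weighted overall is far from obvious.

```python
# Hypothetical course: 4 outcomes (CLO1-CLO4), 4 assessments whose overall
# weightings increase across the course (10%, 20%, 30%, 40%).
# Each assessment is paired with the share of its weight given to each
# outcome; the shares in each mapping sum to 1.
from fractions import Fraction as F

assessments = [
    (F(10, 100), {"CLO1": F(1)}),
    (F(20, 100), {"CLO1": F(3, 4), "CLO2": F(1, 4)}),
    (F(30, 100), {"CLO2": F(2, 3), "CLO3": F(1, 3)}),
    (F(40, 100), {"CLO3": F(3, 8), "CLO4": F(5, 8)}),
]

# Aggregate each outcome's effective weight across the whole course.
totals = {}
for weight, shares in assessments:
    for clo, share in shares.items():
        totals[clo] = totals.get(clo, F(0)) + weight * share

for clo in sorted(totals):
    print(clo, totals[clo])  # each outcome ends up at exactly 1/4
```

Note the awkward shares (3/4, 2/3, 3/8) required to make the distribution come out even: equal weighting of outcomes under unequal assessment weightings has to be deliberately engineered, which is exactly the planning the post argues is rarely made explicit.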
As an aside, this raises an important question: is it OK if each outcome is only measured once? Does that provide enough validity to confidently state that a student has attained the outcome, given that a domain of knowledge typically far exceeds what is assessed of it? Looking more closely at where that outcome was measured, is the assessment design robust enough to show that a genuine measurement occurred? In other words, could that assessment be audited to isolate where the outcome is aligned and to check whether it is sufficiently doing what it claims to do? The next step is then to check whether there is sufficient teaching in the course for that outcome to be attained, and whether a range of learning related to that outcome, from Pass to HD, is available.
If the answer is no, then has the ratio been clearly indicated so that it can be applied to the assessment design, and therefore to the teaching sequence too? In what order are the outcomes then applied to the assessments? Is one outcome almost subsumed by the others, and so must sequentially come first? If so, is the plan clear to students, so they know that even though the ratio is uneven, difficulty with an earlier outcome may significantly affect their attainment of subsequent ones?
Does the learner need to pass all of the CLOs to pass the course? This raises questions of whether some of the skills are important enough to become assessed as a hurdle. And if so, does that hurdle need to be its own outcome?
When outcomes are aligned to assessments but there isn’t sufficient clarity around the comparative importance of the outcomes, it is difficult to suggest that the assessments are proving the outcomes are being met.
2. Are the skills that sit inside an outcome equally important?
This carries significant implications for a program and for ensuring assurance of learning. If success in assessments serves as a proxy for outcome attainment, what happens when a critical part of an outcome (important, or assumed knowledge for the next course, but not hurdle-worthy) is not passed? Is it acceptable if a learner passes a course but lacks some of the requisite skills for the next stage of the program? Is this data captured and highlighted somewhere, so the next teacher in the sequence can adjust or be aware of potential knowledge gaps, especially if the learners who lack a specific skill are in the majority? We can't very well place hurdles on every aspect of an outcome, so what mechanisms are put in place to help close these gaps before the next course begins?
When outcomes are aligned to assessments but there isn’t sufficient clarity around the components that make up an outcome, it is difficult to suggest that the assessments are proving the outcomes are being met.
3. If it is an outcome does it have to be assessed?
The answer is yes. But a more important question is: does it then have to be taught? This is pertinent in the many contexts where an outcome is a soft skill, such as communication or group work. Many courses include an aspect of communication as an outcome yet don't actually teach the skills of communication; instead they assess a product or artifact that relied on communication in some form (a presentation, interview, oral viva, etc.). This usually happens because the learner is assumed to already have the skill as a prerequisite. While this may be true for some students, we shouldn't make the assumption, as it has ethical and pedagogical consequences for those who lack these prerequisites. When the assumption is made, an outcome is being assessed without assurance that the associated skills have been taught.
The converse is a skill being assessed that is not related to an outcome. Communication is again a good example here: sometimes significant proportions of assessments are dedicated to writing and structure, grammar, referencing, and so on. If these elements are not part of an outcome, should they form part of the assessment? The answer is probably no, as they take up space that should be dedicated to assessing the outcomes.
When outcomes are aligned to assessments but there isn’t sufficient clarity around the purpose of the outcomes, it is difficult to suggest that the assessments are proving the outcomes are being met.
4. Is it feasible that every outcome is assigned/aligned to every assessment?
This is relatively common practice, especially in non-science courses, but unless it is made clear how much of each outcome is aligned with each assessment, it is likely to be untenable. When outcomes are assigned to an assessment, it must be assumed, unless otherwise specified, that each outcome is being assessed in its entirety. This is neither practicable nor logical when assessments are spread across a course. An early assessment ostensibly aligned to a number of outcomes, for example, implies that each of those outcomes has been 'taught out' by that point in the course. An assessment in any week not near the end of the course can probably not claim that; after all, what is being taught after the outcome has been assessed? If it is simply practice and consolidation after all the content has been taught, then the assessment of the outcome is likely to be of a higher order, relating to a more developed understanding and application of the outcome, and thus to a different aspect of it.
Another important question to consider: what happens when an assessment is a hurdle but has only a few of the outcomes aligned to it? Does this mean the outcomes that aren't aligned are less important than those that are? Alternatively, should a hurdle assessment align with all of the outcomes? And if the assessment is early in the course, does this again highlight the importance of defining the skills that sit inside an outcome?
When outcomes are aligned to assessments but there isn’t sufficient clarity around which aspects of the outcomes are being assessed, it is difficult to suggest that the assessments are proving the outcomes are being met.
5. Does each assessment clearly define how much of each outcome is being assessed?
This is related to the point above, but it is worth considering from the course design aspect of constructive alignment, specifically the teaching and learning sequence and the development of learning activities. In this resource, we will argue that the timing and placement of an assessment should depend solely on what has been taught up to the point the assessment is given. If how much of an outcome is the focus of the assessment design is clearly articulated, then the activities that support students in preparing for the assessment can become more strategic.
Now, you may say that it is implied or tacitly understood that not all of the outcomes are being assessed early in the course. But unless it is indicated how much, and which parts, of each outcome actually are, the design of the course up to that point is made harder. You might then say that the learner will obviously get information in the assessment description about the focus of the assessment and the skills required, and that this will guide the design; but the lack of precision in simply aligning outcomes to assessments reduces the validity of the alignment.
When outcomes are aligned to assessments but there isn’t sufficient clarity around how much and which parts of each outcome are being assessed, it is difficult to suggest that the assessments are proving the outcomes are being met.
6. Are assessments so well designed that they explicitly aggregate the percentages of each learning outcome across the course? And thus result in each outcome being entirely assessed?
This again relates to the points above, but is more precisely concerned with ensuring that the assessment design can explicitly signal how the assessments aggregate learning via outcome alignment, and that there is a strategy in place to collect the specific attainment of the components of the outcomes. As stated above, if we are going to use aggregated grades as a proxy for outcome attainment, we need to know which components of each outcome have influenced the grades, and whether it is acceptable for some components not to have been met; a likely scenario if a student only just passes a course.
When outcomes are aligned to assessments but there isn’t sufficient clarity around the collection of data related to the assessed components of each outcome, it is difficult to suggest that the assessments are proving the outcomes are being met.
There’s lots to consider!
In the next post we will take a step back and explain why the concept of constructive alignment is worth hanging on to, and why mitigating the 6 points above strengthens its persuasiveness.
I’m Paul Moss. I’m a pedagogy fanatic and manager of educational design at the University of Adelaide. Dr Sasi Rathnappulige is a Curriculum Design specialist at the University of Adelaide.