the manifestation of cognitive overload

ATC — Frazzled Cartoon Lady

The reactions people have to cognitive overload are varied. Some get angry, some become withdrawn, some land somewhere in the middle. What is common to everyone who experiences it, though, is feeling overwhelmed, uncomfortable, frustrated and sometimes worthless. Imposter syndrome is common.

Understandably, it’s an experience we want to avoid. It can be exhausting.

How students handle it is largely determined by their temperament, which is affected by a multitude of factors. The more obvious reactions are the extremes: poor behaviour, lashing out, belligerence and complete withdrawal. Of course, poor behaviour and withdrawal are more complex than just cognitive overload, but at times it is certainly a factor, and I find it strange that little to no conversation ever discusses improving behaviour in the same way that we endorse the elimination of academic cognitive overload – through incrementally improving cognitive skills. Inculcating new behaviours surely needs the same level of design and commitment?

Perhaps less obvious is cognitive overload in students who externally give few clues that they are experiencing it. Perhaps they are not reacting because of compliance with the school’s rules or respect for authority, or because they don’t want to be seen as not understanding what is being taught; peer pressure is huge in all education sectors. Perhaps they are having a difficult time outside of the classroom – most certainly a factor affecting higher education students who may have lost their employment during COVID. Needless to say, cognitive overload reduces learning.


A practical and relatively simple way to mitigate excessive cognitive load is to design learning sequences that focus on building schema and include plenty of formative assessment to check learning. Good communication with students also allows you to gauge how they are feeling about their learning, and this can be an extremely useful form of formative assessment too.

It’s not just students who feel it

It’s certainly not just students who experience it. Any time you are under pressure in a new situation you are likely to experience it to some degree, as your mind grapples with the new content and searches for relevant schema to connect it to: the more the pressure and the fewer the connections, the greater the load. You are likely to experience it when you attend a conference where presentations don’t adhere to multimedia principles, when you are in a meeting without the relevant background knowledge on the topic being discussed, and when you yourself are presenting or teaching something you don’t fully understand or believe in. Of course, the most obvious analogy is when your practice is being observed. All of these are times when you are effectively a student, a novice learner. As an educator, it is important to reflect on the feeling of cognitive overload and how easily it can occur, and to use that knowledge when you design and shape the learning experiences of your students, so they experience it less and learn more.

I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow me on Twitter @edmerger

“ATC — Frazzled Cartoon Lady” by campbelj45ca is licensed under CC BY-SA 2.0

chunking lectures – it’s a bit of a no-brainer

Breaking a lecture up into distinct chunks or sections is a bit of a no-brainer. It comes down to understanding the implications of cognitive load theory, specifically that the brain can only process a small amount of new information at once. Presenting more information than the brain’s architecture can handle overloads the working memory, and usually leads to a significant decrease in learning.

Breaking your lecture into chunks gives students a chance to process each chunk before new material is presented. Designing opportunities for students to be active in processing the content also aids understanding, and eventual transfer into long-term memory.

So, here’s a possible live-streamed lecture design that considers cognitive load implications and the need for the student to be active in their learning, and is very manageable for the lecturer. The model can be applied to both live and recorded lectures, but recorded lectures need some more specific context, which I will discuss in another post.

I’ve talked before about the possible mixed-mode future of live lecturing, including its ability to facilitate breakout rooms. The model below considers this as a possibility.

Lesson segment: intro
Rationale: The lesson begins with a retrieval quiz. The benefit of retrieval is enormous: it strengthens the memory of key ideas and content so that knowledge can be brought to cognition automatically when new learning is presented, without taxing the working memory. The more knowledge the student can draw from, the greater the opportunity to delve into higher order independent learning, so building students’ schema through retrieval is a bit of a no-brainer. The lecturer then places the answers on the screen, and spends 2-3 minutes explaining them if common errors were made.
Tech to assist: polls; Echo 360; Mentimeter; Quizizz; Canvas quiz.

Lesson segment: teaching (10-12 min)
Rationale: Delivering content. Incremental building to application is a bit of a no-brainer. The lecturer is conscious of the need to present content clearly and simply, very much aware of multimedia principles that promote the efficient encoding of new information. They are also aware of the importance of modelling problem solving, and incorporate worked examples into the presentation. Where appropriate, the lecturer connects the new learning to real world applications, not just to make the content relevant, but more so to build the mental patterns and analogies in the students’ schemata. The lecturer also frequently mentions the reasons why decisions in the teaching are being made, so as to strengthen the students’ metacognition.
Tech to assist: PPT slides; document camera. Students can take notes in Echo, raise a confusion flag, and ask a question at a precise point in either the live stream or the recording.

Lesson segment: student activity (strengthening understanding)
Rationale: This gives students a chance to take in what has just been presented, and to think about the concepts before they are presented with more content. Essentially the student is trying to convert the abstract to the concrete. Providing students with the opportunity to complete worked examples, practise solving similarly structured problems, or discuss possible analogies to the content with a peer is valuable at this point in the lecture, and is a bit of a no-brainer.
Tech to assist: breakout rooms; Mentimeter open question; Echo discussions; Canvas discussions; GoFormative.

Lesson segment: teaching (10-12 min)
Rationale: Discussion of the last task if necessary – it may not be if students were practising or completing examples. Then delivering content, with the same rationale as the first teaching segment.
Tech to assist: as for the first teaching segment.

Lesson segment: formative assessment (checking for learning)
Rationale: A quiz or short-answer opportunity to see if what you have presented so far has been understood is a bit of a no-brainer. The questions also provide another opportunity for a student to process the content and develop a better understanding.
Tech to assist: questions up on screen; Zoom polling; Canvas discussions as a student answer repository; Mentimeter; Quizizz.

Lesson segment: teaching (10-12 min)
Rationale: Check the answers – you may need to pivot the lecture if misconceptions are still prevalent. Then delivering content, with the same rationale as the first teaching segment.
Tech to assist: as for the first teaching segment.

Lesson segment: student activity (strengthening understanding)
Rationale: As in the earlier student activity segment: a chance to take in what has just been presented and think about the concepts, converting the abstract to the concrete through worked examples, similarly structured problems, or peer discussion of analogies.
Tech to assist: breakout rooms; Mentimeter open question; Echo discussions; Canvas discussions; GoFormative.

Lesson segment: summary
Rationale: Recapping key ideas and tying the lecture together: linking it to previous learning and real world contexts. Discussion and questions asking students to link their learning are a great way to draw attention to the key concepts again, and a bit of a no-brainer.
Tech to assist: Mentimeter open-ended question.
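If it helps to sanity-check timings when adapting the model, the segment structure above can be sketched as plain data – here in Python. The durations are purely illustrative assumptions (the post prescribes only 10-12 minute teaching chunks, not these exact numbers), as is the 50-minute slot length:

```python
# Illustrative sketch: the chunked lecture plan as (segment, minutes) pairs.
# Durations are assumptions, chosen to fit an assumed 50-minute slot.
segments = [
    ("intro: retrieval quiz", 4),
    ("teaching 1", 10),
    ("student activity 1", 4),
    ("teaching 2", 10),
    ("formative assessment", 4),
    ("teaching 3", 10),
    ("student activity 2", 4),
    ("summary", 4),
]

SLOT = 50  # assumed lecture slot in minutes

# Total planned time, and a check that no teaching chunk exceeds 12 minutes.
total = sum(minutes for _, minutes in segments)
longest_teaching = max(m for name, m in segments if name.startswith("teaching"))

print(f"planned: {total} min; fits {SLOT}-minute slot: {total <= SLOT}")
print(f"longest teaching chunk: {longest_teaching} min")
```

Laying the plan out like this makes it obvious where time is tight: trimming a teaching chunk by a minute or two is usually easier than cutting a student activity, which is doing the cognitive-load work.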


Is mixed-mode lecturing the future of HE lecturing?

Picture the setting: instead of the regular face to face lecture of 120 students, there are 40 in front of you and the other 80 are remote. How can a lecturer operate under such conditions, satisfying both contexts at the same time? 

Well, each student is connected to Zoom – face to face students through a laptop or a phone, and remote students likewise. The face to face students have a choice: they can watch and hear the lecturer as normal, or watch and listen through the screen, as the remote students must. If slides are presented, then the face to face students likely have an advantage, as they can see the lecturer full size and the content on a larger screen, whereas the remote student sees only a thumbnail of the lecturer in the corner of the presentation.

So, what are some of the advantages of having face to face students being connected via Zoom too – why not just watch and listen as normal? 

100% participation in formative assessment – if everyone has a device, you can assess their understanding at stages of the lecture using polls and quizzes. Beginning each lecture with a retrieval quiz is highly beneficial, as it brings back into the minds of your students key ideas from past lectures that you know they need to know. Helping them retrieve such content actually helps you too: new concepts will be better understood if students can automatically bring past ‘connected’ ideas into their thinking without taxing the working memory. Halfway through a lecture is another good time to formatively check for understanding.

Asking lots of questions in a lecture is still good practice generally, but getting everyone involved is near impossible in a regular lecture context – now technology affords this. Getting more data helps you know if what you’re teaching is being understood.

Interactions with peers – when appropriate, students can seek clarification from a peer without disturbing the rest of the lecture room. Of course, this should only be encouraged when there is space in the lecture, so students aren’t missing key ideas while talking to a peer. You can manage the chat functions to be open to all, or so that students can only message you during content delivery.

Interactions with the lecturer – potentially, shy students in the lecture theatre can now ask the lecturer a question, anonymously if they like, via the chat in Zoom. For some, the pressure of not wanting to appear silly by asking a question is huge; often such students won’t ask, and then move on to the next section of the lesson without clarity on what was just taught. Now everyone can be heard.

Group work in a lecture – breakout rooms facilitate the option of having students work together to solve problems. At stages in the lecture when chunking is necessary to secure students’ attention, an option may be for students to spend some time to practise what has just been delivered, consider relevant analogies to help strengthen understanding, or collaborate on creative solutions to new problems. Addressing misconceptions or consolidation through practice is probably best done in pairs, whereas groups of 3-5 may be more suited to discussing ideas and analogies rather than practice.  

Black screens can be good – the wonderful Dr David Wilson from Adelaide University provided some valuable insight in this area. There may be several legitimate reasons why a student decides to turn their video off. Of course, the best communicators make their expectations explicit and clear from the beginning, and help students with legitimate screen issues arrive at alternative ways to engage in the lecture, but sometimes a student will turn their screen off because it’s easier to engage passively. We all know that active learning is better than passive learning, but in a large lecture theatre it can be hard to determine who is and who isn’t active, and time-consuming to address an individual who pretends not to hear you. Now, the black screen at least gives you a chance of instantly seeing who the passive student is and of addressing their decision. If you’ve made it clear that you prefer the screen on, and that those who can’t comply should tell you why privately, then if a student still refuses to engage when addressed, it’s easy to write down the Zoom name or student number and follow up later with a friendly check-in to see if there is anything you can do to help. If the student has used a fake name, that’s a fair bit harder, but you’d hope that having established high expectations, continually developed the metacognitive abilities of your students, and done so with a really friendly demeanour, such a student would be in the minority.

Logistical considerations that may be seen as disadvantages – it may seem daunting to get all the technology working to facilitate such a learning environment, but it is easier than you might think.

Issue: Audio feedback from multiple Zoom connections in the lecture theatre.
Solution: Students would need to be on mute unless asked a question.

Issue: Teacher’s Zoom camera – how can it be placed to emulate a real-life view?
Solution: Place it so it captures the teacher’s whole body and gesturing as they move around (movement like in a normal lecture). This means the camera will be at a distance, not showing only the person’s head. It may require some configuring with the existing setup so that your camera connects to the console displaying your slides or document camera, but quite often the lecturer will be distant from the console and using a clicker to move through slides.

Issue: Teacher’s microphone – how would the distanced camera pick up the lecturer’s voice?
Solution: Lots of lecture rooms have a microphone that is pinned to the lecturer and operates via Bluetooth. A room microphone would pose feedback problems, but if that is the only option, then face to face Zoom participants must always have their mic muted, and questions and answers asked in house would need to be repeated by the lecturer for the sake of the remote students – or questions are asked via the Zoom chat. This is actually not a bad outcome anyway, as repeating the question ensures a) everyone heard it, and b) a longer processing time to engage with it.

Issue: Producing worked examples and using a whiteboard to demonstrate problem solving.
Solution: Use a tablet as the screen share in Zoom, where you can draw/write and show your workings. Alternatively, you can use your phone as the screen share and position/suspend it above your working area.

Issue: Monitoring the chat effectively.
Solution: Dedicate a section of the lecture where you stop to check for questions. This is surely just good practice anyway.

Previously, the promotion of such a learning environment may have been frowned upon as a threat to lectures going ahead at all – why have a live lecture when it can be watched online at one’s own convenience? Well, it would seem that the average lecture audience has always contained a mix of those who like and benefit from the in-person ‘live’ experience and those who prefer the remote alternative. Mixed-mode lectures offer the best of both worlds.


DO YOU EXPLICITLY DISCUSS MOTIVATION WITH STUDENTS?

This is part 2 of an essay based on self-regulated learning, and whether it needs to be taught for students to become skilled in it. Part 1 is here.

In part 1 I discussed how explicitly teaching and modelling to students how to think with knowledge potentially facilitates students being able to self-regulate such thinking. The proposition has implications for the explicit modelling of thinking critically and creatively. In this post I will expound on Zimmerman and Moylan’s 2009 paper, which theorises that motivation is inextricably linked to these metacognitive processes and, just like everything else connected to learning, needs to be explicitly taught to students in equal measure for them to eventually be able to use it independently.

Zimmerman and Moylan suggest that there are three differentiated stages in achieving self-regulation. These can be equated with the EEF’s terms: planning, monitoring and evaluation. The diagram below represents the cyclical processes of self-regulation.

FORETHOUGHT = PLANNING 

It’s a case of which comes first, the chicken or the egg, but in order for a student to get their learning off the ground, they need to be motivated to do so. Often in the school sector this is not intrinsic motivation, with extrinsic rewards and punishments tending to dominate the setting. Upon presentation of a new learning activity, a student will process a range of thoughts evaluating whether they should in fact participate in the endeavour. Students immediately weigh the expectations against any prior experiences or knowledge, drawing on their schemata to ascertain the extent to which they will have to set new goals and strategies to achieve the new learning, whilst probably concomitantly deciding if they have any intrinsic interest in the task. If they arrive at the conclusion that they don’t possess either of these motivators, your work is immediately cut out for you.

Compounding this is the fact that students also naturally draw from their schemata the affective responses they have built over time in dealing with similar types of activities or learning experiences. If this audit brings up negative memories, perhaps emanating from a lack of success or from serious disinterest, it will heavily impact their motivation to continue. It certainly won’t be the case that ‘if you build it they will come’. A student’s self-efficacy, or belief that they will be able to positively engage in the task, will most certainly affect their planning, strategy and goal-setting capacity. So, besides forcing students to participate, what can be done to break this thought pattern?

METACOGNITION – Make explicit the possible reactions students may have to a new task: ‘You may have had a negative experience with this type of problem before, but this time is different because…’, ‘You may immediately think there’s no relevance to this task, but…’, ‘You may not have achieved the grade you wanted in the last task, but this time we are going to plan the response better…’. By making such reactions explicit, explaining how demotivating factors can arise, and providing explicit strategies that ‘show’ how a different outcome may eventuate, the teacher is training the student to think about the new context in a new way, and mitigating the risk of poor self-efficacy inhibiting impetus.

Also crucial to setting up learning is making explicit the goal orientation of the task. Plenty of research suggests that ‘performance’ orientated goal setting, where students’ motivations to learn are primarily centred on comparison and competing against others, is tellingly inferior to a ‘learning’ goal orientation. The positioning of a task as an opportunity to strengthen personal understanding against personal standards has been shown to facilitate a deepening of learning: ‘In this activity, let’s think about how we can incrementally improve our knowledge of the topic…’, ‘I want you to think about what your level of knowledge is on the topic and set yourself a goal of strengthening it by the time we have finished…’, ‘In this task, we are going to concentrate on mastery…’ However, such ambition is made infinitely more difficult in a system predicated on accountability. Nonetheless, a good teacher will explicitly and inexorably focus their students’ attention on setting goals for self-improvement, and on the fact that learning is indeed a continuum that takes time and practice to master. When such purpose is part of the learning culture, once the task is successfully completed the student’s evaluation process positively feeds into and strengthens the self-efficacy required to engage in a new learning context, regardless of how they fared compared to others in the cohort.

This personal-growth rather than competitive epistemology is particularly relevant if you are trying to encourage students who are working hard but not quite succeeding – and observing others around them achieving – at the beginning of a course. These students need not only the explicit discussion of what success means (improvement against your last effort), but precise feedback that articulates what the gaps in knowledge are, and crucially, scaffolded activities that provide the opportunity for observable improvement against the last effort. Mastery pathways not only provide opportunity for incremental success, but also the chance to eventually catch up to the expected standard. Because success is the greatest motivator of all, when those achievements are explicitly labelled for the student, they will adjust their self-efficacy to become more positive.

PERFORMANCE = MONITORING 

During the task, drawing students’ attention to how they are solving problems, the progress they are making and the motivation required to do so will facilitate the eventual automaticity of such thinking. Modelling self-questioning and the verbalisation of thinking processes whilst scaffolding learning through worked and completion examples builds the schema of such processes in students’ minds, and teaching students how to manage time and set up an appropriate learning space should never be treated as assumed knowledge. Providing as many opportunities as necessary to build a culture where the student can control these learning strategies and readily select the most appropriate tools for the context they find themselves in should be an ingrained aspect of a teacher’s curriculum design. When students feel such control over the strategies they employ to negotiate the present task, their motivation and self-efficacy will be strong.

The explicit drawing of attention to higher order thinking processes during the task goes towards developing the schema for doing so in future, independent contexts. As argued in part 1, assuming students will engage in higher order thinking once knowledge is sufficiently acquired is not a good idea, as students may not do this unless they are highly motivated in the discipline or topic in question. Prompt with questions like ‘So if we know this about…, what would happen if…?’, ‘What is the connection of this idea to the topic we looked at last week?’, ‘What would happen if we combined these two ideas?’, ‘So imagine this scenario…: how would you solve the problem at hand?’ If you model this thinking, students will use the model as a strategy when asked to think about knowledge in new contexts, and being able to do so will boost their confidence in engaging with knowledge in interesting ways. This confidence develops self-efficacy, and thus motivation.

SELF-REFLECTION = EVALUATING 

From my experience, one of the most difficult things to do is to get students to reflect on their performance and planning after the event. This is especially difficult if the student entered the transaction with a performance goal orientation and wasn’t overly successful. The immediate deflation is palpable. Explicitly discussing this with the students is important at this very moment. But perhaps most importantly, understanding the causal attributions some students may have applied to their success or failure is necessary to ensure that they are able to benefit from the evaluation.

Many students attribute their experience to fixed ability, which is particularly detrimental if they engaged in the activity with a performance goal and didn’t succeed. The comparison against others, which essentially results in a defeat if unsuccessful, solidifies a negative self-efficacy, which in turn has a negative influence on the planning stage of the next learning moment. If, however, the student can be persuaded that learning is a continuum and that their ability in the task is not fixed but can in fact be improved by application of effort, practice, and good revision and study techniques, then the probability of their motivation being secure for the next task is high.

Unfortunately, over time and repeated negative experiences in learning environments, some students develop entrenched negative evaluations that seriously inhibit motivation to continue or to engage in future learning contexts. Procrastination may be a milder symptom of such a state, but more serious and damaging is learned helplessness, a defence mechanism that prevents a student from trying because they believe there is nothing they can do to change an inevitable failure. Often such a state becomes an unconscious default, and can only be changed by carefully designed, scaffolded learning opportunities that promote success, as well as by making the psychological context explicit. It is time-consuming, of course, but a well-constructed audit of a student’s performance, including how they approached and revised for the task, will likely find a host of issues that could be rectified. A checklist may help students evaluate their performance in a task, and explicit discussion of how much impact neglecting each element on the list can have could act as a motivator for a student to alter their preconceived belief that they aren’t in control of changing their learning potential.

TAKE AWAY

Teaching students about motivation and how past experiences affect the present, and helping students identify patterns of behaviour, their ‘real’ causes and how they can be adjusted is as imperative as teaching them content. Making thinking explicit can go a long way to positively affect how a student perceives a task and their ability to process, engage with, and succeed in it. The result is that students will willingly drink from the water you have led them to.    

The next post will discuss how beneficial it can be for students to understand how learning actually happens.


DO WE NEED TO TEACH SELF-REGULATION?

Achieving independence and self-regulation in learning is the holy grail of education, but how to go about it is equally mystical. Essential to the quest is developing a rich schema through the building and interaction of knowledge. Whilst belief in explicitly teaching students how to think about their thinking processes (metacognition), and how to evaluate them, as an integral part of self-regulation is gaining momentum (EEF), this two-part post will seek to extend the current understanding by discussing whether it is necessary to promote critical and creative thinking inside subject domains. The essay also expounds on Zimmerman and Moylan’s 2009 paper, which theorises that motivation is inextricably linked to both of these metacognitive processes, can’t be omitted from the discussion, and in fact needs to be explicitly taught to students in equal measure. As Kuhn exhorts, ’People must see the point of thinking if they are to engage in it.’

WE ALL WANT 21ST CENTURY SKILLS 

Whilst many argue that labelling skills such as critical thinking and creativity as ‘21st century’ does an injustice to those who for thousands of years exhibited proficiency in them, few could argue against the growing demand for graduates who are strong in these areas in an age of increasingly automated and mechanised jobs. How to equip students with such skills has therefore become the mission of educators, but many well-intentioned educators have erroneously conflated the desired outcome with a direct pedagogy, succinctly stated by Kirschner: ‘the epistemology of a discipline should not be confused with a pedagogy for teaching or learning it. The practice of a profession is not the same as learning to practise the profession.’ There are plenty of excellent voices who assent to this notion, none better than Daisy Christodoulou, specifically pointing to the fact that thinking critically or creatively relies entirely on a strong bedrock of knowledge and can’t be taught in the abstract. If we think about this it seems rather logical: you can’t think about things you have no knowledge of, and most creativity is the accommodation of knowledge already in existence. Such constraints make the application of these skills heavily context and domain dependent. But what tends to be lacking from such unequivocal pedagogy is the answer to this question: once the foundations of knowledge are secure, do students need explicit modelling of how to think critically and creatively with that knowledge? I contend that the answer is yes.

If we consider how learning is characterised by the acquisition of schema, and how crucial modelling is in that continuum, I would argue that modelling how to play with knowledge is no less important than modelling the knowledge itself. However, it is something that is often overlooked in modern curricula for three reasons:  

  • Because we sometimes assume that students will naturally think in these ways  
  • Because of the need to fit in so much content in so little time  
  • Because it is hard to assess, relying on subjective and therefore unstable evaluation 

The first reason relies on Geary’s theory of primary vs secondary knowledge. The exposition of the theory is that once sufficient knowledge is obtained, the mixing and matching and the challenging and critiquing of what is understood should become axiomatic. From my experience, though, without continuous prompting by the teacher to engage with the knowledge in this way, such an outcome tends to rely heavily on a student being highly motivated in a specific domain of knowledge, with the less interested but equally capable student content with achieving in assessment but not necessarily interested in exploring the content further. What is notable about the self-motivated student, however, is that they still undertake a process of learning how to mix and match and challenge what they know, albeit independently: it is through the experimentation of their thinking and its evaluation that they may eventually arrive at something unique and interesting. But this ostensibly natural skill is actually being practised and refined to be maximised – and quite possibly inefficiently, compared with what some guidance in the process could afford. When motivation to pursue a discipline is not as high, students need to be prompted to engage in ‘higher order’ thinking. Interestingly, sometimes it is only after these higher order prompts that real interest and motivation is sparked, and so their explicit provocation in a learning environment is important.

Sweller’s addition to Geary’s thesis, that ‘Organizing general skills to assist in the acquisition of subject matter knowledge may be more productive than attempting to teach skills that we have evolved to acquire automatically…’, supports the earlier statement that teaching critical and creative thinking in the abstract is pointless. But it is the word ‘organizing’ that is crucial here: the conclusion is that it’s not enough to assume students will naturally engage in this type of thinking – only its explicit organisation and modelling will enable students to self-regulate it.

Practising the application of critical and creative thinking needs time and space to be strengthened, and this is why the 2nd obstacle is so concerning in educational contexts. The shift to non-invigilated exams has certainly made apparent the need for assessment to involve the application of knowledge, but doing so requires a carefully designed curriculum that creates such opportunities in the sequence of learning. I tend to promote a sequence patterned by the rhythm: learn, practise, apply. New knowledge is introduced by the expert; students interact with and practise using the knowledge to confirm understanding; students then apply their knowledge to do something with it. The application doesn’t have to be a large project-type task. It may simply be asking higher order questions that involve hypothesising, creating analogies, exploring various points of view, wondering whether the content can be applied in other contexts, identifying connections to other aspects of the course, or brainstorming to generate new ideas for a real-world context. The latter is especially relevant to the later stages of higher education.

It is such a pattern of learning that models for students how to interact with the knowledge they now possess, a modelling process that observes what Volet (1991) identifies as the necessity of making explicit how an expert thinks. This applies not just when the expert is presented with new problems, but also to how they think with the knowledge they already have. Palincsar & Brown (1989) concur: ‘By demonstrating the different activities by which subject matter may be processed, problems solved, and learning processes regulated, the teacher makes knowledge construction and utilization activities overt and explicit that usually stay covert and implicit.’ Like all learning, the goal is to take the metacognition to automaticity so that the propensity for self-regulation in the next sequence of learning isn’t compromised by cognitive overload.

WHAT ABOUT TRANSFER?

Whether this explicit process of thinking within specific domains can be transferred to new contexts remains to be seen, but Anderson, Reder and Simon (2000) arouse our curiosity when they suggest that transfer happens far more frequently than we might think. They cite reading as a prime example, but more specifically challenge a famous study by Gick and Holyoak, who demonstrated that students were unable to see the abstract similarities between two problems even when they were presented side by side:

‘One of the striking characteristics of such failures of transfer is how relatively transient they are. Gick and Holyoak were able to increase transfer greatly just by suggesting to subjects that they try to use the problem about the “general”. Exposing subjects to two such analogues also greatly increased transfer. The amount of transfer appeared to depend in large part on where the attention of subjects was directed during the experiment, which suggests that instruction and training on the cues that signal the relevance of an available skill might well deserve more emphasis than they now typically receive – a promising topic for cognitive research with very important educational implications.’

They then continue: ‘Representation and degree of practice are critical for determining the transfer from one task to another, and transfer varies from one domain to another as a function of the number of symbolic components that are shared.’ It follows that Dignath and Buttner’s claim, in their meta-analysis on components of fostering self-regulated learning, that ‘Providing students with opportunities to practice strategy use will foster the transfer of metastrategic knowledge to real learning contexts’, is valid only if students can recognise patterns or connections between the contexts where they can apply their metacognition.

As stated earlier, you can’t think critically and creatively without a strong foundation of knowledge, and some of that thinking will only be relevant in specific domains. But it does seem likely that some of the higher order strategies stated above (hypothesising etc.) could be applied in a range of disciplines, and that a student observing the modelled thinking processes of a teacher in a second context will recognise some (if not many) elements learnt from the first. Once reinforced through this observation, students can begin the regular learning continuum of taking the skills to automaticity through practice. That would make applying the thinking in new contexts more feasible – though it will be up to further research to ascertain whether, once these conditions are met, such transfer actually occurs.

WHAT DO WE WANT FROM EDUCATION? 

Another consideration when teaching critical thinking draws from Kuhn, who argues that the development of epistemological understanding may be the most fundamental underpinning of critical thinking. In no uncertain terms, she urges teachers to provide the opportunity for students to reach an evaluative level of epistemological understanding, recognising that an absolutist epistemology constrains and in fact eliminates any need for critical thinking, as does a ‘multiplist’ stance, which allows students a degree of apathy characterised by statements such as “I feel it’s not worth it to argue because everyone has their opinion.” The explicit modelling of an evaluative epistemology, where students are encouraged to accept that people have a right to their views while understanding that some views can nonetheless be more right than others, sets up a learning culture where students see the ‘weighing of alternative claims in a process of reasoned debate as the path to informed opinion, and they understand that arguments can be evaluated and compared based on their merit’ (Kuhn, 1991). Such a pedagogy may answer an interesting question posed by Martin Robinson: ‘Should the result of a good education include all students thinking the same or thinking differently?’

The 3rd obstacle also looms large. Assessing creativity especially is difficult because of its subjectivity. Rubrics are notoriously imprecise as a reliable reference for determining the success or failure of creative work: what I think satisfies one element of a rubric may be argued against by a colleague, and maintaining consistency even with myself in marking is difficult. And if we don’t assess, will students not particularly interested in the topic lose motivation, making the process a challenging one to manage? I think the answer lies in the answer to Martin Robinson’s question: surely we don’t want everyone robotically programmed. We want students to engage critically and creatively with concepts and participate in building a dynamic and interesting world, so we have to have faith that the knowledge taught to our students, when learnt well, will provide avenues for curiosity that engage them to participate. Such an epistemology also satisfies stakeholders’ desire to employ graduates who can think critically and creatively in a modern workplace.

So how is motivation linked to it all?

In the next post, I will expand on Zimmerman’s argument that metacognition is inextricably linked to motivation, and how educators can ensure they incorporate both in learning design.

References 

Anderson, J. R., Reder, L. M., & Simon, H. A. (2000, Summer). Applications and misapplications of cognitive psychology to mathematics education. Texas Educational Review.

Dignath, C., & Buttner, G. (2008). Components of fostering self-regulated learning among students: A meta-analysis on intervention studies at primary and secondary school level. Metacognition and Learning, December 2008.

Geary, D. (2001). Principles of evolutionary educational psychology. Department of Psychological Sciences, University of Missouri at Columbia.

Palincsar, A. S., & Brown, A. L. (1989). Classroom dialogues to promote self-regulated comprehension. In J. Brophy (Ed.), Advances in research on teaching, Vol. 1 (pp. 35–67). Greenwich, CT: JAI Press.

Sweller, J. (2008). Instructional implications of David C. Geary’s evolutionary educational psychology. Educational Psychologist, 43(4), 214–216. DOI: 10.1080/00461520802392208

Volet, S. E. (1991). Modelling and coaching of relevant metacognitive strategies for enhancing university students’ learning. Learning and Instruction, 1, 319–336. 

Zimmerman, B. J., & Moylan, A. R. (2009). Self-regulation. In Handbook of Metacognition in Education. Routledge.

I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow me on Twitter @edmerger

TRAINING STUDENTS FOR ONLINE EXAMS REDUCES COGNITIVE OVERLOAD

Teaching to the test doesn’t work. But teaching students about the test is imperative. Not only that, exam performance IS a thing, and you can assist students to get better at that performance. It’s all about mitigating cognitive load.

GAME TIME – Any sportsperson will tell you that match fitness is everything. Regardless of how much you prepare, you never achieve the same level of fitness and game knowledge as you do by actually playing. Why? Because when the real thing happens, nerves and adrenaline consume vast amounts of energy, interfering with your ability to perform, and lots of other unexpected occurrences happen, all increasing cognitive load and causing exhaustion more quickly. The cognitive load can be so debilitating that the player has to rely on muscle memory to get through. When a student sits an exam, adrenaline and anxiety will naturally surge through their veins. Helping them revise the content is a must, but helping them become more familiar with the game/exam context is just as critical, and this can be achieved by training students to automaticity with exam technique.

ABOUT THE TEST

1. Exam layouts

 Show students, and get them used to, the layout of the online exam. The more they see the module and layout of the exam and understand what the expectations are of each section, the less pressure they’ll feel when they see the real thing.  

Of particular importance for students completing exams online is detailing the processes to follow if they experience technical issues. Take them through the procedures so that if something goes wrong during the exam they don’t lose confidence and panic. ALSO: ensure students have read the academic integrity policy and discuss it repeatedly – the more you talk about academic integrity the more of it you’ll get.

Manageable student cognitive load

|                       | Student A – no training | Student B – training |
|-----------------------|-------------------------|----------------------|
| Before beginning exam | 20%                     | 20%                  |
| Exam layout           | 5%                      | 0%                   |

2. Question requirements

Ensure students know what each question is demanding of them.

How long is a piece of string?

What does a short answer look like? What gets you full marks? What does a long answer look like? What gets you full marks? How much working out is necessary? How much detail is required?

Don’t expect students to guess the answers to these questions. Students who have to worry about what constitutes a good answer expend lots of valuable cognitive load. Model the expectations by showing previous examples, past exams, etc.

Manageable student cognitive load

|                       | Student A – no training | Student B – training |
|-----------------------|-------------------------|----------------------|
| Before beginning exam | 20%                     | 20%                  |
| Exam layout           | 5%                      | 0%                   |
| Exam content          | 30%                     | 0%                   |

IN THE EXAM

1. Time training

Training students with the timings of questions in exams will significantly reduce cognitive load. It’s one thing to know what a question demands of you, but another to actually do it in a stressed environment. If a student isn’t used to the pressure of time, the longer the exam goes on, the greater the likelihood of their cognitive load increasing and their performance dropping as they panic at the evaporation of time. So get them to do a mock of a section of the exam – let them experience what it’s like to type in the allocated time (do their fingers get tired?), or what it’s like to upload files if necessary. The more practice they get the better, but if you are running out of lesson time to train students, at least give them the chance to practise once – just one section that requires an upload process, for example.


The other aspect of time training is in helping students to set personal timers. Obviously, the online exam doesn’t have all the usual cues that an invigilated exam offers: a large clock, a warning by the invigilator of 5 minutes to go, and even the cues of students completing and organising their work on the next desk. But an advantage of online exams is that students can set their own alarms to negotiate each individual section of the exam, and not accidentally spend too much time on a certain section:

Manageable student cognitive load

|                       | Student A – no training | Student B – training |
|-----------------------|-------------------------|----------------------|
| Before beginning exam | 20%                     | 20%                  |
| Exam layout           | 5%                      | 0%                   |
| Exam content          | 30%                     | 0%                   |
| Exam timing training  | 20%                     | 0%                   |
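To act on the personal-timer advantage described above, students can work out their alarm times for each section in advance. A minimal sketch of that arithmetic (the section names, durations and start time here are hypothetical examples, not from any real exam):

```python
from datetime import datetime, timedelta

# Hypothetical exam sections and their allotted minutes.
sections = [("Multiple choice", 30), ("Short answers", 45), ("Essay", 45)]

def alarm_times(start, sections):
    """Return (section, alarm time) pairs: one alarm at the end of each section."""
    alarms, t = [], start
    for name, minutes in sections:
        t = t + timedelta(minutes=minutes)
        alarms.append((name, t))
    return alarms

start = datetime(2021, 6, 15, 9, 0)  # hypothetical 9:00 am start
for name, t in alarm_times(start, sections):
    print(f"{name}: set alarm for {t:%H:%M}")  # 09:30, 10:15, 11:00
```

Setting these three alarms before the exam begins means the student never has to spend working memory tracking the clock mid-section.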

2. Editing their work

Rereading responses is difficult for exhausted students at the end of a lengthy exam. It is usually at this point that they feel a sense of relief, and the last thing they want to do is reread what they’ve done. Of course, it’s madness not to: checking ensures there are no silly mistakes, particularly in multiple choice questions, and no content errors. Even checking for structural, punctuation and/or spelling issues could benefit the overall grade.

So I have to build that practice into their normal way of working, so it becomes part of the process rather than an add-on. This can really only be achieved by repeatedly, physically getting students to do it: at the end of each ‘mock’ assessment, stop the test and get students to spend 4–5 minutes dedicated to proofreading… and explain the rationale, repeatedly. I always tell my students they WILL lose more marks through errors (which they can fix) than they can gain by writing more in the last 5 minutes. But without it being a normal way of working, exhausted students won’t do it automatically.

Manageable student cognitive load

|                       | Student A – no training | Student B – training |
|-----------------------|-------------------------|----------------------|
| Before beginning exam | 20%                     | 20%                  |
| Exam layout           | 5%                      | 0%                   |
| Exam content          | 30%                     | 0%                   |
| Exam timing training  | 20%                     | 0%                   |
| Editing responses     | 5%                      | 0%                   |

3. Being professional

Not panicking in certain situations is crucial in reducing cognitive load. Taking students through possible scenarios will help to calm them if the situation presents in the exam, scenarios such as:  If you’re running out of time what should you focus on to get you the most marks? What to do if you can’t answer a question – do you panic and lose total focus for the rest? Should you move on and come back to questions? Are you aware that the brain will warm up and so coming back later may be easier than it is now? This last point is absolutely crucial to convey to students. As the exam progresses, lots of the exam content itself may trigger or cue retrieval of content that couldn’t previously be answered, so teaching students this metacognitive notion could make a significant difference to their overall performance.

Manageable student cognitive load

|                       | Student A – no training | Student B – training |
|-----------------------|-------------------------|----------------------|
| Before beginning exam | 20%                     | 20%                  |
| Exam layout           | 5%                      | 0%                   |
| Exam content          | 30%                     | 0%                   |
| Exam timing training  | 20%                     | 0%                   |
| Editing responses     | 5%                      | 0%                   |
| Being professional    | 10%                     | 5%                   |

As you can see from these very much made-up numbers, the cognitive load experienced by Student A is significantly greater than Student B’s, and would undoubtedly affect performance in the exam. The student’s knowledge would have to fight hard to break through the pressure.
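Summing the columns of the final table makes the gap concrete. A quick tally, using only the post’s own illustrative (made-up) figures:

```python
# Illustrative cognitive-load percentages from the tables above.
# These are the post's made-up example figures, not measured data.
load = {
    "Before beginning exam": (20, 20),
    "Exam layout":           (5, 0),
    "Exam content":          (30, 0),
    "Exam timing training":  (20, 0),
    "Editing responses":     (5, 0),
    "Being professional":    (10, 5),
}

total_a = sum(a for a, _ in load.values())  # Student A - no training
total_b = sum(b for _, b in load.values())  # Student B - training

print(f"Student A (no training): {total_a}%")  # 90%
print(f"Student B (training):    {total_b}%")  # 25%
```

On these figures the untrained student carries more than three times the load of the trained one before a single mark is earned.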

BEGIN NOW!

The more you do something the better at it you get, provided of course you’re doing it the right way. Students don’t really get that many opportunities to learn to negotiate the exam environment on their own, especially in the current context of moving to online non-invigilated exams, and so providing them with such training is critical. 


10 WAYS TO ENCOURAGE PARTICIPATION USING ZOOM

Participation is crucial in any learning environment, and a Zoom session is no different. Participation encourages attention, which is a requisite for learning. If students aren’t attending to the content or discussions on offer, they have no chance of encoding that content and then being able to use it at a later time: in other words, learning it. Being skillful in ensuring participation is therefore imperative.

Varying the way students are asked to participate is a powerful way to encourage engagement. Zoom can encourage participation in several different modes, which sometimes is not possible in a regular face to face session. Here’s how a teacher/tutor can engage students in a Zoom session:

  • Immediate quiz/questions
  • Explaining your method
  • Non-verbal feedback
  • Verbal questions
  • Written questions
  • Polls/quizzes
  • Breakout rooms
  • Screen sharing
  • Using the whiteboard
  • Modifying content

1. IMMEDIATE QUIZ/QUESTIONS

Because of the way our memories function, recapping content from previous sessions is essential to help knowledge move into long-term memory, where it can then be recalled automatically to assist in processing new information. Students who arrive on time to your Zoom session should immediately be put to work, either doing a three or four question quiz on previous learning, or producing short answers to a question or two, shared from your screen. This does two things: firstly, it activates prior knowledge that will assist in today’s learning, and secondly, it gets students involved straight away. Latecomers also won’t miss the new content. The answers are briefly discussed and then the current session begins with students’ minds active.

2. EXPLAINING YOUR METHOD

By articulating the strategies you will employ in the session up front you are likely to alleviate students’ anxieties about some of the processes they’ll experience during the session, and therefore encourage participation. Explaining why you are repeating questions, why you are talking about things from previous sessions, why you are asking for different types of responses and feedback, why you are insisting everyone responds before you move on, why you are using polls and why you are so keen on student participation and its effect on learning will help students feel more comfortable during the session and feel more able to participate.

3. NON-VERBAL FEEDBACK

You will have to turn on non-verbal feedback in the settings.

Getting students to indicate yes, no, or a thumbs up encourages participation. While you can’t guarantee that such assessment for learning truly proves students have understood your question – they could just be guessing, or responding to avoid being asked why they haven’t – it still gets students involved. Even a student who answers only to avoid a follow-up question is still actively listening, which is a condition of learning. Varying the type of questions can also generate some humour and fun in a session – asking if students are breathing, or if they know that Liverpool football club is the best team in the world, for example. Non-verbal feedback is best used in triangulation with other assessment for learning options, such as verbal questions:

4. VERBAL QUESTIONS

Effective questioning is a powerful way to assess for learning and guarantee participation. The key to effective questioning is to ask, wait for students to process the question, and then check a number of answers before saying if the answers are right or wrong. Repeat the questions at least 3 times during the processing stage. Keeping the questions ‘alive’ is important to encourage participation because as soon as you provide an answer the majority of students will stop thinking about the answer – they have no need to keep thinking: allowing time for students to think about the answer gets the retrieval process activated as they search their minds for connections to previously encoded information. By randomly choosing students to answer you not only get a sense of their levels of understanding which allows you to pivot the next sequence if necessary, but it also keeps students on their toes as they realise that they may be called on next. This random selection of students will even work in a very large tutorial.
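The random selection described above works best when it is also fair: no student is picked twice before everyone has been picked once. A minimal sketch of such a reshuffling picker (not tied to any Zoom feature; the student names are hypothetical):

```python
import random

def cold_call_picker(students):
    """Yield student names in random order, reshuffling once everyone
    has been called on, so nobody is picked twice before all are picked once."""
    pool = []
    while True:
        if not pool:
            pool = list(students)
            random.shuffle(pool)
        yield pool.pop()

picker = cold_call_picker(["Ana", "Ben", "Chen", "Dev"])
first_round = [next(picker) for _ in range(4)]  # each student exactly once
```

Because the pool is only reshuffled when empty, every student is guaranteed a turn per round, yet the order stays unpredictable enough to keep everyone on their toes.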

Sometimes it’s the little things. Be aware that you might naturally tend to favour interacting with those you can see in the session. Those without their cameras on, as in the image below, may not get asked as many questions, so an awareness of this and conscious questioning of unseen students will encourage a broad participation in the session.

5. WRITTEN QUESTIONS

Using the chat to elicit answers and check for learning encourages participation. It is a variation on simply listening and answering verbally. Having students write down an answer shows whether they know the content. Dedicating time in a session to this process not only varies the type of participation but can be a great indicator that students have the required knowledge to continue. Opening up the chat for student-to-student interactions also encourages participation, as some will answer questions and feel empowered in the process, and some will just enjoy the interactions. It is important, though, that the chat area is monitored, as it can lead to the wrong kind of participation – like students chatting in the classroom/lecture theatre and not paying attention to the content. You can’t write/read and listen at the same time. I write about that here.

6. POLLS/QUIZZES

Using the poll function in Zoom is easy. You have to ensure it is turned on in the settings.

Once you’ve designed your questions, preferably before the session, you can then launch the poll.

Students then participate by responding. You then share the results, which at this point are anonymous, with the whole group. This serves as an assessment for learning opportunity, and you can pivot the session based on the answers if necessary. In answering the questions, students’ minds are activated as they search for knowledge in their schemata. There is an art to designing effective polls and multiple choice questions, and I discuss that art form here.  

A Canvas quiz can also be incorporated into the Zoom session. The advantage is that it offers a variety of question types that further encourage participation. There are many other apps too, such as Quizizz, Kahoot, and Mentimeter, but these should be used with caution if not supported by your institution, as students may not want to sign up for platforms that essentially require them to surrender their data.

7. BREAKOUT ROOMS

Sending students into groups to discuss a concept or problem is a fantastic way to encourage participation. Homogeneous groups tend to work best, because those with vastly different levels of developed schema tend not to engage with each other as well as those with closer skill levels. It can sometimes benefit the more knowledgeable student to help a peer, but this relies on effective teaching skills to work, and in reality that is a big ask of a student. So setting groups up before a session may be your best bet.

Providing guidance on what to do when students are in the session is crucial, and it is worth popping in to each group to see how it is progressing. As Tim Klapdor, an online expert at Adelaide University suggests, ‘Encourage discussion by promoting the students’ voice. Use provocation as a tool for discussion. Ask the students to explain and expand on concepts, existing understanding and their opinions on topics. Get students to add to one another’s contributions by threading responses from different students. Promote a sense of community by establishing open lines of communication through positive individual contributions.’ Attributing a member of the group to be a scribe is also worth doing, so that when the group returns to the main session they are able to share their screen and discuss their work/findings/solutions etc.

8. SCREEN SHARING

Getting students to share their screen encourages participation. This is especially effective coming out of a breakout room, but can be used at any point in a session. A student may be asked to demonstrate their workings of a problem, an answer to an essay question etc and the tutor can use it as a model to provide feedback. Of course caution would be used here, and only positive/constructive feedback provided.

9. USING THE WHITEBOARD

Sharing the whiteboard and getting students to interact with the content you or they put on there is a great way to encourage participation. You could model your thinking process in this medium and explain or annotate examples to discuss how students could gain a better understanding of the content. You could also have students annotate the board, asking them to underline key words, complete equations etc. Getting multiple students to add their own annotations is probably more beneficial with smaller groups, such as in the breakout rooms. Unfortunately in Zoom you can’t paste an image on the whiteboard, only text.

10. MODIFYING CONTENT

I firmly believe that only a very small percentage of students will be genuinely unwilling to participate in this medium. Such students could be expected to use the chat option and ‘send to the host’ only, for example, to ensure they are still participating. If you have tried all of the above strategies and your students are still not really getting involved, it is likely that they just don’t know the answers. As humans, we naturally want to succeed, and non-participation may indicate that you need to strip things back a bit and return to some foundational knowledge. It doesn’t matter what you think students should know; what matters is what they actually do know, and the relevant development of their schema. It is better to facilitate the construction of knowledge and provide questions that students will know the answers to, so they can build up their confidence in participating. By doing this, you will slowly but surely build their schemata so they will want to get involved consistently.

Online participation is essential for a session to be effective. If you have other tips and advice on how to encourage participation, please let me know and I’ll add them to the list.


ASSESSMENT IN HE: pt 9 – Is a proctored/invigilated online exam the only answer?

This is the 9th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.

There are numerous tropes that aptly apply to the current context in higher education: necessity is the mother of invention, through adversity comes innovation, it’s the survival of the fittest, and all that. Our current adversity renders traditional invigilated exams impossible, and certainly requires us to be innovative to solve the dilemma, but instead of simply looking for technology to innovatively recreate what we have always done, maybe it’s time to think differently about how we design examination in the first place.

REFLECTION

Exams are summative assessments. They attempt to test a domain of knowledge and be the most equitable means of delivering an inference to stakeholders of what a student understands about that domain. They are certainly not the perfect assessment measure, as Koretz asserts here (conveyed in a blog by Daisy Christodoulou), but because they are standardised, and invigilated, they can and do serve a useful purpose.

Cheating is obviously easier in an online context and potentially renders the results of an exam invalid. Online proctoring companies, currently vigorously rubbing their hands together to the background sound of ka-ching ka-ching, certainly mitigate some of these possibilities, with levels of virtual invigilation varying from locking screens to using webcams to monitor movements during assessment. Timed-release exams also help to reduce plagiarism, because students have a limited amount of time to source other resources to complete the test, which inevitably self-penalizes them. I discuss this here. But the reality is that, despite such measures, there is no way to completely eliminate willful deceit in online exams.

So, do we cut our losses and become resigned to the fact that cheating is inevitable and that despite employing online proctoring that some will still manage to do the wrong thing? I’m not sure that’s acceptable, so I think it’s worth considering that if we design summative assessment differently, the need for online proctoring may be redundant.

WHAT DO YOU WANT TO TEST IN AN EXAM?

Do you want to see how much a student can recall of the domain, or do you want to test how they can apply this knowledge? If you want to test recall, then proctoring is a necessity, as answers will be mostly identical in all correct student responses. But should that be what an exam tests?

Few would argue that the aspiration of education is to set the students up in the course to be able to now apply their knowledge to new contexts. By designing a sequence of learning that incrementally delivers key content to students through the use of examples that help shape mental models of ‘how to do things’, and by continuously facilitating the retrieval of that knowledge to strengthen the capacity of students’ memory throughout the course (after all, understanding is memory in disguise – Willingham), we would have supported the development of their schema. This development enables students to use what’s contained in the schema to transfer knowledge and solve new problems, potentially in creative ways.

So exams needn’t be of the recall variety. They can test the application of knowledge.

Whilst we can’t expect the application of that knowledge to be too far removed from its present context (see discussion below), a well designed exam, particularly one requiring written expression, would generate idiosyncratic answers that could then be cross-checked with Turnitin to determine integrity.

In this way, timed exams in certain courses* could effectively be open book, eliminating a large component of the invigilator’s role. This may seem counter-intuitive, but the reality is that even if a student can simply access facts they haven’t committed to memory, they are still unlikely to be able to produce a strong answer to a new problem. Their understanding of the content is limited simply because they haven’t spent enough time connecting it to previous knowledge, which is what generates eventual understanding. Such students will spend most of their working memory’s capacity trying to solve the problem, and invariably, in a timed exam, self-penalize in the process. It’s like being given all the words of a new language and being asked to speak it in the next minute. It’s impossible.

In order to successfully use the internet – or any other reference tool – you have to know enough about the topics you’re researching to make sense of the information you find.

David Didau

4 REQUISITES OF A WELL DESIGNED OPEN EXAM

  1. Students have relevant schema
  2. Students have practised applying it to new near contexts
  3. Exam questions seek near transfer of knowledge
  4. Exam is timed and made available at a specific time interval – see here

I have just discussed the importance of schema, but if we want students to be able to apply that knowledge to new contexts, we have to model and train them in doing so. This may seem obvious, but curricula are usually so crammed that educators often don’t have time to teach the application of knowledge. Or, as an ostensible antidote, some educators have fallen for the lure of problem-based or inquiry learning, where students are thrown in at the deep end and expected, without sufficient schema, to solve complex problems. Such an approach doesn’t result in efficient learning, and often favours those with stronger cultural literacy, thus exacerbating the Matthew Effect. The ideal, then, is to support the development of a substantial schema, allow space in the curriculum to help students learn how to apply that knowledge… and then test it in an open book exam.

The third requisite is the design of the exam questions. A strong design has to ensure that the expected transfer of knowledge is not too ‘far’, and is in fact closer to ‘near’ transfer. We often extol education’s aspiration of transferring knowledge into new contexts, but the reality of this may render us less optimistic. The Wason experiments illustrate this well, suggesting that our knowledge is really quite specific, and that what we know about problem solving in one topic is not necessarily transferable to others. If you don’t believe me, try the experiment below, and click on the link above to see the answers.

Lots and lots of very smart people get this task wrong. What the experiment shows us is that success depends not on how smart we are at solving problems, but on how much practice we’ve had with problems of that type. So designing appropriate questions in an exam is crucial if we want the results to provide strong inferences about our students’ learning.
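For readers unfamiliar with it, the classic card version of the Wason task is easy to express as a short program. This sketch uses the standard four-card setup (E, K, 4, 7) and the rule “if a card shows a vowel on one side, the other side shows an even number”, and computes which cards logically need turning:

```python
def must_flip(face: str) -> bool:
    """True if the card showing this face could falsify the rule
    'if a vowel is on one side, an even number is on the other'."""
    if face.isalpha():
        # A vowel can falsify the rule if its hidden number is odd.
        return face.lower() in "aeiou"
    # An odd number can falsify the rule if its hidden letter is a vowel;
    # an even number can never falsify it, so it needn't be turned.
    return int(face) % 2 == 1

cards = ["E", "K", "4", "7"]
print([card for card in cards if must_flip(card)])  # → ['E', '7']
```

Most people turn E and 4; the logically required pair is E and 7, because only a vowel paired with an odd number can break the rule – which is exactly the kind of unpractised reasoning that trips up even very smart solvers.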

CRITICISMS OF OPEN BOOK EXAMS

A criticism of open book exams is that students are lulled into a false sense of security and fail to study enough for the test, believing the answers will be easily accessible from their notes – the fallacy that you can just look it up on Google, as discussed above. However, because we know that most aspects of the domain need to be memorised to support automatic retrieval when engaging in new learning (cognitive load theory), and have thus incorporated retrieval practice into our teaching, the need for a student to actually look up information will be quite low.

EXPOSURE TO OPEN BOOK ASSESSMENT IS CRITICAL

Like any learnt skill, you have to build the knowledge associated with it and then practise until perfect. Never assume that knowing how to function in an open book exam is a given. It is important to train students in how to prepare for such an exam: helping them learn to summarise their notes to reflect key concepts, to organise their notes so they can be easily used in the exam, and to plan answers before committing them to writing.

A PEDAGOGICAL UPSHOT

As mentioned previously, the need for students to memorise key facts is an essential aspect of the learning journey, but summative exams sometimes focus on this type of knowledge too much, or worse, expect transfer of that knowledge without providing the necessary practice in doing so. The upshot of open book exams is that they require not only that students have sufficient knowledge, but also sufficient practice in applying it, and so the open book exam becomes a paragon of good teaching.

*online open book exams may not be so easy in mathematics and other equation-based courses that require identical solutions.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

ASSESSMENT IN HE: pt 8 – mitigating cheating

This is the 8th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.

LOOKING TO MINIMISE PLAGIARISM IN AN ONLINE ASSESSMENT?

When setting an online assessment, the fear of plagiarism is strong, despite the reality that the amount of online cheating doesn’t seem to differ from the amount of cheating in face to face settings. But we still want to avoid it as much as possible. So, how can we ensure that students are submitting their own work?

  1. Be explicit about the damage plagiarism does. There is a lot of information for students about plagiarism and how they can avoid it here. Similarly, there is a lot of information for staff here, including an overview of using Turnitin here.
  2. Design assignments that build in difficulty incrementally. Supporting the building of their knowledge base will facilitate student success in assignments. Once motivation and schemata are established, students’ perceptions of assignments will change. I write about the way to avoid online proctoring here.
  3. USE TECH: set the assessment in Canvas for a specific time and use question banks and formula randomisation.

By setting it for a specific time (see below for how to do this), you prevent students seeing the assessment before it goes ‘live’. The opportunity for exchanging information with others is reduced, as is the ability to source answers from the internet. Of course, students may still chat with each other during the assessment window, but this practice will tend to self-penalize as their time to complete the assessment will be shorter having spent valuable time conferring with others.

The design of the assessment then is critical – if you overestimate the time it should take, you will open up time for conferring. It may be better to set shorter assessments that students will only complete in the given time if they know the content. If you take this path, it is important to explicitly tell the students that the assessment is difficult in terms of time – an unsuccessful student tends to give up more easily if there appears to be a randomness to achievement.

HOW TO SET AN ASSESSMENT FOR A SPECIFIED TIME

STEP 1 – add an assignment and choose Turnitin as the submission type (for heavy text-based assignments). Select “External Tool” and then find “Turnitin”

STEP 2 – choose the relevant time to make the test open to students.
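For staff who prefer scripting to the web interface, the same timed window can be set through Canvas’s REST quiz API, where `unlock_at`, `lock_at` and `time_limit` bound when, and for how long, students can attempt the quiz. A minimal sketch in Python – the course ID, base URL and token are placeholders, and the field names follow the classic Canvas quiz endpoint:

```python
from datetime import datetime, timedelta, timezone

def timed_quiz_payload(title, opens_at, minutes):
    """Build Canvas quiz parameters for a fixed assessment window:
    students can only start between unlock_at and lock_at, and each
    attempt is capped at `minutes`."""
    closes_at = opens_at + timedelta(minutes=minutes)
    return {
        "quiz[title]": title,
        "quiz[quiz_type]": "assignment",
        "quiz[unlock_at]": opens_at.isoformat(),
        "quiz[lock_at]": closes_at.isoformat(),
        "quiz[time_limit]": minutes,      # minutes per attempt
        "quiz[shuffle_answers]": True,    # vary answer order between students
    }

opens = datetime(2021, 6, 14, 9, 0, tzinfo=timezone.utc)
payload = timed_quiz_payload("Week 6 open book exam", opens, 45)
# Creating the quiz would then be a single authenticated POST, e.g.
# requests.post(f"{base}/api/v1/courses/{course_id}/quizzes",
#               headers={"Authorization": f"Bearer {token}"}, data=payload)
print(payload["quiz[unlock_at]"], payload["quiz[lock_at]"])
```

Setting the lock time to open time plus the time limit keeps the window tight, which, as argued above, is itself a deterrent to conferring.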


USING CANVAS QUESTION BANKS

Question banks help to randomise the questions a student receives. If you have 10 questions in the bank and only assign 6 to the exam, you reduce the chances that two students receive the same questions. A student trying to cheat will soon realise that their questions are different from their friends’. Of course, not all of them will be, but a student who sees that several don’t match is less likely to bother, as it takes too long to work out which questions match and which don’t.
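How much overlap to expect can be estimated directly. A quick simulation (a sketch, using the 10-question bank drawing 6 from the example above) shows that two students share on average 6 × 6 ⁄ 10 = 3.6 questions, so a decent share of any student’s questions will usually differ from a friend’s:

```python
import random

def average_shared(bank=10, drawn=6, trials=20000, seed=1):
    """Estimate how many questions two students share when each is
    dealt `drawn` questions sampled at random from a bank of `bank`."""
    rng = random.Random(seed)
    shared = 0
    for _ in range(trials):
        student_a = set(rng.sample(range(bank), drawn))
        student_b = set(rng.sample(range(bank), drawn))
        shared += len(student_a & student_b)
    return shared / trials

print(round(average_shared(), 2))  # close to the analytic value of 3.6
```

Enlarging the bank relative to the number of questions drawn pushes the expected overlap down further, which is the design lever this section is describing.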

USING CANVAS FORMULA QUESTIONS

I will shortly post here a video demonstrating the fantastic application of the formula question in Canvas, a question type that essentially allows you to vary the numbers in a question so that multiple versions can be generated. In practice this means each student receives a different question of the same difficulty level, keeping the assessment valid and equitable. So if John decides to call up Mary during the assessment and ask what she got for question 5, it will be pointless, as Mary has a different question – the answers simply won’t match.
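In Canvas the variants are defined through the formula question editor, but the principle is easy to illustrate in code. This sketch (the speed question is an invented example, not a Canvas feature) generates a different but equally difficult question per random seed:

```python
import random

def formula_variant(rng):
    """Generate one variant of a formula question: the variables change
    per student, but the structure and difficulty stay constant."""
    distance = rng.randrange(120, 361, 20)   # km, in steps of 20
    hours = rng.choice([2, 3, 4, 5])
    question = (f"A car travels {distance} km in {hours} hours. "
                f"What is its average speed in km/h?")
    return question, distance / hours

# Each student (seed) gets their own numbers and therefore their own answer.
for student_id in (1, 2):
    question, answer = formula_variant(random.Random(student_id))
    print(question)
```

Keeping the variable ranges narrow is what preserves equity: every variant exercises the same procedure at the same difficulty, only the arithmetic differs.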

FINAL THOUGHTS

Everyone likes to succeed. This is why some students plagiarise. Careful design of assessment that incrementally builds student knowledge and confidence will TEACH students to get better at assessment. This, together with explicit discussions about it, will help many students steer clear of plagiarism.

In the next post I will discuss how modified online examinations shouldn’t necessarily try to completely emulate traditional examinations using technology.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

ZOOM BOOM! Maximising virtual lessons

Using a virtual platform requires as much planning, preparation and expectation as a regular lesson. Of course there are differences to a face to face context, but like any good learning sequence, being aware of pedagogical principles will ensure the session is an active, useful learning experience.

HOSTING A SUCCESSFUL ZOOM SESSION REQUIRES 3 ESSENTIAL ELEMENTS:

  • knowing the tech
  • preparing the students and the session
  • managing the session

Knowing the tech

At Adelaide University we have developed a range of resources that will take an academic from the basics of downloading Zoom to proficiently placing students into virtual breakout classrooms here. I know many other universities also have good resources, like this one from UQ. We recommend the following:

  • Set yourself small goals in mastering one aspect of the tool at a time.
  • Practise amongst your peers and learn about the functionality of the platform.
  • Perhaps the most important thing to remember is that your skill with the tech will improve considerably with practice, and that what may seem overwhelming now will soon be an automatic part of your teaching.

Preparing the students and the session

  • Students:
    • Make sure the students understand the tech.
    • Provide clear and explicit instructions on how to download and use the tool – we have developed these already.
    • Provide clear and explicit expectations about participation and etiquette.

In the end, the online session is still a classroom, and the behaviours for learning you would expect in a classroom to maximise learning are the ones you should expect and demand in a virtual setting. As soon as your expectations drop because you aren’t confident that the setting can produce learning, you’ll lose student engagement.

  • The session: it is imperative that you are clear about the objectives of the session. Is the goal to teach a new idea, check for understanding, correct misconceptions, extend thinking, or simply to practise and consolidate existing knowledge? When used in conjunction with a recorded lecture in Echo360, or a pre-loaded or flipped activity in a discussion board, the Zoom tutorial is often used to check for understanding. Have clearly sectioned elements to the tutorial:
    • a recap of the last session (an introductory retrieval quiz is best)
    • a modelled example to introduce the desired content
    • opportunity for students to demonstrate their understanding
    • opportunity for students to ask questions
    • opportunity to practise

Managing the session

Always remember the session is an opportunity for learning, and what you would do in a regular learning context is what has to be applied here too.

  • Start on time – have students log in 5 minutes before the start so you are not waiting for stragglers, or being interrupted once the tutorial begins by having to add them manually to the session. The waiting room can have the session rules attached as seen above.
  • As soon as the session begins, have students complete a recap quiz – this also provides something for punctual students to do whilst you’re waiting for others to join. Retrieval is everything in learning!
  • Go through answers briefly
  • Discuss the expectations and rules of engagement of the current session. Repeat these many times over lots of sessions, so the process eventually becomes automatic for students.
  • Be friendly and encouraging – and patient whilst students become familiar with the process
  • Go through an example similar in difficulty to the pre-loaded activity as a warm up, narrating your workings. See here for more on the power of worked examples.
  • Present the pre-loaded activity
  • Check for understanding
    1. By asking questions: don’t take one or two student responses as an indication of the whole group’s understanding. See here for how to ask the right questions.
    2. By getting students to upload or show their learning,
  • Use at least 2 student examples to provide feedback – discussing their strengths and weaknesses will be another teaching moment
  • Present another activity of analogous difficulty to strengthen understanding. Consider breaking the cohort into homogeneous groups, having them discuss the problem and present a consensus back to the main cohort’s discussion page.
  • Present a final activity that is harder

Successful Zoom sessions offer a unique opportunity to check for understanding or to extend student knowledge. They also offer an opportunity to place yourself in the shoes of the learner, who is constantly introduced to new content and problems and may feel overwhelmed in the process. The more conscious you are of helping students manage cognitive load when introducing new material, the better you will design and sequence that learning. Concomitant with that is articulating your method and helping students become stronger at understanding the metacognitive process.

Mastering Zoom will take practice, but so did everything else you ever learned.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger