the manifestation of cognitive overload

ATC -- Frazzled Cartoon Lady

The reactions people have to cognitive overload are varied. Some get angry, some withdraw, and some land somewhere in between. What is common to all who experience it, though, is feeling overwhelmed, uncomfortable, frustrated and sometimes worthless. Imposter syndrome can be common.

Understandably, it’s an experience we want to avoid. It can be exhausting.

How students handle it is largely determined by their temperament, which is affected by a multitude of factors. The more obvious reactions are the extremes: poor behaviour, lashing out, belligerence and complete withdrawal. Of course, poor behaviour and withdrawal are more complex than just cognitive overload, but at times it is certainly a factor, and I find it strange that little to no conversation ever discusses improving behaviour in the same way that we endorse the elimination of academic cognitive overload – through incrementally improving cognitive skills. Inculcating new behaviours surely needs the same level of design and commitment?

Perhaps less obvious is cognitive overload in students who externally give few clues that they are experiencing it. Perhaps they are not reacting because of compliance with the school's rules or respect for authority, or because they don't want to be seen as not understanding what is being taught; peer pressure is huge in all education sectors. Perhaps they are having a difficult time outside of the classroom, most certainly a factor affecting higher education students who may have lost their employment during COVID. Needless to say, cognitive overload reduces learning.

A practical and relatively simple way to mitigate excessive cognitive load is to design learning sequences that focus on building schema and that include plenty of formative assessment to check learning. Good communication with students also allows you to gauge how they are feeling about their learning, and this can be an extremely useful form of formative assessment too.

It’s not just students who feel it

It’s certainly not just students who experience it. Any time you are under pressure in a new situation you are likely to experience it to some degree as your mind grapples with the new content and searches for relevant schema to connect it to: the more the pressure and the fewer the connections, the greater the load. You are likely to experience it when you attend a conference where presentations don’t adhere to multimedia principles, you are likely to experience it in a meeting when you don’t have the relevant background knowledge on the topic being discussed, and you are likely to experience it when you yourself are presenting or teaching and you don’t fully understand or believe in what you are discussing. Of course, the most obvious analogy is when your practice is being observed. All of these examples are times when you are effectively a student, a novice learner. As an educator, it is important that you reflect on the feeling of cognitive overload and how easily it can occur, and use that knowledge to consider how you design and shape the learning experiences of your students so they experience it less, and learn more.

I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow me on Twitter @edmerger

“ATC — Frazzled Cartoon Lady” by campbelj45ca is licensed under CC BY-SA 2.0

chunking lectures – it’s a bit of a no-brainer

Breaking a lecture up into distinct chunks or sections is a bit of a no-brainer. It is all to do with understanding the implications of cognitive load theory, specifically that the brain can only process a small amount of new information at once. Presenting more information than the brain’s architecture can handle leads to overloading the working memory, and usually a significant decrease in learning.

Breaking your lecture into chunks gives students a chance to process each chunk before new material is presented. Designing opportunities for students to be active in processing the content (the student activity segments in the model below) also assists understanding, and the content’s eventual transfer into long-term memory.

So, here’s a possible live-streamed lecture design that considers cognitive load implications and the need for the student to be active in their learning, and that is very manageable for the lecturer. The model can be applied to both live and recorded lectures, but the recorded lecture needs some more specific context, which I will discuss in another post.

I’ve talked before about the possible mixed-mode future of live lecturing, with live streams able to facilitate breakout rooms. The model below considers this as a possibility.

lesson segment: intro
rationale: The lesson begins with a retrieval quiz. The benefit of retrieval is enormous: it strengthens the memory of key ideas and content so that knowledge can be brought to cognition automatically when new learning is presented, without taxing the working memory. The more knowledge the student can draw from, the greater the opportunity to delve into higher order independent learning, so building students’ schemata through retrieval is a bit of a no-brainer. The lecturer then places answers on the screen, and spends 2-3 minutes explaining answers if common errors were made.
tech to assist: Polls; Echo360; Mentimeter; Quizizz; Canvas quiz.

lesson segment: teaching (10-12 min)
rationale: Delivering content. Incremental building towards application is a bit of a no-brainer. The lecturer is conscious of the need to present content clearly and simply, very much aware of the multimedia principles that promote the efficient encoding of new information. They are also aware of the importance of modelling problem solving, and incorporate worked examples into the presentation. Where appropriate, the lecturer connects the new learning to real world applications, not just to make the content relevant, but more so to build the mental patterns and analogies in the students’ schemata. The lecturer also frequently explains why decisions in the teaching are being made, so as to strengthen the students’ metacognition.
tech to assist: PPT slides; document camera. Students can take notes in Echo, can raise a confusion flag, and can ask a question at a precise point in the live stream.

lesson segment: student activity (strengthening understanding)
rationale: This provides students a chance to take in what has just been presented, and to think about the concepts before they are presented with more content. Essentially the student is trying to convert the abstract to the concrete. Providing students with the opportunity to complete worked examples, practise solving similarly structured problems, or discuss possible analogies to the content with a peer is valuable at this point in the lecture, and is a bit of a no-brainer.
tech to assist: Breakout rooms; Mentimeter open question; Echo discussions; Canvas discussions; GoFormative.

lesson segment: teaching (10-12 min)
rationale: Discussion of the last task if necessary – it may not be if students were practising or completing examples – then delivering content, with the same rationale and approach as the first teaching segment.
tech to assist: As in the first teaching segment.

lesson segment: formative assessment (checking for learning)
rationale: A quiz or short-answer opportunity to see whether what you have presented so far has been understood is a bit of a no-brainer. The questions also provide another opportunity for the student to process the content and develop a better understanding.
tech to assist: Questions up on screen; Zoom polling; Canvas discussions as a student answer repository; Mentimeter; Quizizz.

lesson segment: teaching (10-12 min)
rationale: Check the answers first – you may need to pivot the lecture if misconceptions are still prevalent – then delivering content, with the same rationale and approach as the first teaching segment.
tech to assist: As in the first teaching segment.

lesson segment: student activity (strengthening understanding)
rationale: As in the first student activity segment: a chance to process what has just been presented, convert the abstract to the concrete, and complete worked examples, practise similarly structured problems, or discuss possible analogies with a peer.
tech to assist: Breakout rooms; Mentimeter open question; Echo discussions; Canvas discussions; GoFormative.

lesson segment: summary
rationale: Recapping key ideas and tying the lecture together: linking it to previous learning and real world contexts. Discussion and questions asking students to link their learning are a great way to draw attention to the key concepts again, and are a bit of a no-brainer.
tech to assist: Mentimeter open-ended question.

I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow me on Twitter @edmerger

YOGA, MENTAL HEALTH, AND THE LINK TO LEARNING

Life can be hard. Really hard. There are practically unlimited ways that one can become stressed, and I think I must have absorbed most of them over the last 25 years.

To cut a long story short, over these years I have dealt with these stresses poorly, absorbing them into my shoulders and neck. Gradually, both became like rocks: stiff, tight and inflexible. But instead of doing anything about it, I let it build and build and build to the point of being in constant physical pain, often resulting in headaches, poor sleep and sadness. I know everything is relative, and that others are experiencing real pain and loss, but it was chronically affecting my entire being.

Friends and family always kept telling me that I needed to stretch to relieve the muscle pain, and that I should go to a physio or massage therapist. I did that a few times, but it only touched the surface, so I would go onto YouTube and search for some free yoga as they suggested. Now, I’ve done yoga before, but every time I did I would injure myself after a couple of sessions. The reason was that I would, unknowingly, choose a session that wasn’t suited to where my body was at. Or, if I got past a few sessions unscathed, I would still be in pain, conclude that it wasn’t working, and stop.

Now, despite these bad experiences, I knew I couldn’t just leave it. I also knew it made sense that the tension would, ironically, have to be reduced by exercising the muscles. Yoga must be the answer.

So I got to thinking, and then it finally occurred to me 3 months ago that doing yoga should be no different from doing anything else as a novice, and that I absolutely needed modelled and scaffolded support to get anything out of this ancient philosophy. I’m not sure why it took me so long to register this, especially since I have written about the imperative of modelling numerous times: here, here, here, and the importance of incremental knowledge building here and here. But anyway, eventually it clicked.

So what did I do?

1. The first thing was for me to accept a different motivation: learning, and not performance. I realised that I had to accept that the years of wrenching myself would not be unwound in a couple of yoga sessions. This meant that I would have to tell myself that the process was an extended one, and that if I was to measure any improvement, I would realistically need to give myself a month of continuous practice to evaluate any effectiveness, but even then, that it would be only small if at all.

2. The second thing was to realise that I needed to be taught by someone who knew what they were doing, who knew that building strength takes time, and who offered a continuum of learning where I could start at the very beginning. I found this in the reasonably priced platform www.glo.com.

3. The third thing was to realise that I needed to choose only two 20-minute classes that offered a chance to work on specific sections of the body (upper and lower), and that I would have to repeat those two classes, alternating between them each day.

4. The fourth thing was to accept that even though some of the beginner class stretches seemed too easy, I was in fact strengthening muscles that I would need to support more difficult poses and less flexible muscles. This reminded me of Engelmann, here and here.

5. The fifth thing was to discipline myself to do the practice each day before work. And there were a lot of times when it would have been so easy not to do it – but I knew it was only 20 minutes, and I knew that I would be glad I had done it – I was able to self-regulate.

So after 90 sessions, where am I?

  • I don’t wake up anymore in more pain than when I went to bed.
  • I still experience pain, but nowhere near as much.
  • I am still prone to headaches, but the frequency of them is significantly reduced.
  • I am a lot more flexible and have noticed that I feel better in my hips as well as my lower back.
  • I feel stronger.
  • I have begun to do some running as a result of this increased liberation.

Without any shadow of a doubt, I am feeling better. My mental health has improved a lot. I am not constantly plagued by a physical pain that shouldn’t even be there. I know that I still have a way to go, but the cool thing is that because things have improved quite a lot, I’m obviously on the right path – and that gives me even more motivation. I know that I can also now start to increase the level of difficulty to strengthen even more…and that’s exciting.

Develop a plan – like a sequence of learning

For me, the greatest realisation was that to strengthen my mental health, I needed to have a plan that allowed me to gradually develop and build my knowledge of a domain that could address it. For me, it was yoga. For you, it might be swimming, running, or cooking. Patience and discipline reign supreme, but planning an incremental curriculum that will help you achieve tangible benefits is more important.

I might check back in with you in another 90 days.

I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow this blog for more stuff about education and follow me on Twitter @edmerger

DO YOU EXPLICITLY DISCUSS MOTIVATION WITH STUDENTS?

This is part 2 of an essay based on self-regulated learning, and whether it needs to be taught for students to become skilled in it. Part 1 is here.

In part 1 I discussed how explicitly teaching and modelling to students how to think with knowledge potentially facilitates students being able to self-regulate such thinking. The proposition has implications for the explicit modelling of thinking critically and creatively. In this post I will expound on Zimmerman and Moylan’s 2009 paper that theorises that motivation is inextricably linked to these metacognitive processes, and just like everything else connected to learning, needs to be explicitly taught to students in equal measure for them to eventually be able to use the knowledge independently.

Zimmerman and Moylan suggest that there are three distinct phases in achieving self-regulation. These can be equated with the EEF’s appropriated terms: planning, monitoring and evaluation. The diagram below represents the cyclical processes of self-regulation.

FORETHOUGHT = PLANNING 

It’s a case of which comes first, the chicken or the egg, but in order for a student to get their learning off the ground, they need to be motivated to do so. Oftentimes in the school sector, this may not be an intrinsic motivation, with extrinsic rewards and punishments tending to dominate the setting. Upon presentation of a new learning activity, a student will process a range of thoughts evaluating whether they should in fact participate in the endeavour. Students immediately process the expectations against any prior experiences or knowledge, drawing on their schemata to ascertain the extent to which they will have to set new goals and strategies to achieve the new learning, whilst probably concomitantly deciding if they have any intrinsic interest in the task. If they arrive at the conclusion that they don’t possess either of these motivators, your work is immediately cut out for you.

Compounding this is the fact that students also naturally draw from their schemata the affective responses they have built over time in dealing with similar types of activities or learning experiences. If this audit brings up negative memories, perhaps emanating from a lack of success, or serious disinterest, then it will heavily impact their motivation to continue. It certainly won’t be the case that ‘if you build it they will come’. A student’s self-efficacy, or belief that they will be able to positively engage in the task, will most certainly affect their planning, strategy and goal-setting capacity. So, besides forcing students to participate, what can be done to break this thought pattern?

METACOGNITION – Make explicit the possible reactions students may have to a new task: ‘You may have had a negative experience with this type of problem before, but this time is different because…’, ‘You may immediately think there’s no relevance to this task, but…’, ‘You may not have achieved the grade you wanted in the last task, but this time we are going to plan the response better…’. By making such reactions explicit, explaining how demotivating factors can arise, and providing explicit strategies that ‘show’ how a different outcome may eventuate, the teacher is training the student to think about the new context in a new way, and preventing poor self-efficacy from inhibiting impetus.

Also crucial to setting up learning is making explicit the goal orientation of the task. Plenty of research suggests that ‘performance’ orientated goal setting, where students’ motivations to learn are primarily centered on comparison and competing against others, is tellingly inferior to having a ‘learning’ goal orientation: here. The positioning of a task’s import as being an opportunity to strengthen personal understanding against personal standards has been shown to facilitate a deepening of learning: ‘In this activity, let’s think about how we can incrementally improve our knowledge of the topic…’, ‘I want you to think about what your level of knowledge is on the topic and set yourself a goal of looking to strengthen it by the time we have finished….’, ‘In this task, we are going to concentrate on mastery…’ However, such ambition is made infinitely more difficult in a system predicated on accountability. Nonetheless, a good teacher will explicitly and inexorably focus their students’ attention on setting goals for self-improvement, and that learning is indeed a continuum that takes time and practice to master. When such purpose is part of the learning culture, once the task is successfully completed the student’s evaluation process then positively feeds into and strengthens the self-efficacy required to engage in a new learning context, regardless of how they fared compared to others in the cohort.  

This personal growth rather than competitive epistemology is particularly relevant if you are trying to encourage students who are working hard but not quite succeeding – and observing others around them achieving – at the beginning of a course. These students not only need explicit discussion of what success means (improvement against your last effort), but precise feedback that articulates what the gaps in knowledge are, and crucially, scaffolded activities that provide the opportunity for observed improvement against the last effort. Mastery pathways not only provide opportunities for incremental success, but also the chance to eventually catch up to the expected standard. Because success is the greatest motivator of all, when those achievements are explicitly labelled for the student, their self-efficacy will become more positive.

PERFORMANCE = MONITORING 

During the task, drawing students’ attention to how they are solving problems, the progress they are making, and the motivation required to do so will facilitate the eventual automaticity of such thinking. Modelling self-questioning and verbalising thinking processes whilst scaffolding learning through worked and completion examples builds the schema of such processes in students’ minds, and teaching students how to manage time and set up an appropriate learning space should never be treated as assumed knowledge. Providing as many opportunities as necessary to facilitate a culture where the student can control these learning strategies and can readily select the most appropriate tools to negotiate the context they find themselves in should be an ingrained aspect of a teacher’s curriculum design. When students feel such control over the strategies they employ to negotiate the present task, their motivation and self-efficacy will be strong.

The explicit drawing of attention to higher order thinking processes during the task goes towards developing the schema for doing so in future, independent contexts. As argued in part 1, assuming students will engage in higher order thinking once knowledge is sufficiently acquired is not a good idea, as students may not do this unless they are highly motivated in the discipline or topic in question. Prompt with questions like ‘So if we know this about …, what would happen if …?’, ‘What is the connection of this idea to the topic we looked at last week?’, ‘What would happen if we combined these 2 ideas?’, ‘So imagine this scenario…, how would you solve the problem at hand?’ If you model this thinking, students will use the model as a strategy when asked to think about knowledge in new contexts, and being able to do so will boost their confidence in engaging with knowledge in interesting ways. This confidence develops self-efficacy, and thus motivation.

SELF-REFLECTION = EVALUATING 

From my experience, one of the most difficult things to do is to get students to reflect on their performance and planning after the event. This is especially difficult if the student entered the transaction with a performance goal orientation and wasn’t overly successful. The immediate deflation is palpable. Explicitly discussing this with the students is important at this very moment. But perhaps most importantly, understanding the causal attributions some students may have applied to their success or failure is necessary to ensure that they are able to benefit from the evaluation.

Many students attribute their experience to fixed ability, which is particularly detrimental if they engaged in the activity with a performance goal and didn’t succeed. The comparison against others, which essentially results in a defeat if unsuccessful, solidifies a negative self-efficacy, which in turn has a negative influence on the planning stage of the next learning moment. If, however, the student can be persuaded that learning is a continuum and that their ability in the task is not fixed and can in fact be improved by the application of effort, practice and good revision and study techniques, then the probability of their motivation being secure for the next task is high.

Unfortunately, over time and repeated negative experiences in learning environments, some students develop entrenched negative evaluations that seriously inhibit motivation to continue or engage in future learning contexts. Procrastination may be a milder symptom of such a state, but more serious and damaging is learned helplessness, a defence mechanism that prevents a student from trying because they believe there is nothing they can do to change an inevitable failure. Often, such a state becomes an unconscious default, and can only be changed by carefully designed, scaffolded learning opportunities that promote success, as well as by making the psychological context explicit. Of course it is time-consuming, but a well-constructed audit of a student’s performance, including how they approached and revised for the task, will likely find a host of issues that could be rectified. A checklist may help students evaluate their performance in a task, and explicit discussion of how much neglecting each element on the list affects the outcome could act as a motivator for a student to alter their preconceived belief that they aren’t in control of changing their learning potential.

TAKE AWAY

Teaching students about motivation and how past experiences affect the present, and helping students identify patterns of behaviour, their ‘real’ causes and how they can be adjusted is as imperative as teaching them content. Making thinking explicit can go a long way to positively affect how a student perceives a task and their ability to process, engage with, and succeed in it. The result is that students will willingly drink from the water you have led them to.    

The next post will discuss how beneficial it can be for students to understand how learning actually happens.

I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow me on Twitter @edmerger

WHY MAFS IS A GOOD MODEL FOR EDUCATION

It might seem like a very tenuous link, but there is actually a strong parallel between the way the show is edited and the way educators should approach the delivery of their courses.

To maintain the intensity of the central driver of the show – the emotional connections the audience make with the actors* – the editors continuously replay certain scenes tied to a theme or storyline they believe will generate the maximum reaction from the audience. Either deliberately or intuitively, by frequently recalling key content, the producers facilitate the audience’s retrieval of that content, which in turn strengthens their propensity to remember it. Being able to remember what has happened is critical for the audience to connect their feelings to the new drama, and to maintain the emotional intensity required for the show to be successful.

The show runs for nearly an hour on television or on demand, but the amount of ‘fresh’ material in each episode amounts to about a third of the overall content. The show is unconscionably peppered with adverts, sometimes inserted after just 3 minutes of viewing and lasting almost as long, but upon returning from the break, the show unfailingly recaps what happened just before the advertisements. The editors cleverly build drama before each advert break, and by replaying the intense moment upon returning, the audience’s memory of their pre-advert reaction is resurrected, strengthened, and can now be exploited to react to the next adventure presented. The editors also replay scenes from several shows ago to jog the audience’s memories of those events. This not only strengthens the memories of those episodic events, but crucially allows the producers to precisely position the audience’s emotional reaction as they structure and direct the connections between the scenes for them.

This continuous recapping of key content is how education works best. When new content is presented, the skilled tutor realises that in order for that content to become cemented in the learner’s memory it needs to be retrieved on several occasions, and over time. The learning needs to become part of long-term memory so that it can be drawn on when new content is introduced. This stems from the way our brains learn. Students construct new knowledge by making connections between new ideas and existing mental models, and then building on them. The ease with which the learner can recall these newly constructed understandings affects the load on the working memory, with automatic recall allowing the learner to make newer connections with comparative ease. Nuthall suggests that learners need at least 3 exposures to a concept before they have any hope of moving it into their long-term memories. By replaying key concepts many times, the learner’s construction of new content is supported. Again, either deliberately or intuitively, Married At First Sight has mastered this approach.
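
Nuthall’s “at least 3 exposures” idea can be sketched as a simple expanding review schedule. This is purely illustrative: the function name and the interval gaps (1, 7 and 21 days) are my assumptions, not values from Nuthall or from this post.

```python
from datetime import date, timedelta

# Hypothetical expanding-interval schedule: after a concept is first
# taught, plan three further exposures at growing gaps (illustrative).
def review_dates(first_taught, gaps_days=(1, 7, 21)):
    return [first_taught + timedelta(days=g) for g in gaps_days]

# A concept taught on 1 March gets revisited on 2, 8 and 22 March.
print(review_dates(date(2021, 3, 1)))
```

A course designer could run each key concept through a schedule like this when deciding which retrieval quiz questions to place in later weeks.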

The imperative of replaying key content to secure future recall, by logic, has implications for how much new content should be introduced at a time. Engelmann believes that the ratio of new content introduced to the practising and recapping of old content should be approximately 20:80. I wonder how many courses are designed to facilitate such recapping? Quite simply, without dedicated opportunities for the old stuff to be practised and recapped over and over again, it is less likely to actually be learned.
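
Engelmann’s rough 20:80 ratio translates directly into session-planning arithmetic. A minimal sketch, in which the function name and the 50-minute session length are illustrative assumptions:

```python
# Split a session's minutes into new content vs practice/recap,
# following the approximate 20:80 ratio mentioned above.
def split_session(total_minutes, new_fraction=0.2):
    new = total_minutes * new_fraction
    return new, total_minutes - new

new_mins, recap_mins = split_session(50)  # e.g. a 50-minute lecture
print(new_mins, recap_mins)  # 10.0 40.0
```

Ten minutes of genuinely new material in a 50-minute lecture feels sparse until you remember that the other forty minutes are what make the ten stick.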

Married At First Sight teaches us absolutely nothing in terms of how to be a good human, but it utilises what is understood about memory, and demonstrates that if you want someone to make connections to previous emotions, you have to recap the scenes that led to those emotions many times. The same is true for educators. If you want a learner to make connections to previously taught key concepts, you have to recap those key moments many times.  

*are they actors? If not professional, surely they are directed by the producers to behave in certain ways and to ask specific questions of each other?

I’m Paul Moss. I manage a team of learning designers. Follow me on Twitter @edmerger

10 WAYS TO ENCOURAGE PARTICIPATION USING ZOOM

Participation is crucial in any learning environment, and a Zoom session is no different. Participation encourages attention, which is a requisite for learning. If students aren’t attending to the content or discussions on offer, they have no chance of encoding that content and then being able to use it at a later time: in other words, learning it. Being skillful in ensuring participation is therefore imperative.

Varying the way students are asked to participate is a powerful way to encourage engagement. Zoom can support participation in several different modes, some of which are not possible in a regular face-to-face session. Here’s how a teacher/tutor can engage students in a Zoom session:

  • Immediate quiz/questions
  • Explaining your method
  • Non-verbal feedback
  • Verbal questions
  • Written questions
  • Polls/quizzes
  • Breakout rooms
  • Screen sharing
  • Using the whiteboard
  • Modifying content

1. IMMEDIATE QUIZ/QUESTIONS

Because of the way our memories function, recapping content from previous sessions is essential to help the knowledge move into long-term memory, where it can then be recalled automatically to assist in processing new information. Students who arrive on time to your Zoom session should immediately be put to work, either doing a 3 or 4 question quiz on previous learning, or producing short answers to a question or two. Both of these are shared from your screen. This does two things: firstly, it activates prior knowledge that will assist in today’s learning, and secondly, it gets the students involved straight away. Latecomers also won’t miss the new content. Answers to the quiz are briefly discussed, and then the current session begins with students’ minds active.

2. EXPLAINING YOUR METHOD

By articulating the strategies you will employ in the session up front you are likely to alleviate students’ anxieties about some of the processes they’ll experience during the session, and therefore encourage participation. Explaining why you are repeating questions, why you are talking about things from previous sessions, why you are asking for different types of responses and feedback, why you are insisting everyone responds before you move on, why you are using polls and why you are so keen on student participation and its effect on learning will help students feel more comfortable during the session and feel more able to participate.

3. NON-VERBAL FEEDBACK

You will first have to turn on NON-VERBAL FEEDBACK in the settings.

Getting students to indicate a yes or no or a thumbs up encourages participation. Whilst you can't guarantee that such assessment for learning truly proves students have understood your question – students could just be guessing, or responding to avoid being asked why they haven't – it still gets students involved. Even a student who answers only to avoid a follow-up question is still actively listening, which is a condition of learning. Varying the type of questions can also generate some humour and fun in a session – asking if students are breathing, or if they know that Liverpool football club is the best team in the world, for example. Non-verbal feedback is best used in triangulation with other assessment for learning options, such as verbal questions:

4. VERBAL QUESTIONS

Effective questioning is a powerful way to assess for learning and guarantee participation. The key to effective questioning is to ask, wait for students to process the question, and then check a number of answers before saying whether they are right or wrong. Repeat the question at least three times during the processing stage. Keeping questions 'alive' is important to encourage participation, because as soon as you provide an answer the majority of students will stop thinking about it – they have no need to keep thinking. Allowing time for students to think about the answer activates the retrieval process as they search their minds for connections to previously encoded information. By randomly choosing students to answer, you not only get a sense of their levels of understanding, which allows you to pivot the next sequence if necessary, but also keep students on their toes as they realise that they may be called on next. This random selection of students will work even in a very large tutorial.
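One simple way to make the random selection fair in a large tutorial is to shuffle the roster once and work through it, so every student is called exactly once before anyone is called twice. This is only a sketch – the function name and roster below are hypothetical, and it assumes you have the participant list as plain names:

```python
import random

def cold_call_order(students):
    """Return a random calling order in which every student
    appears exactly once before anyone is repeated."""
    order = students[:]        # copy, so the original roster is untouched
    random.shuffle(order)      # uniform random permutation
    return order

roster = ["Ada", "Ben", "Cho", "Dev"]
for name in cold_call_order(roster):
    print(name)                # call on each student once, in random order
```

Once the list is exhausted, reshuffle and start again; students can't predict who is next, but nobody is accidentally skipped.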

Sometimes it’s the little things. Be aware that you might naturally tend to favour interacting with those you can see in the session. Those without their cameras on, as in the image below, may not get asked as many questions, so an awareness of this and conscious questioning of unseen students will encourage a broad participation in the session.

5. WRITTEN QUESTIONS

Using the chat section to elicit answers to check for learning encourages participation. It is a variation on simply listening and answering verbally. Having students write down an answer shows whether or not they know the content. Dedicating time in a session to this process not only varies the type of participation, but can be a great indicator that students have the required knowledge to continue. Opening up the chat lines for student-to-student interactions also encourages participation, as some will answer questions and feel empowered in the process, and some will just enjoy the interactions. It is important, though, that the chat area is monitored, as it can lead to the wrong kind of participation – like students chatting in the classroom/lecture theatre, which means they are not paying attention to the content. You can't write/read and listen at the same time. I write about that here.

6. POLLS/QUIZZES

Using the poll function in Zoom is easy. You first have to ensure it is turned on in the settings.

Once you’ve designed your questions, preferably before the session, you can then launch the poll.

Students then participate by responding. You then share the results, which at this point are anonymous, with the whole group. This serves as an assessment for learning opportunity, and you can pivot the session based on the answers if necessary. In answering the questions, students’ minds are activated as they search for knowledge in their schemata. There is an art to designing effective polls and multiple choice questions, and I discuss that art form here.  

Canvas quizzes can also be incorporated into the Zoom session. The advantage of this is that Canvas has a variety of question types that further encourage participation. There are many other apps too, such as Quizizz, Kahoot, and Mentimeter, but these should be used with caution if not supported by your institution, as students may not want to sign up for platforms that essentially require them to surrender their data.

7. BREAKOUT ROOMS

Sending students into groups to discuss a concept or problem is a fantastic way to encourage participation. Homogeneous groups tend to work best, because those with vastly different levels of developed schema tend not to engage with each other as well as those with closer skill levels. It can sometimes benefit a more knowledgeable student to help a peer, but this relies on effective teaching skills to work, and in reality that is a big ask of a student. So setting the groups up before a session may be your best bet.

Providing guidance on what to do when students are in the breakout rooms is crucial, and it is worth popping into each group to see how it is progressing. As Tim Klapdor, an online expert at Adelaide University, suggests: 'Encourage discussion by promoting the students' voice. Use provocation as a tool for discussion. Ask the students to explain and expand on concepts, existing understanding and their opinions on topics. Get students to add to one another's contributions by threading responses from different students. Promote a sense of community by establishing open lines of communication through positive individual contributions.' Appointing a member of the group as scribe is also worth doing, so that when the group returns to the main session they are able to share their screen and discuss their work/findings/solutions.

8. SCREEN SHARING

Getting students to share their screen encourages participation. This is especially effective coming out of a breakout room, but can be used at any point in a session. A student may be asked to demonstrate their working of a problem, an answer to an essay question, etc., and the tutor can use it as a model to provide feedback. Of course, caution is needed here, and only positive/constructive feedback should be provided.

9. USING THE WHITEBOARD

Sharing the whiteboard and getting students to interact with the content you or they put on there is a great way to encourage participation. You could model your thinking process in this medium and explain or annotate examples to discuss how students could gain a better understanding of the content. You could also have students annotate the board, asking them to underline key words, complete equations etc. Getting multiple students to add their own annotations is probably more beneficial with smaller groups, such as in the breakout rooms. Unfortunately in Zoom you can’t paste an image on the whiteboard, only text.

10. MODIFYING CONTENT

I firmly believe that only a very small percentage of students will be genuinely unwilling to participate in this medium. Such students could be expected to use the chat option and 'send to the host' only, for example, to ensure they are still participating. If you have tried all of the above strategies and your students are still not really getting involved, it is likely that they just don't know the answers. As humans, we naturally want to succeed, and non-participation may indicate that you need to strip it back a bit and come back to some foundational knowledge. It doesn't matter what you think students should know; it is about what they actually do know, and the relevant development of their schema. It is better that you facilitate the construction of knowledge, and provide questions that students will know the answers to so they can build up their confidence in participating. By doing this, you will slowly but surely build their schemata so they will want to get involved consistently.

Online participation is essential for the session to be effective. If you have other tips and advice on how to encourage participation, please let me know and I'll add them to the list.

I’m Paul Moss. I’m a learning designer. Follow me on @twitter

SHOULD YOU CLOSE THE ONLINE CHAT OPTION IN A LESSON?

Yes, and no.                                                

Having the chat option open from the word go in an online tutorial can present problems for both you and the students. Whilst it may seem ideal for students to be able to interact when something comes to mind, the reality is that whatever else you are hoping will happen at the time they are chatting, like them listening to information or explanations, just won’t happen. This can be explained by dual coding theory.

Dual coding theory essentially tells us that we encode information via two channels in the brain: the verbal (auditory) channel and the visual channel. Listening, reading and writing are all processed in the verbal channel (written text is recoded into a speech-like form), while images and physical interactions fall in the visual channel. The theory informs educators that combining content in multimodal forms will enhance the encoding of that content, but crucially, it also tells us that if you present multiple pieces of information in a single channel, the working memory will have to decide what to attend to, at the expense of the competing stimuli.

In other words, you can’t do two things at once in a single channel. If you expect students to read at the same time as listen to instructions or explanations, one of those requests will be compromised (a common mistake made in lecture theatres and classrooms worldwide when talking over PPT slides full of text). If you expect students to write at the same time as listening to instructions or explanations, they won’t be able to do it as efficiently as if only focusing on one stimulus. So, students typing away and responding to the online chat means they aren’t listening to you or paying attention to any text you may be presenting. It would be the same in a face to face setting: they would be talking to each other and therefore not attending to you.   

My advice would be, analogous to a regular face to face learning context, to restrict the availability of the chat to specific times in the session. Assessing for learning is of course crucial in a session, and the chat area is a good means of doing this, but you can’t hope to assess for learning if the students weren’t listening in the first place. Opening the chat up at specific times will maximise this avenue of assessing for learning.

Having said that, we do want to encourage students to write down questions that arise from your delivery, otherwise they undoubtedly will be forgotten. So to facilitate this, using Zoom, you would select the ‘HOST ONLY’ option (see the images below for how to do this). Only you will see the questions, and this means that other students won’t get distracted – and certainly not by completely unrelated comments that inevitably will propagate in the space. You will then perhaps dedicate a time after your delivery to address those questions that have come up…and then open up the chat lines for interactions.

Select the ellipsis on the RHS of the chat box
Select host only

For a step by step guide, view this video

So, in summary, by reducing the opportunities students have to lose concentration in a learning environment, you will increase the likelihood that they will be attending to what it is you want them to be focusing on. Of course, some classes will have the maturity to engage appropriately with the chat function and such measures of control won’t be necessary.

In the next post I will discuss other ASSESSMENT FOR LEARNING opportunities in the online space.

I’m Paul Moss. I’m a learning designer. Follow me on @twitter

ASSESSMENT IN HE: pt 9 – Is a proctored/invigilated online exam the only answer?

This is the 9th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.

There are numerous tropes that aptly apply to the current context in higher education: necessity is the mother of invention, through adversity comes innovation, it’s the survival of the fittest, and all that. Our current adversity renders traditional invigilated exams impossible, and certainly requires us to be innovative to solve the dilemma, but instead of simply looking for technology to innovatively recreate what we have always done, maybe it’s time to think differently about how we design examination in the first place.

REFLECTION

Exams are summative assessments. They attempt to test a domain of knowledge and be the most equitable means of delivering an inference to stakeholders of what a student understands about that domain. They are certainly not the perfect assessment measure, as Koretz asserts here (conveyed in a blog by Daisy Christodoulou), but because they are standardised, and invigilated, they can and do serve a useful purpose.

Cheating is obviously easier in an online context and potentially renders the results of an exam invalid. Online proctoring companies, currently vigorously rubbing their hands together to the background sounds of ka-ching ka-ching, certainly mitigate some of these possibilities, with levels of virtual invigilation varying from locking screens to using webcams to monitor movements during the assessment. Timed-release exams also help to reduce plagiarism, because students have a limited amount of time to source other resources to complete the test, which inevitably self-penalizes them. I discuss this here. But the reality is, despite such measures, there is no way you can completely eliminate willful deceit in online exams.

So, do we cut our losses and become resigned to the fact that cheating is inevitable, and that despite employing online proctoring some will still manage to do the wrong thing? I'm not sure that's acceptable, so I think it's worth considering that if we design summative assessment differently, the need for online proctoring may become redundant.

WHAT DO YOU WANT TO TEST IN AN EXAM?

Do you want to see how much a student can recall of the domain, or do you want to test how they can apply this knowledge? If you want to test recall, then proctoring is a necessity, as answers will be mostly identical in all correct student responses. But should that be what an exam tests?

Few would dispute that the aspiration of education is to set students up, through the course, to be able to apply their knowledge to new contexts. By designing a sequence of learning that incrementally delivers key content to students through the use of examples that help shape mental models of 'how to do things', and by continuously facilitating the retrieval of that knowledge to strengthen students' memory throughout the course (after all, understanding is memory in disguise – Willingham), we will have supported the development of their schema. This development enables students to use what's contained in the schema to transfer knowledge and solve new problems, potentially in creative ways.

So exams needn’t be of the recall variety. They can test the application of knowledge.

Whilst we can't expect the application of that knowledge to be too far removed from its present context (see the discussion below), a well designed exam, particularly one requiring written expression, would generate answers that are idiosyncratic, and these could then be cross-checked with Turnitin to determine integrity.

In this way, timed exams in certain courses* could effectively be open book, eliminating a large component of the invigilator's role. This may seem counter-intuitive, but the reality is that even if a student can simply access facts they haven't committed to memory, they will still be unlikely to produce a strong answer to a new problem. Their understanding of the content is limited simply because they haven't spent enough time connecting it to previous knowledge, which is what generates eventual understanding. Such students will spend most of their working memory's capacity trying to solve the problem, and invariably, in a timed exam, self-penalize in the process. It's like being given all the words of a new language and being asked to speak it in the next minute. It's impossible.

In order to successfully use the internet – or any other reference tool – you have to know enough about the topics you’re researching to make sense of the information you find.

David Didau

4 REQUISITES OF A WELL DESIGNED OPEN EXAM

  1. Students have relevant schema
  2. Students have practised applying it to new near contexts
  3. Exam questions seek near transfer of knowledge
  4. Exam is timed and made available at a specific time interval – see here

I have just discussed the importance of schema, but if we want students to be able to apply that knowledge to new contexts we have to model and train them in doing so. This may seem obvious, but curricula are usually so crammed that educators often don’t have time to teach the application of knowledge. Or, as an ostensible antidote to such a context, some educators have fallen for the lure of problem based or inquiry learning, where students are thrown into the deep end and expected, without a sufficient schema, to solve complex problems. Such an approach doesn’t result in efficient learning, and often favours those with stronger cultural literacy, thus exacerbating the Matthew Effect. The ideal situation then is to support the development of a substantial schema and then allow space in the curriculum to help students learn how to apply that knowledge… and then test it in an open book exam.

The third requisite is the design of the exam questions. A strong design would have to ensure that the expected transfer of knowledge is not too 'far', and in fact is closer to 'near' transfer. We often extol education's aspiration of being able to transfer knowledge into new contexts, but the actual reality of this may render us less optimistic. The Wason experiments illustrate this well, suggesting that our knowledge is really quite specific, and that what we know about problem solving in one topic is not necessarily transferable to others. If you don't believe me, try the experiment below, and click on the link above to see the answers.

Lots and lots of very smart people get this task wrong. What the experiment shows us is that it’s not how smart we are in being able to solve problems, but how much practice we’ve had related to the problem. So designing appropriate questions in an exam is crucial if we want the results to provide strong inferences about our students’ learning.   

CRITICISMS OF OPEN BOOK EXAMS

A criticism of open book exams is that students are lulled into a false sense of security and fail to study enough for the test, believing the answers will be easily accessible from their notes – the fallacy that you can just look it up on Google, as discussed above. However, because we know that most aspects of the domain need to be memorised to support the automaticity of retrieval when engaging in new learning (cognitive load theory), and have thus incorporated retrieval practice into our teaching, the need for a student to actually look up information will be quite low.

EXPOSURE TO OPEN BOOK ASSESSMENT IS CRITICAL

Like any learnt skill, you have to build the knowledge associated with it, and then practise until perfect. Never assume that knowing how to function in an open book exam is a given skill. It is important to train students in how to prepare for such an exam: helping them learn to summarise their notes to reflect key concepts, to organise their notes so they can be easily used in the exam, and to plan answers before committing them to writing.

A PEDAGOGICAL UPSHOT

As mentioned previously, the need for students to memorise key facts is an essential aspect of the learning journey, but summative exams sometimes focus on this type of knowledge too much, or worse, expect transfer of that knowledge without providing the necessary practice in doing so. The upshot of open book exams is that they require students not only to have sufficient knowledge, but also sufficient practice in applying it, and so the open book exam becomes a paragon of good teaching.

*online open book exams may not be so easy in courses like mathematics and equation based courses that require identical solutions.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

ASSESSMENT IN HE: pt 8 – mitigating cheating

This is the 8th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.

LOOKING TO MINIMISE PLAGIARISM IN AN ONLINE ASSESSMENT?

When setting an online assessment, the fear of plagiarism is strong, despite the reality that the amount of online cheating doesn’t seem to be any different to the amount of cheating in face to face settings. But we still want to avoid it as much as possible. So, how can we ensure that students are submitting their own work?

  1. Be explicit about the damage plagiarism does. There is a lot of information for students about plagiarism and how they can avoid it here. Similarly, there is a lot of information for staff here, including an overview of using Turnitin here.
  2. Design assignments that build in difficulty incrementally. Supporting the building of their knowledge base will facilitate student success in assignments. Once motivation and schemata are established, students’ perceptions of assignments will change. I write about the way to avoid online proctoring here.
  3. USE TECH: set the assessment in Canvas for a specific time and use question banks and formula randomization.

By setting it for a specific time (see below for how to do this), you prevent students seeing the assessment before it goes ‘live’. The opportunity for exchanging information with others is reduced, as is the ability to source answers from the internet. Of course, students may still chat with each other during the assessment window, but this practice will tend to self-penalize as their time to complete the assessment will be shorter having spent valuable time conferring with others.

The design of the assessment then is critical – if you overestimate the time it should take, you will open up time for conferring. It may be better to set shorter assessments that students will only complete in the given time if they know the content. If you take this path, it is important to explicitly tell the students that the assessment is difficult in terms of time – an unsuccessful student tends to give up more easily if there appears to be a randomness to achievement.

HOW TO SET AN ASSESSMENT FOR A SPECIFIED TIME

STEP 1 – add an assignment and choose Turnitin as the submission type (for heavy text-based assignments). Select “External Tool” and then find “Turnitin”

Step 2 – Choose the relevant time to make the test open to students.


USING CANVAS QUESTION BANKS

Question banks help to randomise the questions a student receives. If you have 10 questions in the bank and you assign only 6 to the exam, you reduce the chance that students will receive the same questions. A student trying to cheat will soon realise that their questions are different from their friends'. Of course, not all of them will be, but a student who sees that several don't match is less likely to bother, as it takes too long to work out which questions match and which don't.
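The arithmetic behind this is worth sketching. Assuming the 6 questions are drawn uniformly at random (the bank and draw sizes below just mirror the example above):

```python
import math

BANK = 10   # questions in the bank
DRAW = 6    # questions each student receives

# Number of distinct question sets a student can receive
sets = math.comb(BANK, DRAW)     # 210 possible sets

# Chance that two students get exactly the same set
p_identical = 1 / sets           # under half a percent

# Expected number of questions two students share: each of one
# student's 6 questions appears in the other's draw with
# probability 6/10, so 6 * 6/10 = 3.6 on average
expected_shared = DRAW * DRAW / BANK
```

So two students will typically share three or four questions but will almost never see an identical paper – enough friction to make question-by-question collusion slow and unrewarding.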

USING CANVAS FORMULA QUESTIONS

I will be posting here shortly a video demonstrating the fantastic application of the formula question in Canvas, a question that essentially allows you to change the variables in a question containing numbers so that multiple possibilities can be generated. This practically means that each student will receive a different question, but of the same difficulty level, rendering it still a valid and equitable assessment. So if John decides to call up Mary during the assessment and ask what she got for question 5, it will be pointless as Mary has a different question – the answers simply won’t match.
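The idea behind a formula question can be sketched in plain Python. Everything here is hypothetical – the speed/distance scenario, the function name and the number ranges are illustrative only; in Canvas you define the variables and formula in the quiz editor rather than writing code:

```python
import random

def speed_variant(student_seed):
    """Generate one variant of a speed = distance / time question.
    Each student gets different numbers, but the same difficulty."""
    rng = random.Random(student_seed)       # per-student seed: reproducible marking
    distance = rng.randrange(120, 481, 20)  # km, in tidy steps of 20
    hours = rng.choice([2, 3, 4, 5])
    question = (f"A car travels {distance} km in {hours} hours. "
                f"What is its average speed in km/h?")
    answer = distance / hours
    return question, answer

q, a = speed_variant(20251234)  # e.g. seeded with a student ID
```

Because the numbers vary but the underlying formula and difficulty do not, every student faces an equivalent question while shared answers become useless.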

FINAL THOUGHTS

Everyone likes to succeed. This is why some students plagiarise. Careful design of assessment that incrementally builds student knowledge and confidence will TEACH students to get better at assessment. This, together with explicit discussions about it, will help many students steer clear of plagiarism.

In the next post I will discuss how modified online examinations shouldn’t necessarily try to completely emulate traditional examinations using technology.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

An evidence based pedagogical plan for a Zoom tutorial

  1. Provide a pre-loaded/flipped worked problem – here’s why
  2. Begin with a quiz – here’s why
  3. Work through a problem analogous in difficulty to the pre-loaded problem – here’s why
  4. Present a new problem a little more difficult -– here’s why
  5. Have students break out into homogeneous ability rooms – here’s why
  6. Have students demonstrate their solutions – here’s why
  7. Provide feedback  – here’s why
  8. Set more practice tasks – here’s why

More rationale here

I’m Paul Moss. I’m a learning designer. Follow me @edmerger