chunking lectures – it’s a bit of a no-brainer

Breaking a lecture up into distinct chunks or sections is a bit of a no-brainer. It all comes down to understanding the implications of cognitive load theory, specifically that the brain can only process a small amount of new information at once. Presenting more information than the brain’s architecture can handle overloads the working memory, and usually causes a significant decrease in learning.

Breaking your lecture into chunks gives students a chance to process each chunk before new material is presented. Designing opportunities for students to be active in processing the content also helps them understand it, and eventually transfer it into long-term memory.

So, here’s a possible live-streamed lecture design that considers cognitive load implications and the need for students to be active in their learning, and is very manageable for the lecturer. The model can be applied to both live and recorded lectures, but the recorded lecture needs some more specific context, which I will discuss in another post.

I’ve talked before about the possible mixed-mode future of live lecturing, where the live stream can also facilitate breakout rooms. The model below considers this as a possibility.

Each lesson segment below lists the rationale and the tech that can assist.

INTRO
Rationale: The lesson begins with a retrieval quiz. The benefit of retrieval is enormous: it strengthens the memory of key ideas and content so that the knowledge can be brought to cognition automatically when new learning is presented, without taxing the working memory. The more knowledge students can draw from, the greater the opportunity to delve into higher-order independent learning, so building students’ schemata through retrieval is a bit of a no-brainer. The lecturer then places the answers on the screen, spending 2-3 minutes explaining them if common errors were made.
Tech to assist: Echo360; Canvas quiz.

TEACHING (10-12 min)
Rationale: Delivering content. Incremental building towards application is a bit of a no-brainer. The lecturer is conscious of the need to present content clearly and simply, very much aware of the multimedia principles that promote efficient encoding of new information. They are also aware of the importance of modelling problem solving, and incorporate worked examples into the presentation. Where appropriate, the lecturer connects the new learning to real-world applications, not just to make the content relevant but, more so, to build the mental patterns and analogies in the students’ schemata. The lecturer also frequently explains why particular teaching decisions are being made, so as to strengthen the students’ metacognition.
Tech to assist: PPT slides; document camera. Students can take notes in Echo, raise a confusion flag, and ask a question at a precise point in the live stream.

STUDENT ACTIVITY
Rationale: Strengthening understanding. This gives students a chance to take in what has just been presented and to think about the concepts before they are presented with more content. Essentially, the student is trying to convert the abstract into the concrete. Providing students with the opportunity to complete worked examples, practise solving similarly structured problems, or discuss possible analogies with a peer is valuable at this point in the lecture, and is a bit of a no-brainer.
Tech to assist: Breakout rooms; Mentimeter open question; Echo discussions; Canvas discussions.

TEACHING (10-12 min)
Rationale: Discussion of the last task if necessary (it may not be if students were practising or completing examples), then delivering more content, with the same rationale as the previous teaching segment.
Tech to assist: As above.

FORMATIVE ASSESSMENT
Rationale: Checking for learning. A quiz or short-answer opportunity to see whether what you have presented so far has been understood is a bit of a no-brainer. The questions also give students another opportunity to process the content and develop a better understanding.
Tech to assist: Questions up on screen; Zoom polling; Canvas discussions as a repository for student answers.

TEACHING (10-12 min)
Rationale: Check the answers – you may need to pivot the lecture if misconceptions are still prevalent – then deliver more content, with the same rationale as the previous teaching segments.
Tech to assist: As above.

STUDENT ACTIVITY
Rationale: Strengthening understanding, as in the earlier student activity segment.
Tech to assist: Breakout rooms; Mentimeter open question; Echo discussions; Canvas discussions.

SUMMARY
Rationale: Recapping key ideas and tying the lecture together: linking it to previous learning and real-world contexts. Discussion and questions asking students to link their learning are a great way to draw attention to the key concepts again, and are a bit of a no-brainer.
Tech to assist: Mentimeter open-ended question.
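The chunked model above is just a sequence of timed segments. As a rough sketch (the segment names come from the model; the exact durations and the data structure are my own illustration, chosen so the plan fits a one-hour slot):

```python
# A sketch of the chunked live-lecture model: alternating teaching blocks
# (10-12 min, per the model) with short active segments. Durations here
# are illustrative, not prescribed by the model.

lecture_plan = [
    ("intro", "retrieval quiz + answer review", 5),
    ("teaching", "deliver content with worked examples", 11),
    ("student activity", "practise / discuss in breakout rooms", 6),
    ("teaching", "deliver content with worked examples", 11),
    ("formative assessment", "quiz or short answers, check for learning", 5),
    ("teaching", "deliver content with worked examples", 11),
    ("student activity", "practise / discuss in breakout rooms", 6),
    ("summary", "recap key ideas, link to prior learning", 5),
]

total = sum(minutes for _, _, minutes in lecture_plan)
print(f"planned length: {total} minutes")  # prints: planned length: 60 minutes
```

Laying the plan out like this makes it easy to check that no teaching block exceeds the 10-12 minute guideline before any student activity breaks it up.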

I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow me on Twitter @edmerger

Is mixed-mode lecturing the future of HE lecturing?

Picture the setting: instead of the regular face-to-face lecture of 120 students, there are 40 in front of you and the other 80 are remote. How can a lecturer operate under such conditions, satisfying both contexts at the same time?

Well, each student is connected to Zoom – face-to-face students through a laptop or a phone, and remote students likewise. The face-to-face students have a choice: they can watch and hear the lecturer as normal, or watch and listen through the screen, as the remote students must. If slides are presented, the face-to-face students likely have an advantage, as they can see the lecturer full size and the content on a larger screen, whereas the remote students see only a thumbnail of the lecturer in the corner of the presentation.

So, what are some of the advantages of having face-to-face students connected via Zoom too – why not just watch and listen as normal?

100% participation in formative assessment – if everyone has a device, you can assess understanding at stages of the lecture using polls and quizzes. Beginning each lecture with a retrieval quiz is highly beneficial, as it brings back into your students’ minds key ideas from past lectures that you know they need. Helping them retrieve such content actually helps you too: new concepts will be better understood if students can automatically bring past, connected ideas into their thinking without taxing the working memory. Halfway through a lecture is another good time to formatively check for understanding.

Asking lots of questions in a lecture is still good practice generally, but getting everyone involved is near impossible in a regular lecture context – technology now affords this. More data helps you know whether what you’re teaching is being understood.

Interactions with peers – when appropriate, students can seek clarification from a peer without disturbing the rest of the lecture room. Of course, this should only be encouraged when there is space in the lecture, so students aren’t missing key ideas while talking to a peer. You can manage the chat functions to be open to all, or set them so that students can only message you during content delivery. See here for more Zoom engagement advice.

Interactions with the lecturer – shy students in the lecture theatre can now ask the lecturer a question, anonymously if they like, via the Zoom chat. For some, the pressure of not wanting to appear silly by asking a question is huge; often such students won’t ask, and then move on to the next section of the lesson without clarity on what was just taught. Now everyone can be heard.

Group work in a lecture – breakout rooms facilitate the option of having students work together to solve problems. At stages in the lecture when chunking is necessary to secure students’ attention, options include having students spend some time practising what has just been delivered, considering relevant analogies to help strengthen understanding, or collaborating on creative solutions to new problems. Addressing misconceptions or consolidating through practice is probably best done in pairs, whereas groups of 3-5 may be more suited to discussing ideas and analogies than to practice.

Black screens can be good – the wonderful Dr David Wilson from Adelaide University provided some valuable insight in this area. There may be several legitimate reasons why a student decides to turn their video off. Of course, the best communicators make their expectations explicit and clear from the beginning, and help students with legitimate screen issues find alternative ways to engage in the lecture; but sometimes a student will turn their screen off because it’s easier to engage passively. We all know that active learning is better than passive learning, but in a large lecture theatre it can be hard to determine who is and who isn’t active, and time-consuming to address an individual who pretends not to hear you. Now the black screen at least gives you a chance of instantly seeing who the passive student is, and of addressing their decision.

If you’ve made it clear that you prefer the screen on, and that those who can’t comply should tell you why privately, then if a student still refuses to engage when addressed, it’s easy to write down the Zoom name or student number and follow up later with a friendly check-in to see if there is anything you can do to help. If the student has used a fake name, that’s a fair bit harder, but you’d hope that having established high expectations, continually developed your students’ metacognitive abilities, and done so with a really friendly demeanour, such a student would be in the minority.

Logistical considerations that may be deemed disadvantages – it may seem daunting to get all the technology working to facilitate such a learning environment, but it is easier than you might think:

Challenge: Audio feedback from multiple Zooms in the lecture theatre.
Solution: Students would need to be on mute unless asked a question.

Challenge: The teacher’s Zoom camera – how can it be placed to emulate a real-life view?
Solution: Place it so it captures the teacher’s whole body and gesturing as they move around (movement as in a normal lecture). This means the camera will be at a distance, not framing only the person’s head. It may require some configuring of the existing setup so that your camera connects to the console displaying your slides or document camera, but quite often the lecturer will be away from the console, using a clicker to move through slides.

Challenge: The teacher’s microphone – how would a distanced camera pick up the lecturer’s voice?
Solution: Lots of lecture rooms have a microphone that is pinned to the lecturer and operates via Bluetooth. A room microphone would pose feedback problems, but if that is the only option, then face-to-face Zoom participants must always have their mics muted, and questions and answers asked in the room would need to be repeated by the lecturer for the sake of the remote students – or asked via the Zoom chat. This is actually not a bad outcome anyway, as repeating the question ensures that (a) everyone heard it, and (b) there is a longer processing time to engage with it.

Challenge: Producing worked examples and using a whiteboard to demonstrate problem solving.
Solution: Use a tablet as the screen share in Zoom, where you can draw/write and show your workings. Alternatively, use your phone as the screen share and position/suspend it above your working area.

Challenge: Monitoring the chat effectively.
Solution: I would dedicate a section of the lecture where you stop to check for questions. This is surely just good practice anyway.

Previously, the promotion of such a learning environment may have been frowned upon as a threat to lectures going ahead at all – why have a live lecture when it can be watched online, at one’s own convenience? Well, it would seem that the average lecture cohort has always contained a mix of those who like and benefit from the in-person ‘live’ experience and those who prefer the remote alternative. Mixed-mode lectures offer the best of both worlds.



Participation is crucial in any learning environment, and a Zoom session is no different. Participation encourages attention, which is a prerequisite for learning. If students aren’t attending to the content or discussions on offer, they have no chance of encoding that content and being able to use it at a later time – in other words, of learning it. Being skilful in ensuring participation is therefore imperative.

Varying the way students are asked to participate is a powerful way to encourage engagement. Zoom can support participation in several different modes, some of which are not possible in a regular face-to-face session. Here’s how a teacher/tutor can engage students in a Zoom session:

  • Immediate quiz/questions
  • Explaining your method
  • Non-verbal feedback
  • Verbal questions
  • Written questions
  • Polls/quizzes
  • Breakout rooms
  • Screen sharing
  • Using the whiteboard
  • Modifying content


IMMEDIATE QUIZ/QUESTIONS

Because of the way our memories function, recapping content from previous sessions is essential: it helps the knowledge move into long-term memory, from where it can be recalled automatically to assist in processing new information. Students who arrive on time to your Zoom session should immediately be put to work, either doing a 3 or 4 question quiz on previous learning, or producing short answers to a question or two. Both are shared from your screen. This does two things: firstly, it activates prior knowledge that will assist in today’s learning, and secondly, it gets the students involved straight away. Latecomers also won’t miss the new content. The answers are briefly discussed, and then the current session begins with students’ minds active.


EXPLAINING YOUR METHOD

By articulating up front the strategies you will employ in the session, you are likely to alleviate students’ anxieties about some of the processes they’ll experience, and therefore encourage participation. Explaining why you are repeating questions, why you are talking about things from previous sessions, why you are asking for different types of responses and feedback, why you are insisting everyone responds before you move on, why you are using polls, and why you are so keen on student participation and its effect on learning will help students feel more comfortable during the session and more able to participate.


NON-VERBAL FEEDBACK

You will have to turn on NON-VERBAL FEEDBACK in the settings:

Getting students to indicate yes or no, or give a thumbs up, encourages participation. Whilst you can’t guarantee that such assessment for learning truly proves students have understood your question – they could just be guessing, or responding to avoid being asked why they haven’t – it still gets students involved. Even a student who answers only to avoid a follow-up question is still actively listening, which is a condition of learning. Varying the type of question can also generate some humour and fun in a session – asking if students are breathing, for example, or if they know that Liverpool football club is the best team in the world. Non-verbal feedback is best used in triangulation with other assessment for learning options, such as verbal questions.


VERBAL QUESTIONS

Effective questioning is a powerful way to assess for learning and guarantee participation. The key to effective questioning is to ask, wait for students to process the question, and then check a number of answers before saying whether they are right or wrong. Repeat the question at least 3 times during the processing stage. Keeping the question ‘alive’ is important, because as soon as you provide an answer the majority of students will stop thinking about it – they have no need to keep thinking. Allowing time for students to think gets the retrieval process activated as they search their minds for connections to previously encoded information. By randomly choosing students to answer, you not only get a sense of their levels of understanding, which allows you to pivot the next sequence if necessary, but you also keep students on their toes as they realise they may be called on next. This random selection of students will work even in a very large tutorial.

Sometimes it’s the little things. Be aware that you might naturally tend to favour interacting with those you can see in the session. Those without their cameras on may not get asked as many questions, so an awareness of this, and conscious questioning of unseen students, will encourage broad participation in the session.
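Random selection without favouritism is easy to automate. A minimal sketch (the roster names and the helper `cold_call` are my own illustration, not from the post): shuffle the roster, work through it, and reshuffle only once everyone has been called, so nobody is skipped and nobody can predict who is next.

```python
import random

def cold_call(roster):
    """Yield students in random order; reshuffle only after everyone
    has been called once, so each round covers the whole roster."""
    pool = []
    while True:
        if not pool:
            pool = list(roster)
            random.shuffle(pool)
        yield pool.pop()

# Hypothetical roster for illustration.
picker = cold_call(["Asha", "Ben", "Chen", "Dana"])
first_round = [next(picker) for _ in range(4)]  # each student exactly once
```

Because the pool is only refilled when empty, a full round always covers every student, on-camera or not.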


WRITTEN QUESTIONS

Using the chat section to elicit answers and check for learning encourages participation, and is a variation on simply listening and answering verbally. Having students write down an answer shows whether they know the content. Dedicating time in a session to this process not only varies the type of participation, but can be a great indicator that students have the required knowledge to continue. Opening up the chat lines for student-to-student interactions also encourages participation: some will answer questions and feel empowered in the process, and some will just enjoy the interactions. It is important, though, that the chat area is monitored, as it can lead to the wrong kind of participation – like students chatting in the classroom or lecture theatre, which means they are not paying attention to the content. You can’t write/read and listen at the same time. I write about that here.


POLLS/QUIZZES

Using the poll function in Zoom is easy. You have to ensure it is turned on in the settings:

Once you’ve designed your questions, preferably before the session, you can then launch the poll.

Students then participate by responding. You then share the results, which at this point are anonymous, with the whole group. This serves as an assessment for learning opportunity, and you can pivot the session based on the answers if necessary. In answering the questions, students’ minds are activated as they search for knowledge in their schemata. There is an art to designing effective polls and multiple choice questions, and I discuss that art form here.  

A Canvas quiz can also be incorporated into the Zoom session. The advantage of this is that it has a variety of question types that further encourage participation. There are many other apps too, such as Quizizz, Kahoot, and Mentimeter, but these should be used with caution if not supported by your institution, as students may not want to sign up for platforms that essentially require them to surrender their data.


BREAKOUT ROOMS

Sending students into groups to discuss a concept or problem is a fantastic way to encourage participation. Homogeneous groups tend to work best, because those with vastly different levels of developed schema tend not to engage with each other as well as those with closer skill levels. It can sometimes benefit the more knowledgeable student to help a peer, but this relies on effective teaching skills to work, and in reality that is a big ask of a student. So setting groups up before a session may be your best bet.

Providing guidance on what to do when students are in the breakout rooms is crucial, and it is worth popping in to each group to see how it is progressing. As Tim Klapdor, an online expert at Adelaide University, suggests: ‘Encourage discussion by promoting the students’ voice. Use provocation as a tool for discussion. Ask the students to explain and expand on concepts, existing understanding and their opinions on topics. Get students to add to one another’s contributions by threading responses from different students. Promote a sense of community by establishing open lines of communication through positive individual contributions.’ Appointing a member of the group as scribe is also worth doing, so that when the group returns to the main session they are able to share their screen and discuss their work, findings, or solutions.


SCREEN SHARING

Getting students to share their screen encourages participation. This is especially effective coming out of a breakout room, but can be used at any point in a session. A student may be asked to demonstrate their working of a problem, an answer to an essay question, and so on, and the tutor can use it as a model to provide feedback. Of course, caution is needed here, and only positive/constructive feedback should be provided.


USING THE WHITEBOARD

Sharing the whiteboard and getting students to interact with the content you or they put on it is a great way to encourage participation. You could model your thinking process in this medium, explaining or annotating examples to discuss how students could gain a better understanding of the content. You could also have students annotate the board, asking them to underline key words, complete equations, and so on. Getting multiple students to add their own annotations is probably more beneficial with smaller groups, such as in the breakout rooms. Unfortunately, in Zoom you can’t paste an image on the whiteboard, only text.


MODIFYING CONTENT

I firmly believe that only a very small percentage of students will be genuinely unwilling to participate in this medium. Such students could be expected to use the chat option and ‘send to the host’ only, for example, to ensure they are still participating. If you have tried all of the above strategies and your students are still not really getting involved, it is likely that they just don’t know the answers. As humans, we naturally want to succeed, and non-participation may indicate that you need to strip it back a bit and come back to some foundational knowledge. It doesn’t matter what you think students should know; it is about what they actually do know, and the relevant development of their schema. It is better to facilitate the construction of knowledge, and to provide questions that students will know the answers to, so they can build up their confidence in participating. By doing this you will slowly, but surely, build their schemata so they will want to get involved consistently.

Online participation is essential for the session to be effective. If you have other tips and advice on how to encourage participation, please let me know and I’ll add them to the list.



Yes, and no.                                                

Having the chat option open from the word go in an online tutorial can present problems for both you and the students. Whilst it may seem ideal for students to be able to interact when something comes to mind, the reality is that whatever else you are hoping will happen while they are chatting – like listening to information or explanations – just won’t happen. This can be explained by dual coding theory.

Dual coding theory essentially tells us that we encode information via two channels in the brain: the auditory channel and the visual channel. Reading, listening and writing all fall under the auditory channel, while seeing and physical interactions fall under the visual channel. The theory informs educators that combining content in multimodal forms will enhance the encoding of that content, but, crucially, it also tells us that if you present multiple pieces of information in a single channel, the working memory will have to decide what to attend to, at the expense of the competing stimuli.

In other words, you can’t do two things at once in a single channel. If you expect students to read at the same time as listen to instructions or explanations, one of those requests will be compromised (a common mistake made in lecture theatres and classrooms worldwide when talking over PPT slides full of text). If you expect students to write at the same time as listening to instructions or explanations, they won’t be able to do it as efficiently as if only focusing on one stimulus. So, students typing away and responding to the online chat means they aren’t listening to you or paying attention to any text you may be presenting. It would be the same in a face to face setting: they would be talking to each other and therefore not attending to you.   

My advice would be, analogous to a regular face-to-face learning context, to restrict the availability of the chat to specific times in the session. Assessing for learning is of course crucial, and the chat area is a good means of doing this, but you can’t hope to assess for learning if the students weren’t listening in the first place. Opening the chat up at specific times will maximise this avenue of assessing for learning.

Having said that, we do want to encourage students to write down questions that arise from your delivery; otherwise they will undoubtedly be forgotten. To facilitate this in Zoom, select the ‘HOST ONLY’ option (see the images below for how to do this). Only you will see the questions, which means that other students won’t get distracted – certainly not by completely unrelated comments that would inevitably propagate in the space. You can then dedicate a time after your delivery to address the questions that have come up…and then open up the chat lines for interactions.

Select the ellipsis on the RHS of the chat box
Select host only

For a step by step guide, view this video

So, in summary: by reducing the opportunities students have to lose concentration in a learning environment, you will increase the likelihood that they are attending to what you want them to focus on. Of course, some classes will have the maturity to engage appropriately with the chat function, and such measures of control won’t be necessary.

In the next post I will discuss other ASSESSMENT FOR LEARNING opportunities in the online space.


ASSESSMENT IN HE: pt 9 – Is a proctored/invigilated online exam the only answer?

This is the 9th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.

There are numerous tropes that aptly apply to the current context in higher education: necessity is the mother of invention, through adversity comes innovation, survival of the fittest, and all that. Our current adversity renders traditional invigilated exams impossible, and certainly requires us to be innovative to solve the dilemma; but instead of simply looking for technology to recreate what we have always done, maybe it’s time to think differently about how we design examinations in the first place.


Exams are summative assessments. They attempt to test a domain of knowledge and to be the most equitable means of delivering to stakeholders an inference about what a student understands of that domain. They are certainly not the perfect assessment measure, as Koretz asserts here (conveyed in a blog by Daisy Christodoulou), but because they are standardised and invigilated, they can and do serve a useful purpose.

Cheating is obviously easier in an online context and potentially renders the results of an exam invalid. Online proctoring companies, currently vigorously rubbing their hands together to the background sound of ka-ching ka-ching, certainly mitigate some of these possibilities, with levels of virtual invigilation ranging from locking screens to using webcams to monitor movements during the assessment. Timed-release exams also help to reduce plagiarism, because students have a limited amount of time to source other resources to complete the test, which inevitably self-penalises them. I discuss this here. But the reality is that, despite such measures, there is no way to completely eliminate wilful deceit in online exams.

So, do we cut our losses and resign ourselves to the fact that cheating is inevitable, and that despite online proctoring some will still manage to do the wrong thing? I’m not sure that’s acceptable, so I think it’s worth considering that if we design summative assessment differently, the need for online proctoring may become redundant.


Do you want to see how much a student can recall of the domain, or do you want to test how they can apply that knowledge? If you want to test recall, then proctoring is a necessity, as the answers in all correct student responses will be mostly identical. But should that be what an exam tests?

Few would dispute that the aspiration of education is to set students up to apply their knowledge to new contexts. By designing a sequence of learning that incrementally delivers key content through examples that help shape mental models of ‘how to do things’, and by continuously facilitating the retrieval of that knowledge to strengthen students’ memory throughout the course (after all, ‘understanding is memory in disguise’ – Willingham), we will have supported the development of their schemata. This development enables students to use what their schemata contain to transfer knowledge and solve new problems, potentially in creative ways.

So exams needn’t be of the recall variety. They can test the application of knowledge.

Whilst we can’t expect the application of that knowledge to be too far removed from its present context (see the discussion below), a well-designed exam – particularly one requiring written expression – would generate idiosyncratic answers that could then be cross-checked with Turnitin to determine integrity.

In this way, timed exams in certain courses* could effectively be open book, eliminating a large component of the invigilator’s role. This may seem counter-intuitive, but the reality is that even if a student can simply look up facts they haven’t committed to memory, they are still unlikely to produce a strong answer to a new problem. Their understanding of the content is limited simply because they haven’t spent enough time connecting it to previous knowledge, which is what eventually generates understanding. Such students will spend most of their working memory’s capacity trying to solve the problem and, in a timed exam, will invariably self-penalise in the process. It’s like being given all the words of a new language and being asked to speak it in the next minute. It’s impossible.

In order to successfully use the internet – or any other reference tool – you have to know enough about the topics you’re researching to make sense of the information you find.

David Didau


Four requisites make the open book exam viable:

  1. Students have relevant schema
  2. Students have practised applying it to new, near contexts
  3. Exam questions seek near transfer of knowledge
  4. The exam is timed and made available at a specific time interval – see here

I have just discussed the importance of schema, but if we want students to be able to apply that knowledge to new contexts, we have to model and train them in doing so. This may seem obvious, but curricula are usually so crammed that educators often don’t have time to teach the application of knowledge. Or, as an ostensible antidote, some educators have fallen for the lure of problem-based or inquiry learning, where students are thrown into the deep end and expected, without sufficient schema, to solve complex problems. Such an approach doesn’t result in efficient learning, and often favours those with stronger cultural literacy, thus exacerbating the Matthew Effect. The ideal, then, is to support the development of a substantial schema, allow space in the curriculum to help students learn how to apply that knowledge… and then test it in an open book exam.

The third requisite is the design of the exam questions. A strong design ensures that the expected transfer of knowledge is not too ‘far’, and is in fact closer to ‘near’ transfer. We often extol education’s aspiration of transferring knowledge into new contexts, but the reality may render us less optimistic. The Wason experiments illustrate this well, suggesting that our knowledge is really quite specific, and that what we know about problem solving in one topic is not necessarily transferable to others. If you don’t believe me, try the experiment below, and click on the link above to see the answers.

Lots and lots of very smart people get this task wrong. What the experiment shows us is that it’s not how smart we are in being able to solve problems, but how much practice we’ve had related to the problem. So designing appropriate questions in an exam is crucial if we want the results to provide strong inferences about our students’ learning.   


A criticism of open book exams is that students are lulled into a false sense of security and fail to study enough for the test, believing the answers will be easily accessible from their notes – the fallacy that you can just look it up on Google, as discussed above. However, because we know that most aspects of the domain need to be memorised to support the automaticity of their retrieval when engaging in new learning (cognitive load theory), and have thus incorporated retrieval practice into our teaching, the need for a student to actually look up information will be quite low.


Like any learnt skill, you have to build the knowledge associated with it, and then practise until perfect. Never assume that knowing how to function in an open book exam is a given skill. It is important to train students in how to prepare for such an exam: helping them learn to summarise their notes to reflect key concepts, to organise their notes so they can be easily used in the exam, and to plan answers before committing them to writing.


As mentioned previously, the need for students to memorise key facts is an essential aspect of the learning journey, but summative exams sometimes focus on this type of knowledge too much, or worse, expect transfer of that knowledge without providing the necessary practice in doing so. The upshot is that an open book exam requires students to have not only sufficient knowledge but also sufficient practice in applying it, and so the open book exam becomes a paragon of good teaching.

*online open book exams may not be so easy in mathematics and other equation-based courses that require identical solutions.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

ASSESSMENT IN HE: pt 8 – mitigating cheating

This is the 8th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.


When setting an online assessment, the fear of plagiarism is strong, despite the reality that the amount of online cheating doesn’t seem to differ from the amount of cheating in face to face settings. But we still want to avoid it as much as possible. So, how can we ensure that students are submitting their own work?

  1. Be explicit about the damage plagiarism does. There is a lot of information for students about plagiarism and how they can avoid it here. Similarly, there is a lot of information for staff here, including an overview of using Turnitin here.
  2. Design assignments that build in difficulty incrementally. Supporting the building of their knowledge base will facilitate student success in assignments. Once motivation and schemata are established, students’ perceptions of assignments will change. I write about the way to avoid online proctoring here.
  3. USE TECH: set the assessment in Canvas for a specific time and use question banks and formula randomization.

By setting it for a specific time (see below for how to do this), you prevent students from seeing the assessment before it goes ‘live’. The opportunity for exchanging information with others is reduced, as is the ability to source answers from the internet. Of course, students may still chat with each other during the assessment window, but this practice tends to self-penalize, as their time to complete the assessment will be shorter after spending valuable time conferring with others.

The design of the assessment then is critical – if you overestimate the time it should take, you will open up time for conferring. It may be better to set shorter assessments that students will only complete in the given time if they know the content. If you take this path, it is important to explicitly tell the students that the assessment is difficult in terms of time – an unsuccessful student tends to give up more easily if there appears to be a randomness to achievement.


Step 1 – add an assignment and choose Turnitin as the submission type (for heavily text-based assignments). Select “External Tool” and then find “Turnitin”.

Step 2 – choose the relevant time to make the test open to students.


Question banks help to randomise the questions a student receives. If you have 10 questions in the bank and you only assign 6 to the exam, you reduce the chances that students will receive the same questions. A student trying to cheat will soon realise that their questions are different from their friends’. Of course, not all of them will be, but a student who sees that several don’t match is less likely to bother, as it takes too long to work out which questions match and which don’t.
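The bank-versus-draw arithmetic above can be sketched in a few lines. Using the illustrative numbers from the example (a bank of 10 questions, 6 drawn per student), a short calculation shows how unlikely two identical papers are, and how many questions two papers share on average:

```python
from math import comb

# Illustrative numbers from the example above: a bank of 10 questions,
# 6 drawn at random for each student's exam.
BANK_SIZE, DRAWN = 10, 6

# Chance that two students receive exactly the same set of questions:
# one matching draw out of all possible 6-question draws.
p_identical = 1 / comb(BANK_SIZE, DRAWN)

# Expected number of questions any two papers share (hypergeometric mean).
expected_overlap = DRAWN * DRAWN / BANK_SIZE

print(f"P(identical papers): {p_identical:.4f}")         # 0.0048 (1 in 210)
print(f"Expected shared questions: {expected_overlap}")  # 3.6
```

So with even a modest bank, fewer than 1 in 200 pairs of students see the same paper, and on average only about 3 or 4 of their 6 questions overlap. A larger bank drives both numbers down further.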


I will shortly post a video here demonstrating the fantastic formula question in Canvas, a question type that essentially allows you to vary the numbers in a question so that multiple versions can be generated. In practice, this means each student receives a different question of the same difficulty level, keeping the assessment valid and equitable. So if John decides to call up Mary during the assessment and ask what she got for question 5, it will be pointless, as Mary has a different question – the answers simply won’t match.
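Canvas generates these variants internally, so the following is only a sketch of the idea: a question template whose numbers vary per student while difficulty stays constant. The speed/distance template, the value ranges, and seeding by student ID are all illustrative assumptions, not Canvas’s mechanism:

```python
import random

# A sketch of the *idea* behind formula questions: one template,
# many numeric variants of equal difficulty.
def make_variant(rng: random.Random) -> tuple[str, float]:
    distance = rng.randrange(60, 301, 20)  # km: 60, 80, ..., 300
    hours = rng.randrange(1, 6)            # 1-5 whole hours
    question = (f"A car travels {distance} km in {hours} h. "
                f"What is its average speed in km/h?")
    return question, distance / hours

# Seeding with a per-student value makes each paper reproducible
# but different: John's question 5 won't match Mary's.
for student_id in (101, 102):
    question, answer = make_variant(random.Random(student_id))
    print(student_id, question, "->", answer)
```

The key design point is that only the surface numbers change; the underlying operation, and therefore the difficulty, is identical for every student.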


Everyone likes to succeed. This is why some students plagiarise. Careful design of assessment that incrementally builds student knowledge and confidence will TEACH students to get better at assessment. This, together with explicit discussions about it, will help many students steer clear of plagiarism.

In the next post I will discuss how modified online examinations shouldn’t necessarily try to completely emulate traditional examinations using technology.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

An evidence based pedagogical plan for a Zoom tutorial

  1. Provide a pre-loaded/flipped worked problem – here’s why
  2. Begin with a quiz – here’s why
  3. Work through a problem analogous in difficulty to the pre-loaded problem – here’s why
  4. Present a new problem a little more difficult – here’s why
  5. Have students break out into homogeneous ability rooms – here’s why
  6. Have students demonstrate their solutions – here’s why
  7. Provide feedback  – here’s why
  8. Set more practice tasks – here’s why

More rationale here

I’m Paul Moss. I’m a learning designer. Follow me @edmerger  

ZOOM BOOM! Maximising virtual lessons

Using a virtual platform requires as much planning, preparation and expectation as a regular lesson. Of course there are differences to a face to face context, but like any good learning sequence, being aware of pedagogical principles will ensure the session is an active, useful learning experience.


  • knowing the tech
  • preparing the students and the session
  • managing the session

Knowing the tech

At Adelaide University we have developed a range of resources, available here, that take the academic from the basics of downloading Zoom to proficiently placing students into virtual breakout classrooms. Many other universities also have good resources, like this one from UQ. We recommend the following:

  • Set yourself small goals in mastering one aspect of the tool at a time.
  • Practise amongst your peers and learn about the functionality of the platform.
  • Perhaps the most important thing to remember is that your skill with the tech will improve considerably with practice: what may seem overwhelming now will soon become an automatic teaching method.

Preparing the students and the session

  • Students:
    • Make sure the students understand the tech.
    • Provide clear and explicit instructions on how to download and use the tool – we have developed these already.
    • Provide clear and explicit expectations about participation and etiquette.

In the end, the online session is still a classroom, and the behaviours for learning you would expect in a classroom are the ones you should expect and demand in a virtual setting. As soon as your expectations drop because you aren’t confident the setting can produce learning, you’ll lose student engagement.

  • The session: it is imperative that you are clear about the objectives of the session. Is the goal to teach a new idea, check for understanding, correct misconceptions, extend thinking, or simply to practise and consolidate existing knowledge? When used in conjunction with a recorded lecture in Echo 360, or a pre-loaded or flipped activity in a discussion board, the Zoom tutorial is often used to check for understanding. Have clearly sectioned elements to the tutorial:
    • a recap of the last session (an introductory retrieval quiz is best)
    • a modelled example to introduce the desired content
    • opportunity for students to demonstrate their understanding
    • opportunity for students to ask questions
    • opportunity to practise

Managing the session

Always remember the session is an opportunity for learning, and what you would do in a regular learning context is what has to be applied here too.

  • Start on time – have students log in 5 minutes before the start so you are not waiting for stragglers, or being interrupted when the tutorial begins by having to add them to the session manually. The waiting room can have the session rules attached, as seen above.
  • As soon as the session begins, have students complete a recap quiz – this also gives punctual students something to do whilst you’re waiting for others to join. Retrieval is everything in learning!
  • Go through answers briefly
  • Discuss the expectations and rules of engagement of the current session. Repeat these many times over lots of sessions, so the process eventually becomes automatic for students.
  • Be friendly and encouraging – and patient whilst students become familiar with the process
  • Go through an example similar in difficulty to the pre-loaded activity as a warm up, narrating your workings. See here for more on the power of worked examples.
  • Present the pre-loaded activity
  • Check for understanding
    1. By asking questions: don’t take one or two student responses as an indication of the whole group’s understanding. See here for how to ask the right questions.
    2. By getting students to upload or show their learning.
  • Use at least 2 student examples to provide feedback – discussing their strengths and weaknesses will be another teaching moment
  • Present another activity of analogous difficulty to strengthen understanding. Consider breaking the cohort into homogeneous groups; have them discuss the problem and present a consensus back to the main cohort’s discussion page.
  • Present a final activity that is harder

Successful Zoom sessions offer you a unique opportunity to check for understanding or to extend student knowledge. They also offer an opportunity to place yourself in the shoes of the learner – the learner who is constantly introduced to a lot of new content and problems and may feel overwhelmed at times in the process. The more conscious you are of helping students manage cognitive load when introducing new material, the better you will design and sequence that learning. Concomitant with that is articulating your method and helping students become stronger at understanding the metacognitive process.

Mastering Zoom will take practice, but that’s true of everything when you first begin.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger  


This is the 7th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.


Multiple choice assessments have anecdotally been the pariahs of the assessment family. But their perceived inferiority as a valid form of assessment is unfounded, as research by Smith and Karpicke (2014) attests. However, for the format to be as effective as short answer questions, the design of the test requires careful consideration, so I shall now outline the key characteristics of an effective multiple choice test.


Understanding schema is everything, as always. An awareness that you are building your students’ schema of a topic will help shape the design of your multiple choice questions. Butler, Marsh, Goode and Roediger (2006) discovered that presenting too many lures as distractors to the novice learner not only negatively impacted motivation, but also inhibited later recall of that content when compared to the performance of a student with better developed schema. This makes sense: novices cannot yet distinguish between the distractors because their knowledge is not secure enough. While it may be tempting to make questions harder by adding in lots of other knowledge, it is not an effective strategy.

We also know there is a possibility that a novice will learn from the ‘incorrect’ lures/distractors presented (Marsh, Roediger & Bjork, 2007) – further evidence that we need to be cautious and precise when designing multiple choice questions for novice learners.


The design of the questions should emulate the way the knowledge was taught: incrementally building in difficulty.


Initially, individual pieces of knowledge that form part of a larger key concept need to be retrieved. Much of the content of multiple choice questions at this stage of the learning journey would be based on factual knowledge that simply has to be retained to help shape understanding of more complex knowledge at a later time. The advantage of letting these questions dominate the fundamental stages of your retrieval strategy is that you will be able to isolate misconceptions and gaps in learning immediately; the reality is that if a student is struggling at this stage, they either haven’t studied or haven’t paid enough attention to the content. By approaching the design of your assessment in this way, you are ensuring that your students can walk before you expect them to run.


As a retrieval strategy, multiple choice tests should help a student master individual components of the course before they strive to test several and eventually all components of the course.  


There are several design choices that strengthen the validity of a multiple choice question being able to assess learning.

  • Brame, C. (2013) has written a superb resource on multiple choice design, considering factors such as writing an appropriate stem, suitable alternatives (distractors), why ‘none of the above’ and ‘all of the above’ make it easier to guess through deduction (which means you’re not testing what you want to test), and how to engage higher order thinking.
  • Odegard and Koen (2007) suggest there are certain options, such as ‘none of the above’, that you shouldn’t use, as they potentially don’t encourage retrieval: none of the relevant information is being recalled. Also of concern is that one of the wrong answers may be incidentally and inadvertently retrieved.
  • Answers should include at least 2 plausible options, otherwise a student can choose an answer by elimination, which is not necessarily strengthening the retrieval of the correct answer. For example, a poor design would be: What is the capital of Australia? A) London, B) Canberra C) Paris, D) Berlin. In this question the student doesn’t have to know it is Canberra, they could just eliminate the other options that they would have heard of before. If option D) was Sydney, then they would have to think and retrieve harder.
  • The number of plausible options should increase as the retrieval stretches to include multiple components of the course.
  • As the course proceeds and the domain of knowledge increases, the range of questions increases to include previous learning as well as the current learning. Adding options that are wrong in the current question but correct for another question has been shown to be effective (Little, Bjork, Bjork & Angello, 2012). This strategy is only useful, however, when a student has a well developed schema of the content; otherwise incorrect answers could again be inadvertently retrieved, now on two occasions.
  • Feedback AS RETRIEVAL – Besides automatic marking, multiple choice questions provide 2 extra bonuses: they help make feedback more precise, and a prepared discussion of why certain plausible options are not quite the right answer presents another excellent retrieval opportunity as students see the correct answer in context and how it is connected to other pieces of knowledge. Below is a good example of this:


There is a science and an art to designing multiple choice questions. Understanding the research on what works and what doesn’t will determine whether your design is an effective assessment for/of learning tool as well as an excellent retrieval activity, or simply a tokenistic waste of time.


By asking several questions about the same concept, the tutor can safely rule out the possibility that students have guessed their way to success.

The same can be done by ensuring there are at least 4 answer options per question: every extra option statistically reduces the chance of guessing correctly.


If you provide enough questions, and enough options inside those questions, statistically you’ll be in a better position to assess learning.
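This statistical claim can be made concrete with the binomial distribution. Assuming pure guessing and illustrative numbers not drawn from the post (20 questions, a pass mark of 10), the sketch below shows how quickly the chance of passing by luck collapses as options are added:

```python
from math import comb

# Probability of reaching the pass mark by pure guessing. With
# `options` choices per question, each guess succeeds with
# probability 1/options; total correct follows a binomial distribution.
def p_pass_by_guessing(questions: int, options: int, pass_mark: int) -> float:
    p = 1 / options
    return sum(
        comb(questions, k) * p**k * (1 - p) ** (questions - k)
        for k in range(pass_mark, questions + 1)
    )

# Illustrative numbers: 20 questions, pass mark of 10.
for options in (2, 3, 4, 5):
    print(f"{options} options: P(pass by guessing) = "
          f"{p_pass_by_guessing(20, options, 10):.6f}")
```

With true/false questions a guesser passes more often than not, but at four options the chance falls to roughly one in seventy, and more questions shrink it further still.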


Eventually, the multiple choice test you design will strive to assess not just individual pieces of knowledge, but more of the domain. The domain will be made of many individual components, which are in turn made up of many individual pieces of knowledge. When designing the domain tests, questions should be created with a mastery approach in mind, where there will be 3 streams of knowledge: core, developmental, and foundational.

A student who incorrectly answers a question in the core stream shouldn’t be encouraged to continue with the quiz in the ‘core’ stream until they can address the error: the error produces a learning gap that will be compounded later if not fixed now.

A mastery pathway enables this by redirecting the student to a ‘developmental’ stream of questions to help strengthen and eventually secure the knowledge necessary to return to the core stream. The developmental stream comprises 3–4 questions that build hierarchically in difficulty, eventually becoming analogous to the original question. Students who simply made a mistake, or pressed the wrong choice for example, are encouraged by this process to be more precise in future – they are also presented with a further retrieval opportunity, and so still gain from the perceived waste of time, provided they are aware of the teaching strategy (more on the power of metacognition in the next post).

If a student is unsuccessful in the developmental stream, they are indicating that they need further knowledge building. Such a student would be redirected to a ‘foundational’ stream, where questions take them back to basic, elementary pieces of factual knowledge. Success at this stage provides access back to the developmental stream, then eventually the core stream, and crucially, the knowledge required to progress in the course. The video below illustrates this process.


It may take some students longer to arrive at the required level of knowledge, but at least they will eventually arrive – that is not something that every teacher could guarantee presently.
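The three-stream routing described above can be sketched as a small piece of branching logic. The stream names follow the post; the one-step "down on an error, back up on success" rule is a simplifying assumption standing in for the full 3–4 question developmental sequence:

```python
# A sketch of the three-stream mastery routing: core, developmental,
# and foundational, ordered from most basic to most advanced.
STREAMS = ["foundational", "developmental", "core"]

def next_stream(current: str, answered_correctly: bool) -> str:
    """Route the student after each answer: drop a stream on an error,
    climb back towards the core stream on success."""
    i = STREAMS.index(current)
    step = 1 if answered_correctly else -1
    return STREAMS[max(0, min(i + step, len(STREAMS) - 1))]

# A core-stream error sends the student to developmental questions;
# a further error drops them to foundational; successes climb back.
path = ["core"]
for correct in (False, False, True, True):
    path.append(next_stream(path[-1], correct))
print(" -> ".join(path))
# core -> developmental -> foundational -> developmental -> core
```

In an LMS such as Canvas this branching would be expressed through question-group prerequisites or module requirements rather than code, but the routing rule is the same: no progress in the core stream until the remedial stream is mastered.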


Of course, designing a multiple choice sequence is time consuming. Coming up with plausible ‘wrong’ distractor options is sometimes quite difficult, and designing the extra questions needed for a mastery pathway is even more demanding. But once created, the multiple choice test can be used multiple times, over many years, and will have significant benefits for students who present with learning gaps. It will also save you time in the long run, as less energy will have to be spent addressing gaps further into the course.

So, in summary, the key things to consider when designing multiple choice questions are:


Butler, A. C., Marsh, E. J., Goode, M. K., & Roediger, H. L., III (2006). When additional multiple-choice lures aid versus hinder later memory. Applied Cognitive Psychology, 20, 941-956.

Little, J. L., Bjork, E. L., Bjork, R. A., & Angello, G. (2012). Multiple-choice tests exonerated, at least of some charges: Fostering test-induced learning and avoiding test-induced forgetting. Psychological Science, 23, 1337-1344.

Marsh, E. J., Roediger, H. L., & Bjork, R. A. (2007). The memorial consequences of multiple-choice testing. Psychonomic Bulletin & Review, 14, 194-199.

Odegard, T. N., & Koen, J. D. (2007). “None of the above” as a correct and incorrect alternative on a multiple-choice test: Implications for the testing effect. Memory, 15, 873-885.

Smith, M. A., & Karpicke, J. D. (2014). Retrieval practice with short-answer, multiple-choice, and hybrid formats. Memory, 22, 784-802.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

ASSESSMENT IN HE pt 5 – Modifying tutorials for remote learners

In the last post I discussed the importance of the tutorial. It is a wonderful chance for students to either develop understanding, consolidate it, or extend it. However, it must be carefully designed with the tutor being acutely aware of the position each student holds on the learning continuum.

The virtual tutorial should not be treated any differently in terms of outcomes, but some modifications will need to be made to accommodate the technology that must accompany it and the increased challenge of being able to assess progress.

Like a regular face to face tutorial, your students will present with different levels of competence, and managing this is indeed a great skill. To put it simply, you must be prepared! Whilst consolidating understanding for some in the tute (paired and completion examples), you must have something for those seeking to extend their thinking too (independent examples). To counter some of this difficulty, it is a good idea to have students work on the paired problems BEFORE the tute. This gives students time to go through the narrated problem first and practise it, consolidating their knowledge and memory of how to solve such a problem. It also provides you with more information about who will need more help in the tute, and whether assigning those students into working groups might help.

WITHOUT QUESTION, providing videoed examples with the tutor narrating their thinking processes in solving a problem is the best form of example.

Using groups in a virtual tute

Knowing the strengths and weaknesses of students in the tute can help you set up appropriate groups. Students could also self-nominate, depending on their understanding of their needs on a particular topic. These homogeneous groups, which you can set up in Zoom before the tute begins, take some of the pressure off you as you negotiate the demands of 10–15 online students. The more independent groups can almost propel themselves, with you only checking in occasionally to clarify or encourage/congratulate. The majority of your time can then be dedicated to the strugglers at the paired example level. Those at the completion problem stage still need attention, but some in that group may be able to offer advice that sets the others right.

Whilst working in the groups, lecturers like Eshan Sharifi at Adelaide University encourage students to ‘chat’ using their regular social media tool to informally engage with the questions. This type of peer learning is very powerful, as long as it is set up so that ‘near transfer’ of knowledge is achieved: learning that is close to the original context in which the original knowledge was learnt.

Tools such as Zoom have a learning curve. Ensure that you provide students adequate time to become accustomed to the technology before requiring them to engage with it. Ensure that they have set up their audio correctly and know how to do so. Ensure they know how to mute, and the role of muting as a sign of respect for the group and a way to mitigate embarrassing moments.

Tim Klapdor

Mitigating plagiarism

Of great concern is the ease with which a student could copy their group partners’ answers, and thus not learn very much at all. Besides triangulating assessment to give you a better indication of whether this is actually happening, and ensuring your design of problems promotes ‘near’ transfer, the tutor can call on specific students to show their workings on problems that have just been given to them.

I’m a huge believer that success motivates success, and when students are confident and succeeding in solving problems, they will do it as often as possible without anyone else’s help. They won’t cheat because the feeling of getting things right and understanding concepts is a far better feeling than simply getting the grade by itself. All it takes is to honour the learning continuum, identify the extent of students’ schemata, and support their development using examples.

The tutorial then can be sectioned in time, with groups working together on tasks and then each coming together to demonstrate knowledge to the tutor at intervals.

Making it virtual

Below I discuss the adaptations required to facilitate the 3 key components of a successful tutorial. Please read here about what worked examples are before you continue.

  1. Worked examples
  2. Discussions and questions
  3. Wandering the room

1 Worked examples

| Type of example | Modification | How it’s checked/submitted |
| --- | --- | --- |
| Paired examples – verbally narrating the workings out as you take students through a problem, then giving them a completed problem with annotations and an unsolved problem of the exact level of difficulty to use as a guide. | The tutor will need to use a camera of some description to show students their workings. The camera/visualiser would then be a shared screen in Zoom. (See below for how to achieve this) | The student uses their phone as a camera to demonstrate their written completed paired problem. If the tutor sees misconceptions, they can ask the student to photograph the work and upload it by sharing their screen. (See below for how to achieve this) The shared images could be added to a discussion page set up specifically for the tute. |
| Completion examples – getting students to complete partially solved/written problems | As above, then hand over to students. It is better if the students write out the full problem, or you could provide this for them in a resource section connected to tutorials in the LMS | As above |
| Independent solving | NA | As above |

Technical considerations

There are 2 technical considerations to master to make the virtual tutorial as effective as a face to face experience.

The tutor – there are several ways to connect a camera to your computer that can then be seen via Zoom by your remote students.

  • The easiest option is to use a visualiser, purchased for around $120. This gives you lots of flexibility, and you can move the camera around quite a bit. Best of all, you can host the tute from your office if necessary. *Note: the camera’s software driver will need to be installed on your computer.
  • The next possibility is to use the document cameras supplied in lecture theatres and rooms around the university. *Note: the camera’s software driver will need to be installed on your computer.
  • An innovative approach is to use your phone as a camera held above your workings. If you can find a flexible holder that allows you to position the phone appropriately, this is a cheap and easy solution. The issue, though, is the size of the phone’s screen when trying to complete the rest of the tute and see others’ workings.

The student – some students may be a step ahead of you in finding tech solutions, but many won’t, so providing clear, explicit instructions on how to participate and submit work in virtual settings is imperative. Students have several options to submit their work:

  • Using a laptop – students have watched your worked example and are now doing their own, probably on a piece of paper. This completed task now needs to be uploaded and shared to the tutor:
    • take a photo of it on a phone
    • share it to the laptop
    • share it to Zoom
  • Using a phone – as above, except they share to Zoom straight from their phone. In fact, there is an option to take a photo to share, reducing the number of steps, which some students will prefer.

The uploaded responses will provide you with lots of formative assessment. With a student’s consent, particular misconceptions could be used as examples and worked through to adjust thinking. The potential embarrassment of the initial mistake will evaporate when the student finally understands the process – truth be told, a clever teacher is able to use the example without it causing any embarrassment whatsoever. It’s all about the tone and level of expectations you set: that learning is hard at times, and that students should be proud for putting themselves on the journey.

2 Discussions and questions

Because students are able to hear via Zoom, you can conduct your questioning strategy in much the same way as face to face questioning. The process remains consistent:

  • Asking questions
  • Waiting before seeking responses so students can think about an answer
  • Checking for understanding by asking several students for a response BEFORE saying if they are right or wrong.
  • Extending thinking by delving deeper into some answers: ask for contrasts, opposites, connections to other learning, how it could apply to other contexts, etc.

However, virtual etiquette will need to be explicitly taught and trained over several sessions before it is mastered. Explain to students your expectations for responding. Explain to them, and demonstrate, that they WILL be called on at some point in the session – that they won’t be able to hide. If you develop their metacognition and explain why you are asking lots of questions – that you are building their schema via retrieval practice, and that a participation grade will only be awarded when they attempt the questions asked of them – students will have significantly more buy-in to what you are trying to achieve.

The virtual space can make it easier for students to hide from conversations, with a typical response to a question being silence. But this won’t happen if you conscientiously spread the questioning around. Continuous questioning, combined with students demonstrating their problem solving by uploading their paired, completed and eventually independent examples, turns the virtual tutorial into an excellent source of formative assessment.

As Tim Klapdor, an online expert at Adelaide University suggests, ‘Encourage discussion by promoting the students’ voice. Use provocation as a tool for discussion. Ask the students to explain and expand on concepts, existing understanding and their opinions on topics. Get students to add to one another’s contributions by threading responses from different students. Promote a sense of community by establishing open lines of communication through positive individual contributions.’

Sharing work – very reliant on student consent, getting students to work in small homogeneous groups can be an effective strategy in the virtual tutorial. Students can easily share their screens with invited others, and this can be a good way to utilise peer tutoring. However, the selection of groups is key, as is the timing: it should be reserved for the completed-example stage and beyond.

3 Wandering the room

Obviously this isn’t possible in the virtual tutorial. However, it is important to keep track of the virtual participants by asking lots of questions and using students’ names as often as possible. Direct address has a powerful effect on participation. If you have someone who is not comfortable responding while others are listening, post questions to them and monitor their responses.

The grading of work

This is then up to the tutor: perhaps after every second tute, a summative-type task is given to assess students’ understanding of the immediate domain of knowledge being taught. The frequency of the assessment is crucial. The more time between assessments, the more chance of learning gaps developing, and, more pointedly, the fewer chances students get to experience success after deliberate scaffolding. The more consistently you provide smaller assessments that facilitate success, the more engaged students will be.

You may say that from experience the opposite is true – that students will realise the assessments aren’t worth much and so won’t bother. BUT, before you equate this approach with previous experience, ask yourself: have you set the learning up in such a deliberate way that no learning gaps are possible, where students are continually made aware of their successes in answering questions, are continuously succeeding in assessment, and so see the value in attendance and learning in general?

The next post will discuss using online quizzes.

*installing the software is simple, and can be done remotely by ITDS if required. You can download it here.

I’m Paul Moss. I’m a learning designer. Follow me on @twitter