Yes, and no.                                                

Having the chat option open from the word go in an online tutorial can present problems for both you and the students. Whilst it may seem ideal for students to be able to interact when something comes to mind, the reality is that whatever else you are hoping will happen at the time they are chatting, like them listening to information or explanations, just won’t happen. This can be explained by dual coding theory.

Dual coding theory essentially tells us that we encode information via two channels in the brain: a verbal channel and a visual (non-verbal) channel. Reading, listening and writing all draw on the verbal channel, while images and physical interactions fall in the visual channel. The theory informs educators that combining content in multimodal forms will enhance the encoding of that content, but crucially, it also tells us that if you present multiple pieces of information in a single channel, working memory will have to decide what to attend to, at the expense of the competing stimuli.

In other words, you can’t do two things at once in a single channel. If you expect students to read while listening to instructions or explanations, one of those tasks will be compromised (a common mistake made in lecture theatres and classrooms worldwide when talking over PowerPoint slides full of text). If you expect students to write while listening to instructions or explanations, they won’t be able to do either as efficiently as if they were focusing on a single stimulus. So, students typing away and responding to the online chat means they aren’t listening to you or paying attention to any text you may be presenting. It would be the same in a face-to-face setting: they would be talking to each other and therefore not attending to you.

My advice, analogous to a regular face-to-face learning context, would be to restrict the availability of the chat to specific times in the session. Assessing for learning is of course crucial in a session, and the chat area is a good means of doing this, but you can’t hope to assess for learning if the students weren’t listening in the first place. Opening the chat up at specific times will maximise this avenue of assessing for learning.

Having said that, we do want to encourage students to write down questions that arise from your delivery, otherwise they will undoubtedly be forgotten. To facilitate this in Zoom, select the ‘HOST ONLY’ option (see the images below for how to do this). Only you will see the questions, which means other students won’t get distracted – and certainly not by the completely unrelated comments that will inevitably propagate in the space. You can then dedicate a time after your delivery to address the questions that have come up… and then open up the chat lines for interactions.

Select the ellipsis on the RHS of the chat box
Select host only

For a step by step guide, view this video

So, in summary, by reducing the opportunities students have to lose concentration in a learning environment, you will increase the likelihood that they will be attending to what it is you want them to be focusing on. Of course, some classes will have the maturity to engage appropriately with the chat function and such measures of control won’t be necessary.

In the next post I will discuss other ASSESSMENT FOR LEARNING opportunities in the online space.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

ASSESSMENT IN HE: pt 9 – Is a proctored/invigilated online exam the only answer?

This is the 9th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.

There are numerous adages that aptly apply to the current context in higher education: necessity is the mother of invention, through adversity comes innovation, it’s the survival of the fittest, and all that. Our current adversity renders traditional invigilated exams impossible, and certainly requires us to be innovative to solve the dilemma, but instead of simply looking for technology to innovatively recreate what we have always done, maybe it’s time to think differently about how we design examinations in the first place.


Exams are summative assessments. They attempt to test a domain of knowledge and to give stakeholders the most equitable inference of what a student understands about that domain. They are certainly not the perfect assessment measure, as Koretz asserts here (conveyed in a blog by Daisy Christodoulou), but because they are standardised, and invigilated, they can and do serve a useful purpose.

Cheating is obviously easier in an online context and potentially renders the results of an exam invalid. Online proctoring companies, currently vigorously rubbing their hands together to the background sounds of ka-ching ka-ching, certainly mitigate some of these possibilities, with levels of virtual invigilation ranging from locking screens to using webcams to monitor movements during the assessment. Timed-release exams also help to reduce plagiarism because students have a limited amount of time to source other resources to complete the test, which inevitably self-penalizes them. I discuss this here. But the reality is, despite such measures, there is no way you can completely eliminate wilful deceit in online exams.

So, do we cut our losses and become resigned to the fact that cheating is inevitable and that despite employing online proctoring that some will still manage to do the wrong thing? I’m not sure that’s acceptable, so I think it’s worth considering that if we design summative assessment differently, the need for online proctoring may be redundant.


Do you want to see how much a student can recall of the domain, or do you want to test how they can apply this knowledge? If you want to test recall, then proctoring is a necessity, as answers will be mostly identical in all correct student responses. But should that be what an exam tests?

Few would dispute that the aspiration of education is to set students up to be able to apply their knowledge to new contexts. By designing a sequence of learning that incrementally delivers key content to students through the use of examples that help shape mental models of ‘how to do things’, and by continuously facilitating the retrieval of that knowledge to strengthen students’ memory throughout the course (after all, understanding is memory in disguise – Willingham), we will have supported the development of their schema. This development enables students to use what’s contained in the schema to transfer knowledge and solve new problems, potentially in creative ways.

So exams needn’t be of the recall variety. They can test the application of knowledge.

Whilst we can’t expect the application of that knowledge to be too far removed from its present context (see discussion below), a well-designed exam, particularly one that requires written expression, would generate idiosyncratic answers that could then be cross-checked with Turnitin to determine integrity.

In this way, timed exams in certain courses* could effectively be open book, eliminating a large component of the invigilator’s role. This may seem counter-intuitive, but the reality is that even if a student can simply look up facts they haven’t committed to memory, they are still unlikely to be able to produce a strong answer to a new problem. Their understanding of the content is limited simply because they haven’t spent enough time connecting it to previous knowledge, which is what generates understanding. Such students will spend most of their working memory’s capacity trying to solve the problem and, invariably, in a timed exam, self-penalize in the process. It’s like being given all the words of a new language and being asked to speak it in the next minute. It’s impossible.

In order to successfully use the internet – or any other reference tool – you have to know enough about the topics you’re researching to make sense of the information you find.

David Didau


An open book exam works when four conditions are met:

  1. Students have relevant schema
  2. Students have practised applying it to new, near contexts
  3. Exam questions seek near transfer of knowledge
  4. The exam is timed and made available at a specific time interval – see here

I have just discussed the importance of schema, but if we want students to be able to apply that knowledge to new contexts we have to model and train them in doing so. This may seem obvious, but curricula are usually so crammed that educators often don’t have time to teach the application of knowledge. Or, as an ostensible antidote to such a context, some educators have fallen for the lure of problem-based or inquiry learning, where students are thrown in at the deep end and expected, without a sufficient schema, to solve complex problems. Such an approach doesn’t result in efficient learning, and often favours those with stronger cultural literacy, thus exacerbating the Matthew Effect. The ideal situation, then, is to support the development of a substantial schema and then allow space in the curriculum to help students learn how to apply that knowledge… and then test it in an open book exam.

The third requisite is the design of the exam questions. A strong design would have to ensure that the expected transfer of knowledge is not too ‘far’, and in fact is closer to ‘near’ transfer. We often extol education’s aspiration of transferring knowledge into new contexts, but the reality of this may render us less optimistic. The Wason experiments illustrate this well, suggesting that our knowledge is really quite specific, and that what we know about problem solving in one topic is not necessarily transferable to others. If you don’t believe me, try the experiment below, and click on the link above to see the answers.

Lots and lots of very smart people get this task wrong. What the experiment shows us is that it’s not how smart we are in being able to solve problems, but how much practice we’ve had related to the problem. So designing appropriate questions in an exam is crucial if we want the results to provide strong inferences about our students’ learning.   


A criticism of open book exams is that students are lulled into a false sense of security and fail to study enough for the test, believing the answers will be easily accessible from their notes – the fallacy that you can just look it up on Google, as discussed above. However, because we know that most aspects of the domain need to be memorised to support the automaticity of their retrieval when engaging in new learning (cognitive load theory), and have thus incorporated retrieval practice into our teaching, the need for a student to actually look up information will be quite low.


Like any learnt skill, you have to build the knowledge associated with it, and then practise until perfect. Never assume that knowing how to function in an open book exam is a given skill. It is important to train students in how to prepare for such an exam: helping them learn to summarise their notes to reflect key concepts, to organise their notes so they can be easily used in the exam, and to plan answers before committing them to writing.


As mentioned previously, the need for students to memorise key facts is an essential aspect of the learning journey, but summative exams sometimes focus on this type of knowledge too much, or worse, expect transfer of that knowledge without providing the necessary practice in doing so. The upshot is that an open book exam requires students not only to have sufficient knowledge but also sufficient practice in applying it, and so the open book exam becomes a paragon of good teaching.

*online open book exams may not be so easy in courses like mathematics and other equation-based courses that require identical solutions.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

ASSESSMENT IN HE: pt 8 – mitigating cheating

This is the 8th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.


When setting an online assessment, the fear of plagiarism is strong, despite the reality that the amount of online cheating doesn’t seem to be any different from the amount of cheating in face-to-face settings. But we still want to avoid it as much as possible. So, how can we ensure that students are submitting their own work?

  1. Be explicit about the damage plagiarism does. There is a lot of information for students about plagiarism and how they can avoid it here. Similarly, there is a lot of information for staff here, including an overview of using Turnitin here.
  2. Design assignments that build in difficulty incrementally. Supporting the building of their knowledge base will facilitate student success in assignments. Once motivation and schemata are established, students’ perceptions of assignments will change. I write about the way to avoid online proctoring here.
  3. USE TECH: set assessment in Canvas for a specific time and use question banks and formula randomization.

By setting it for a specific time (see below for how to do this), you prevent students seeing the assessment before it goes ‘live’. The opportunity for exchanging information with others is reduced, as is the ability to source answers from the internet. Of course, students may still chat with each other during the assessment window, but this practice will tend to self-penalize as their time to complete the assessment will be shorter having spent valuable time conferring with others.

The design of the assessment then is critical – if you overestimate the time it should take, you will open up time for conferring. It may be better to set shorter assessments that students will only complete in the given time if they know the content. If you take this path, it is important to explicitly tell the students that the assessment is difficult in terms of time – an unsuccessful student tends to give up more easily if there appears to be a randomness to achievement.


Step 1 – Add an assignment and choose Turnitin as the submission type (for heavily text-based assignments). Select “External Tool” and then find “Turnitin”.

Step 2 – Choose the relevant time to make the test open to students.


Question banks help to randomise the questions a student receives. If you have 10 questions in the bank and you only assign 6 to the exam, then you reduce the chance that two students will receive the same questions. A student trying to cheat will soon realise that their questions are different from their friends’. Of course, not every question will differ, but a student who sees that several don’t match is less likely to bother, as it takes too long to work out which questions match and which don’t.
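To make the mechanics concrete, here is a minimal sketch (in Python, with hypothetical question labels, not Canvas’s actual implementation) of what a question bank does when it assigns 6 of 10 questions per student. Note that any two such papers must still share at least two questions, which is why the overlap is partial rather than zero:

```python
import random

def draw_questions(bank, n, seed=None):
    """Draw n distinct questions from the bank for one student."""
    rng = random.Random(seed)  # seeding makes the draw repeatable
    return rng.sample(bank, n)

bank = [f"Q{i}" for i in range(1, 11)]   # a bank of 10 questions
alice = draw_questions(bank, 6, seed=1)  # each student receives 6 of the 10
bob = draw_questions(bank, 6, seed=2)
shared = set(alice) & set(bob)
# Two papers overlap somewhat (6 + 6 drawn from 10 forces at least 2
# shared questions) but rarely match fully, so comparing answers is
# slow and unreliable for a would-be cheat.
```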


I will be posting here shortly a video demonstrating the fantastic application of the formula question in Canvas, a question that essentially allows you to change the variables in a question containing numbers so that multiple possibilities can be generated. This practically means that each student will receive a different question, but of the same difficulty level, rendering it still a valid and equitable assessment. So if John decides to call up Mary during the assessment and ask what she got for question 5, it will be pointless as Mary has a different question – the answers simply won’t match.
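The idea behind a formula question can be sketched as follows. The question template, variable ranges and function names here are illustrative assumptions, not Canvas’s actual mechanism:

```python
import random

def area_question(student_id):
    """Same template, per-student numbers: seeding the generator by
    student ID keeps each student's variant stable on reload."""
    rng = random.Random(student_id)
    w = rng.randint(3, 9)     # width in metres
    h = rng.randint(11, 19)   # length in metres
    prompt = (f"A rectangle is {w} m wide and {h} m long. "
              f"What is its area in square metres?")
    return prompt, w * h

john_prompt, john_answer = area_question("john")
mary_prompt, mary_answer = area_question("mary")
# John and Mary face the same task at the same difficulty level, but
# with (almost certainly) different numbers, so swapping answers is pointless.
```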


Everyone likes to succeed. This is why some students plagiarise. Careful design of assessment that incrementally builds student knowledge and confidence will TEACH students to get better at assessment. This, together with explicit discussions about it, will help many students steer clear of plagiarism.

In the next post I will discuss how modified online examinations shouldn’t necessarily try to completely emulate traditional examinations using technology.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

An evidence based pedagogical plan for a Zoom tutorial

  1. Provide a pre-loaded/flipped worked problem – here’s why
  2. Begin with a quiz – here’s why
  3. Work through a problem analogous in difficulty to the pre-loaded problem – here’s why
  4. Present a new problem a little more difficult -– here’s why
  5. Have students break out into homogeneous ability rooms – here’s why
  6. Have students demonstrate their solutions – here’s why
  7. Provide feedback  – here’s why
  8. Set more practice tasks – here’s why

More rationale here

I’m Paul Moss. I’m a learning designer. Follow me @edmerger  

ZOOM BOOM! Maximising virtual lessons

Using a virtual platform requires as much planning, preparation and expectation as a regular lesson. Of course there are differences to a face to face context, but like any good learning sequence, being aware of pedagogical principles will ensure the session is an active, useful learning experience.


  • knowing the tech
  • preparing the students and the session
  • managing the session

Knowing the tech

At Adelaide University we have developed a range of resources, here, that take the academic from the basics of downloading Zoom to their computer through to proficiently placing students into virtual breakout classrooms. I know many other universities also have good resources, like this one from UQ. We recommend the following:

  • Set yourself small goals in mastering one aspect of the tool at a time.
  • Practise amongst your peers and learn about the functionality of the platform.
  • Perhaps the most important thing to remember is that your skill with the tech will improve considerably with practice, and that what may seem overwhelming now will soon be an automatic part of your teaching.

Preparing the students and the session

  • Students:
    • Make sure the students understand the tech.
    • Provide clear and explicit instructions on how to download and use the tool – we have developed these already.
    • Provide clear and explicit expectations about participation and etiquette.

In the end, the online session is still a classroom, and the behaviours for learning you would expect to maximise learning in a physical classroom are the ones you should expect and demand in a virtual setting. As soon as your expectations drop because you aren’t confident that the setting can produce learning, you’ll lose student engagement.

  • The session: it is imperative that you are clear about what the objectives of the session are. Is the goal to teach a new idea, check for understanding, correct misconceptions, extend thinking, or simply to practise and consolidate existing knowledge? When used in conjunction with a recorded lecture in Echo 360, or a pre-loaded or flipped activity in a discussion board, the Zoom tutorial is often used to check for understanding. Have clearly sectioned elements in the tutorial:
    • a recap of the last session (an introductory retrieval quiz is best)
    • a modelled example to introduce the desired content
    • opportunity for students to demonstrate their understanding
    • opportunity for students to ask questions
    • opportunity to practise

Managing the session

Always remember the session is an opportunity for learning, and what you would do in a regular learning context is what has to be applied here too.

  • Start on time – have students log in 5 minutes before the start so you are not waiting for stragglers or being interrupted when the tutorial begins by having to add them manually to the session. The waiting room can have the session rules attached, as seen above.
  • As soon as the session begins have students complete a recap quiz – also provides something for punctual students to do whilst you’re waiting for others to join. Retrieval is everything in learning!
  • Go through answers briefly
  • Discuss the expectations and rules of engagement of the current session. Repeat these many times over lots of sessions, so the process eventually becomes automatic for students.
  • Be friendly and encouraging – and patient whilst students become familiar with the process
  • Go through an example similar in difficulty to the pre-loaded activity as a warm up, narrating your workings. See here for more on the power of worked examples.
  • Present the pre-loaded activity
  • Check for understanding
    1. By asking questions: don’t take one or two student responses as an indication of the whole group’s understanding. See here for how to ask the right questions.
    2. By getting students to upload or show their learning,
  • Use at least 2 student examples to provide feedback – discussing their strengths and weaknesses will be another teaching moment
  • Present another activity of analogous difficulty to strengthen understanding. Consider breaking cohort into homogeneous groups, have them discuss the problem and present a consensus back to the main cohort’s discussion page.  
  • Present a final activity that is harder

Successful Zoom sessions will offer you a unique opportunity to check for understanding or to extend student knowledge. They also offer an opportunity to place yourself in the shoes of the learner – the learner who is constantly introduced to a lot of new content and problems and may feel overwhelmed at times in the process. The more conscious you are of helping students manage cognitive load when introducing new material, the better you will design and sequence that learning. Concomitant with that is articulating your method and helping students become stronger at understanding the metacognitive process.

Mastering Zoom will take practice, but so does everything when you first begin.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger  


This is the 7th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.


Multiple choice assessments have anecdotally been the pariah of the assessment family. But their perceived inferiority as a valid form of assessment is unfounded, as research by Smith and Karpicke (2014) attests. However, for the format to be just as effective as short answer questions, the design of the test requires careful consideration, and I shall now outline the key characteristics of an effective multiple choice test.


Understanding schema is everything, as always. An awareness that you are building your students’ schema of a topic will help shape the design of your multiple choice questions. Butler, Marsh, Goode and Roediger (2006) discovered that adding too many lures as distractors not only negatively impacted novice learners’ motivation, but also inhibited their later recall of that content when compared to the performance of students with better developed schema. This makes sense: novices cannot yet distinguish between the distractors because their knowledge is not secure enough. While it may be tempting to make the questions harder by adding in lots of other knowledge, it is not an effective strategy.

We also know that there is the possibility that a novice will learn from the ‘incorrect’ lures/distractors presented (Marsh, E.J., Roediger, H.L., Bjork, R.A. (2007)), further evidence that we need to be cautious and precise when designing multiple choice questions for novice learners.


The design of the questions should emulate the way the knowledge was taught: incrementally building in difficulty.


Initially, individual pieces of knowledge that form part of a larger key concept need to be retrieved. Much of the content of multiple choice questions at this stage of the learning journey would be based on factual knowledge that simply has to be retained to help shape understanding of more complex knowledge at a later time. The advantage of using these questions to dominate the fundamental stages of your retrieval strategy is that you will be able to isolate misconceptions and gaps in learning immediately; the reality is that if a student is struggling at this stage, then they either haven’t studied or paid enough attention to the content. By approaching the design of your assessment in this way, you are ensuring that your students can walk before you expect them to run.


As a retrieval strategy, multiple choice tests should help a student master individual components of the course before they strive to test several and eventually all components of the course.  


There are several design choices that strengthen the validity of a multiple choice question being able to assess learning.

  • Brame, C., (2013) has written a superb resource on multiple choice design considering factors such as writing an appropriate stem, suitable alternatives (distractors), why none of the above and all of the above make it easier to guess through deduction (which means you’re not testing what you want to test) and how to engage higher order thinking.
  • Odegard, T. N., & Koen, J. D. (2007) suggest that there are certain options, such as ‘none of the above’, that you shouldn’t use, as they potentially don’t encourage retrieval: none of the relevant information is being recalled. Also of concern is that one of the wrong answers may be incidentally and inadvertently retrieved.
  • Answers should include at least 2 plausible options, otherwise a student can choose an answer by elimination, which is not necessarily strengthening the retrieval of the correct answer. For example, a poor design would be: What is the capital of Australia? A) London, B) Canberra C) Paris, D) Berlin. In this question the student doesn’t have to know it is Canberra, they could just eliminate the other options that they would have heard of before. If option D) was Sydney, then they would have to think and retrieve harder.
  • The number of plausible options should increase as the retrieval stretches to include multiple components of the course.
  • As the course proceeds and the domain of knowledge increases, the range of questions increases to include previous learning as well as the current learning. Adding options that are wrong in the current question but correct to another question has been shown to be effective: Little, J. L., Bjork, E. L., Bjork, R. A., & Angello, G. (2012). This strategy is only useful however when a student has a well developed schema about the content, otherwise incorrect answers could be again inadvertently retrieved, but now on two occasions.
  • Feedback AS RETRIEVAL – Besides automatic marking, multiple choice questions provide 2 extra bonuses: they help make feedback more precise, and a prepared discussion of why certain plausible options are not quite the right answer presents another excellent retrieval opportunity as students see the correct answer in context and how it is connected to other pieces of knowledge. Below is a good example of this:


There is a science and an art to designing multiple choice questions. Understanding the research on what works and what doesn’t will determine whether your design becomes an effective assessment for/of learning tool and an excellent retrieval activity, or simply a tokenistic waste of time.


By asking several questions about the same concept, the tutor can safely rule out the possibility that students have guessed their way to success.

The same can be done by ensuring there are at least 4 options as answers for each question: every extra option statistically reduces the chance of guessing correctly.


If you provide enough questions, and enough options inside those questions, statistically you’ll be in a better position to assess learning.
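The arithmetic behind this is simple: with k options per question, the chance of blind-guessing n questions on the same concept all correctly is (1/k)^n. A short sketch:

```python
def guess_probability(n_questions, n_options):
    """Chance of answering every question correctly by blind guessing:
    (1 / n_options) ** n_questions."""
    return (1 / n_options) ** n_questions

one_question = guess_probability(1, 4)    # 25% - a lucky guess is plausible
five_questions = guess_probability(5, 4)  # under 0.1% - guessing is ruled out
```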


Eventually, the multiple choice test you design will strive to assess not just individual pieces of knowledge, but more of the domain. The domain will be made of many individual components, which are in turn made up of many individual pieces of knowledge. When designing the domain tests, questions should be created with a mastery approach in mind, where there will be 3 streams of knowledge: core, developmental, and foundational.

A student who incorrectly answers a question in the core stream shouldn’t be encouraged to continue with the quiz in this ‘core’ stream of questions until they can address the error: the error produces a learning gap that will be compounded later if not fixed now.

A mastery pathway enables this by redirecting the student to a ‘developmental’ stream of questions to help strengthen and eventually secure the knowledge necessary to return to the core stream. The developmental stream comprises 3–4 questions that are hierarchical in difficulty, eventually building to be analogous to the original question. Students who simply made a mistake or pressed the wrong choice, for example, are encouraged by this process to be more precise in the future – they are also presented with a further retrieval opportunity, and so still gain from the perceived waste of time, provided they are aware of the teaching strategy (more on the power of metacognition in the next post).

If a student is unsuccessful in the developmental stream they are indicating that they need further knowledge building. Such a student would be redirected to a ‘foundational’ stream, where questions take the student back to basic factual and elementary pieces of knowledge. Success in this stage provides access back to the developmental stream and then eventually back to the core stream, and crucially, possessing the required knowledge to progress in the course. The video below illustrates this process.
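The routing between streams can be sketched as a tiny state machine. The stream names follow the description above, while the one-step-up/one-step-down rule is an illustrative simplification:

```python
def next_stream(current, correct):
    """Route a student one answer at a time through the mastery pathway:
    a wrong answer drops them one stream, a correct answer climbs one
    stream (or stays at 'core', the top)."""
    order = ["foundational", "developmental", "core"]
    i = order.index(current)
    if correct:
        return order[min(i + 1, len(order) - 1)]
    return order[max(i - 1, 0)]

# A wrong core answer redirects to the developmental stream;
# success there returns the student to core.
```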


It may take some students longer to arrive at the required level of knowledge, but at least they will eventually arrive – that is not something that every teacher could guarantee presently.


Of course, designing a multiple choice sequence is a time-consuming affair. Sometimes coming up with plausible ‘wrong’ distractor options is actually quite difficult. Having to then design extra questions to satisfy a mastery pathway is even more demanding. But, once created, the multiple choice test can be used many times, over many years, and will have significant benefits for students who present with learning gaps. It will also save you time in the long run, as less energy will have to be spent addressing gaps further into a course.

So, in summary, the key things to consider when designing multiple choice questions are:


 Butler, A. C., Marsh, E. J., Goode, M. K., & Roediger, H. L., III (2006). When additional multiple-choice lures aid versus hinder later memory. Applied Cognitive Psychology, 20, 941-956.

Little, J. L., Bjork, E. L., Bjork, R. A., & Angello, G. (2012). Multiple-choice tests exonerated, at least of some charges: Fostering test-induced learning and avoiding test-induced forgetting. Psychological Science, 23, 1337-1344.

Marsh, E. J., Roediger, H. L., III, & Bjork, R. A. (2007). The memorial consequences of multiple-choice testing. Psychonomic Bulletin & Review, 14, 194–199.

Odegard, T. N., & Koen, J. D. (2007). “None of the above” as a correct and incorrect alternative on a multiple-choice test: Implications for the testing effect. Memory, 15, 873-885.

Smith, M. A., & Karpicke, J. D. (2014). Retrieval practice with short-answer, multiple-choice, and hybrid formats. Memory, 22, 784–802.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger


This is the 6th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.

Memory is a fascinating thing. Essentially, the more we replay something that has happened to us in our mind, the stronger the chance that it will move into long-term memory, and thus be remembered for some time. The replaying can take many forms. It may be that someone asks you a question about your day, or about something they know you heard on the news, or simply that you sit on the train on your way home going over an incident that really annoyed you. All of these retrievals of already-happened moments strengthen the memory of them. However, the strength of the memory is related to how much work you have to do to replay it. If you merely think about it, the memory won’t be as strong as if you had to tell someone about it (Roediger and Karpicke, 2006).

This theory of retrieval has enormous implications for education.

If you want students’ memory of key concepts to improve, provide opportunities for them to retrieve that content. One of the most efficient ways of doing this is to ‘test’ student knowledge using low stakes assessment. This can be done formatively by asking questions and by getting students to write down or represent what they know. This process has several benefits:

  • It helps you to see what students do or don’t know, which means you can adjust your learning sequences if necessary to correct misconceptions
  • It helps students strengthen the neural pathways the information flows through, which makes remembering the information easier at a later stage
  • The ease of remembering frees the working memory, allowing new information to be encoded more efficiently


In 1913, Ebbinghaus came to the conclusion that when learning something new, ‘with any considerable number of repetitions a suitable distribution of them over a space of time is decidedly more advantageous than the massing of them at a single time.’ The theory came to light after he realised that we begin to forget information as soon as we encode it. The ‘forgetting curve’ demonstrates this aptly. When Ebbinghaus interrupted the forgetting by retrieving the information at certain points, he could consequently ‘remember’ the information at a later date.

So, interrupting the forgetting curve by building retrieval into your sequence of learning is paramount. But the timing of that interruption matters. Bjork suggests that if you have students retrieving information too soon after encoding, the effects on memory are weak (high retrieval strength but low storage strength), but if you wait too long, the information may need to be retaught altogether. Joe Kirby explains this well here. There seems to be a sweet spot in the timing of retrieval practice. Of course, students will vary in what that timing should be, depending on various factors, including how attentive they were when first presented with the content. However, effective teachers realise that students invariably need access to information on at least 3 occasions for it to have a chance of being converted into long-term memory (Nuthall), and so continuously returning to previously taught content by weaving it into the current learning is a must.
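The forgetting curve is commonly approximated as exponential decay, with retention falling as e^(−t/S) for elapsed time t and memory ‘stability’ S. As a purely illustrative toy sketch (the model and the numbers are my assumptions, not Ebbinghaus’s data), we can simulate how spaced retrievals interrupt forgetting by letting each retrieval increase stability:

```python
import math

def retention(t, stability):
    """Exponential forgetting curve: estimated probability of recall after t days."""
    return math.exp(-t / stability)

def simulate(review_days, horizon=30, stability=2.0, boost=2.0):
    """Toy model: each retrieval 'interrupts' forgetting by multiplying
    memory stability. Returns estimated retention at the horizon day."""
    last_review = 0
    for day in review_days:
        stability *= boost
        last_review = day
    return retention(horizon - last_review, stability)

massed = simulate([0, 0, 0])   # three repetitions massed on day 0
spaced = simulate([1, 7, 21])  # the same three repetitions, spaced out
```

In this toy model the spaced schedule ends with far higher day-30 retention than the massed one, echoing Ebbinghaus’s observation that distributing repetitions over time beats massing them.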

As already stated, the HOW of delivering retrieval is pertinent. What is ideal is to create a situation that is challenging, but neither too easy nor too hard. Bjork alludes to this notion when he discusses ‘desirable difficulties’, where the testing makes the activity ‘desirable because it can trigger encoding and retrieval processes that support learning, comprehension, and remembering.’

What is important, however, like in all learning design, is to ascertain where students are on the learning continuum before creating the retrieval: ‘If, however, the learner does not have the background knowledge or skills to respond to them successfully, they become undesirable difficulties.’ This insight explains why simply re-reading notes or a textbook has consistently been found to be significantly less impactful on learning than actively demanding a response from a student.

Engaging students in actively retelling what they know can take several forms, including completing a concept map about a topic, writing down everything one knows about an idea, or answering questions about the content. The really useful has a host of ways to enact the strategy here. It’s a practice that shouldn’t be bound by sector or discipline, and in fact should be implemented as soon as learning begins, as some primary teachers are now demonstrating.

But perhaps the most effective form of retrieval practice is the test, where students have to search their memories to produce answers. The next post discusses the power of the online multiple choice test.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

ASSESSMENT IN HE pt 5 – Modifying tutorials for remote learners

In the last post I discussed the importance of the tutorial. It is a wonderful chance for students to either develop understanding, consolidate it, or extend it. However, it must be carefully designed with the tutor being acutely aware of the position each student holds on the learning continuum.

The virtual tutorial should not be treated any differently in terms of outcomes, but some modifications will need to be made to accommodate the technology that must accompany it, and the increased challenge of assessing progress.

As in a regular face-to-face tutorial, your students will present with different levels of competence, and managing this is indeed a great skill. To put it simply, you must be prepared! Whilst consolidating understanding for some in the tute (paired and completion examples), you must also have something for those who are seeking to extend their thinking (independent examples). To counter some of this difficulty, it is a good idea to get students to work on the paired problems BEFORE the tute. This gives students time to go through the narrated problem first and practise it in order to consolidate their knowledge and memory of how to solve such a problem. It also provides you with more information about who will need more help in the tute, and whether assigning these students into working groups might help.

WITHOUT QUESTION, providing videoed examples with the tutor narrating their thinking processes in solving a problem is the best form of example.

Using groups in a virtual tute

Knowing the strengths and weaknesses of students in the tute can help set up appropriate groups. Students could also self-nominate, depending on their understanding of their needs on a particular topic. These homogeneous groups, which you can set up in Zoom before the tute begins, can take some of the pressure off you as you negotiate the demands of 10–15 online students. The more independent groups can almost propel themselves, with you only checking in occasionally to clarify or encourage/congratulate. The majority of your time can then be dedicated to the strugglers at the paired example level. Those at the completion problem stage still need attention, but some in this group may be able to offer advice that sets their peers right.

While students work in their groups, lecturers like Eshan Sharifi at Adelaide University encourage them to ‘chat’ using their regular social media tools to engage informally with the questions. This type of peer learning is very powerful, as long as it is set up so that ‘near transfer’ of knowledge is achieved: learning that is close to the context in which the original knowledge was learnt.

Tools such as Zoom have a learning curve. Ensure that you give students adequate time to become accustomed to the technology before requiring them to engage with it. Ensure that they have set up their audio correctly and know how to do so. Ensure they know how to mute, and understand the role of muting as a sign of respect for the group and a way to mitigate embarrassing moments.

Tim Klapdor

Mitigating plagiarism

Of great concern is the ease with which a student could copy their group partners’ answers, and thus not learn very much at all. Besides triangulating assessment to give you a better indication of whether this is actually happening, and ensuring your design of problems promotes ‘near’ learning, the tutor can call on specific students to show their workings on problems that have just been given to them.

I’m a huge believer that success motivates success, and when students are confident and succeeding in solving problems, they will do it as often as possible without anyone else’s help. They won’t cheat because the feeling of getting things right and understanding concepts is a far better feeling than simply getting the grade by itself. All it takes is to honour the learning continuum, identify the extent of students’ schemata, and support their development using examples.

The tutorial then can be sectioned in time, with groups working together on tasks and then each coming together to demonstrate knowledge to the tutor at intervals.

Making it virtual

Below I will discuss the required adaptations needed to facilitate the 3 key components of a successful tutorial. Please read here about what worked examples are before you continue.

  1. Worked examples
  2. Discussions and questions
  3. Wandering the room

1 Worked examples

Paired examples – verbally narrating the workings out as you take students through a problem, then giving them a completed problem with annotations and an unsolved problem of the exact level of difficulty to use as a guide.
  • Modification: the tutor will need to use a camera of some description to show students their workings. The camera/visualiser would then be a shared screen in Zoom. (See below for how to achieve this.)
  • How it’s checked/submitted: the student uses their phone as a camera to demonstrate their written completed paired problem. If the tutor sees misconceptions, they can ask the student to photograph the work and upload it by sharing their screen. (See below for how to achieve this.) The shared images could be added to a discussion page set up specifically for the tute.

Completion examples – getting students to complete partially solved/written problems.
  • Modification: as above, and then hand over to students. It is better if the students write out the full problem, or you could provide this for them in a resource section connected to tutorials in the LMS.
  • How it’s checked/submitted: as above.

Independent solving
  • Modification: none needed.
  • How it’s checked/submitted: as above.

Technical considerations

There are 2 technical considerations to master to make the virtual tutorial as effective as a face to face experience.

The tutor – there are several ways to connect a camera to your computer that can then be seen via Zoom by your remote students.

  • The easiest option is to use a visualiser, purchased for around $120. This gives you lots of flexibility, and you can move the camera around quite a bit. The best bit is that you can host the tute from your office if necessary. *note: the camera’s software driver will need to be installed on your computer
  • The next possibility is to use the document cameras supplied in lecture theatres and rooms around the university. *note: the camera’s software driver will need to be installed on your computer
  • An innovative approach is to use your phone as a camera held above your workings. If you can find a flexible holder that allows you to position the phone appropriately, then this is a cheap and easy solution. The drawback, though, is the size of the phone’s screen when trying to run the rest of the tute and see others’ workings.

The student – some students may be a step ahead of you in terms of finding tech solutions, but lots won’t, so providing explicit, clear instructions on how to participate and submit work in virtual settings is imperative. Students have several options for submitting their work:

  • Using a laptop – students have watched your worked example and are now doing their own, probably on a piece of paper. This completed task now needs to be uploaded and shared with the tutor:
    • take a photo of it on a phone
    • share it to the laptop
    • share it to Zoom
  • Using a phone – as above, except they share to Zoom straight from their phone. In fact, there is an option to take a photo to share, reducing the number of processes which some students will prefer.

The uploaded responses will provide you with lots of formative assessment. With a student’s consent, particular misconceptions could be used as examples and worked through to adjust thinking. The potential embarrassment of the initial mistake will evaporate when the student finally understands the process. Truth be told, a clever teacher is able to use the example without it causing any embarrassment whatsoever: it’s all about the tone and the level of expectations you set, that learning is hard at times, and that students should be proud for putting themselves on the journey.

2 Discussions and questions

Because students are able to hear via Zoom, you can conduct your questioning strategy in much the same way as face-to-face questioning. The process remains consistent:

  • Asking questions
  • Waiting before seeking responses so students can think about an answer
  • Checking for understanding by asking several students for a response BEFORE saying if they are right or wrong.
  • Extending thinking by delving deeper into some answers: ask for contrasts, opposites, connections to other learning, how it could apply to other contexts, etc.

However, virtual etiquette will need to be explicitly taught and trained over several sessions before it is mastered. Explain to students the expectations for responding. Explain to them, and demonstrate, that they WILL be called on at some point in the session: that they won’t be able to hide. If you develop their metacognition and explain why you are asking lots of questions (that you are developing their schema via retrieval practice, and that a participation grade will only be awarded when they attempt the questions asked of them), students will have significantly more buy-in to what you are trying to achieve.

The virtual space can make it easier for students to hide from conversations, with a typical response to a question being silence. But this won’t happen if you conscientiously spread the questioning around. Continuous questions combined with students demonstrating their problem solving by uploading their paired, completed and eventual independent examples turn the virtual tutorial into an excellent source of formative assessment.

As Tim Klapdor, an online expert at Adelaide University suggests, ‘Encourage discussion by promoting the students’ voice. Use provocation as a tool for discussion. Ask the students to explain and expand on concepts, existing understanding and their opinions on topics. Get students to add to one another’s contributions by threading responses from different students. Promote a sense of community by establishing open lines of communication through positive individual contributions.’

Sharing work – although very reliant on student consent, getting students to work in small homogeneous groups can be an effective strategy in the virtual tutorial. Students can easily share their screens with invited others, and this can be a good way to utilise peer tutoring. However, the selection of groups is key, as is the timing of this strategy: it should be reserved for the completion example stage and beyond.

3 Wandering the room

Obviously this isn’t possible in the virtual tutorial. However, it is important to keep track of the virtual participants by asking lots of questions and using students’ names as often as possible. Direct address has a powerful effect on participation. If you have someone who is not comfortable responding with others listening, post questions to them and monitor their response.

The grading of work

This is then up to the tutor: perhaps after every second tute, a summative-type task is given to assess students’ understanding of the immediate domain of knowledge being taught. The frequency of the assessment is crucial. The more time between assessments, the more chance of learning gaps developing, and, more importantly, the fewer chances students get to experience success after deliberate scaffolding. The more you provide consistent smaller assessments that facilitate success, the more engaged students will be.

You may say that from experience the opposite is true: that students will realise the assessments aren’t worth much and so won’t bother. BUT, before you equate this approach with previous experience, ask yourself: have you set the learning up in such a deliberate way that no learning gaps are possible, where students are continually made aware of their successes in answering questions, are continuously succeeding in assessment, and so see the value in attendance and in learning in general?

The next post will discuss using online quizzes.

*installing the software is simple, and can be done remotely by ITDS if required. You can download it here.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

ASSESSMENT IN HE pt 4 – worked examples

This is the 4th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.

Even though the nature of higher education makes it harder to formatively assess, it can be done. Below is a list of sources of data that a tutor can use to triangulate their understanding of where a student sits on the learning journey, and importantly, whether what they think they are teaching is actually being learnt:

  1. Using the lecture
  2. Using the tutorial
  3. Using online quizzes
  4. Using mastery pathways
  5. Using online discussion boards
  6. Using groups
  7. Using participation
  8. Using analytics

Using the tutorial

The tutorial is very much the place to check for learning. In the much smaller room, the tutor can use techniques that are known to be effective in a regular classroom, including using worked/completion examples, asking lots of questions, and wandering the room to check progress while students are solving problems. As the very wise Tim Klapdor suggests, ‘tutorials are not a time to lecture students or introduce new concepts.’

1 Using Worked and Completion Examples

Worked examples are priceless in learning. The lecture ideally was full of completed examples related to the topic, each part of each example deliberately verbally narrated to help students begin the process of either connecting the new content with existing schema or actually building new schema. The tutorial is now the place where the tutor can assess where students currently sit on the learning continuum, and this will determine the stage of worked example the tutor presents.

To begin the session, the tutor may present a problem of a similar ilk to those from the lecture. If students appear not to be secure in their knowledge, the tutor will realise that the schema is not established sufficiently for any independent work. The image below from Sweller’s Efficiency in Learning captures the progression necessary to develop the relevant schema and move learners from novice to expert/independent.

[Image: backwards fading progression of worked examples]

The narration of the processes involved in solving problems must now take place. The tutor articulates their own schema in this process, providing a live model for students to capture in their own memory. It is this captured memory they will draw on later to solve similar problems. In this way, learning is truly constructivist. Consequently, this stage can’t be rushed, or worse, bypassed, as it is by those who conflate the epistemology of constructivism with a method of teaching, reducing learning to a free-for-all of unscaffolded inquiry: inquiry that inevitably fails as students exhaustively scramble to locate relevant connections in their minds that simply aren’t there.

[Image: cognitive load theory]
Big thanks to Tom Needham for enlightening me on worked examples.

To further deepen the memory of the worked example, students should complete paired examples at this point. This means they are provided with a completely worked solution and one to solve that is analogous to the one presented. The key here is analogous: it must be of the same difficulty and, according to Engelmann, differ in as few elements as possible. This allows students to build the required schema that can then be transferred to similar problems later.

Once students are able to do this, they move on to completion problems, where a solution is only partially completed and they have to finish it. Eventually, after sufficient practice that helps to automatise the processes, the established schema allows a multitude of problems to be solved. It is here that they become expert in the topic, and are able to inquire about it independently and creatively.

Preventing plagiarism – I’m a huge believer that success motivates success, and when students are confident and succeeding in solving problems, they will do it as often as possible without anyone else’s help. They won’t cheat because the feeling of getting things right and understanding concepts is a far better feeling than simply getting the grade by itself. All it takes is to honour the learning continuum, identify the extent of students’ schemata, and support their development using examples. I talk lots more about this in the online assessment posts, because it is online where plagiarism can be difficult to stop.

2 Asking Lots of Questions

Effective questioning is a powerful way to assess for learning. The key to effective questioning is to ask, wait for students to process the question, and then check a number of answers before saying whether the answers are right or wrong. Repeat the question at least 3 times during the processing stage. Allowing time for students to think about the answer activates the retrieval process as they search their minds for connections to previously encoded information. By doing so it is quite easy to gauge the knowledge of a tutorial-sized group. By carrying out this formative assessment you will be able to direct the next sequence of learning with far greater precision.

3 Wandering the room checking for understanding

These opportunities would present themselves at each of the worked example stages. Initially, the extra guidance afforded to a student could be enough to make a final connection to understanding if it hasn’t sunk in yet; at a later stage, it could be a chance to deepen thinking by asking more open-ended questions and applying them to different contexts.

By the end of each tutorial, your assessment for learning, and the modifications you make to teaching as a result, will have facilitated the development of relevant and necessary schema in your students’ minds.

Grading tutorials

The tutorial could then be used as a means of assessment, with you providing a grade for participation as well as solved problems.

  • The participation will be almost tokenistic, but you will know that the easy marks awarded are merely a superficial representation of the greater significance of, and incentive for, their attendance and work ethic: the development of schema.
  • The latter quarter of the tutorial (or perhaps a whole tute after several tutorials of practising) could also be assigned to testing students on independently solving problems. The final 5 minutes would be peer marking from your displayed answer sheets, so you don’t have to do any marking, only the recording of their grades.

As students walk out of the tutorial, be explicit with what they have achieved. ‘Jane, today you not only solved lots of problems, and clearly got past a bit of a barrier, but you also picked up all of your eligible participation points. Well done!’ Guaranteed, they’ll be back next week.

The next post discusses how you can adapt to a virtual tutorial.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger



This is the second in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it. The previous blogs can be found here

Assessment in higher education is a complex affair. The autonomy given to students and the scale of the organisations that provide higher ed traditionally reduce assessment to its summative form. Much to the dismay of tutors, sometimes that autonomy, particularly in the online submission of student work, maculates the spirit of the offering when academic integrity is compromised. But it is not just this that renders the practice of reverting to summative assessment an impotent means of measuring student understanding: it is the loss of opportunity to check for misconceptions and gaps in knowledge along the learning journey that attenuates the potential of a higher education. Formative assessment is the antidote.


Assessing formatively in higher ed is not as easy as in other education sectors, but it can be done. If the regular method of asking lots of questions in a classroom or tutorial isn’t as practical in a large lecture theatre, the tutor needs to think innovatively and look for other ways to formatively assess. The answer is to triangulate the assessment process.

O’Donoghue and Punch define triangulation as a method of cross-checking data from multiple sources to search for regularities in the research data. As a tutor, the more information you garner about student progress and understanding, the better you will be able to evaluate whether the design of your learning sequence is as effective as you believe it to be, and thus adjust and reteach certain topics if necessary, or provide specific support to fill learning gaps. This iterative approach will have an enormous impact on a student’s ability to succeed in your course and, ultimately, in a time of increasing accountability, support your own well-being in knowing you have used as much of the available evidence as possible to support your students.

Below I have detailed some options you can choose from to gain a triangulated perspective of progress. However, the list is certainly not exhaustive, and I welcome further ideas if you have some. Click on each option as it becomes available for ideas in how to formatively assess your students:

  1. Using the lecture
  2. Using the tutorial
  3. Using online quizzes
  4. Using mastery pathways
  5. Using online discussion boards
  6. Using groups
  7. Using participation
  8. Using analytics

I’m Paul Moss. I’m a learning designer. Follow me @edmerger