CHUNKING LECTURES – IT’S A BIT OF A NO-BRAINER

Breaking a lecture up into distinct chunks or sections is a bit of a no-brainer. It all comes down to the implications of cognitive load theory, specifically that the brain can only process a small amount of new information at once. Presenting more information than the brain’s architecture can handle overloads the working memory, and usually causes a significant decrease in learning.

Breaking your lecture into chunks gives students a chance to process each chunk before new material is presented. Designing opportunities for students to be active in processing the content also helps them understand it, and eventually transfer it into long-term memory.

So, here’s a possible live-streamed lecture design that considers cognitive load implications and the need for students to be active in their learning, and is very manageable for the lecturer. The model can be applied to both live and recorded lectures, but the recorded lecture needs some more specific context, which I will discuss in another post.

I’ve talked before about the possible mixed-mode future of live lecturing, with its ability to facilitate breakout rooms. The model below considers this as a possibility.

Each lesson segment below is listed with its rationale and the tech that can assist it.

1. Intro
Rationale: The lesson begins with a retrieval quiz. The benefit of retrieval is enormous: it strengthens the memory of key ideas and content, so that knowledge can be brought to cognition automatically when new learning is presented, without taxing the working memory. The more knowledge the student can draw from, the greater the opportunity to delve into higher-order independent learning, so building students’ schema through retrieval is a bit of a no-brainer. The lecturer will place answers on the screen, and spend 2-3 minutes explaining answers if common errors were made.
Tech to assist: polls; Echo 360; Mentimeter; Quizizz; Canvas quiz.

2. Teaching (10-12 min)
Rationale: Delivering content. Incremental building towards application is a bit of a no-brainer. The lecturer is conscious of the need to present content clearly and simply, very much aware of multimedia principles that promote the efficient encoding of new information. They are also aware of the importance of modelling problem solving, and incorporate worked examples into the presentation. Where appropriate, the lecturer connects the new learning to real-world applications, not just to make the content relevant, but more so to build the mental patterns and analogies in the students’ schemata. The lecturer also frequently explains why decisions in the teaching are being made, so as to strengthen the students’ metacognition.
Tech to assist: PPT slides; document camera. Students can take notes in Echo, raise a confusion flag, and ask a question at a precise point in the live stream.

3. Student activity
Rationale: Strengthening understanding. This gives students a chance to take in what has just been presented, and to think about the concepts before they are presented with more content. Essentially the student is trying to convert the abstract to the concrete. Providing students with the opportunity to complete worked examples, practise solving similarly structured problems, or discuss possible analogies to the content with a peer is valuable at this point in the lecture, and is a bit of a no-brainer.
Tech to assist: breakout rooms; Mentimeter open question; Echo discussions; Canvas discussions; GoFormative.

4. Teaching (10-12 min)
Rationale: Discussion of the last task if necessary – it may not be if students were practising or completing examples – then delivering content, with the same rationale as segment 2.
Tech to assist: as in segment 2.

5. Formative assessment
Rationale: Checking for learning. A quiz or short-answer opportunity to see whether what you have presented so far has been understood is a bit of a no-brainer. The questions also provide another opportunity for students to process the content and develop a better understanding.
Tech to assist: questions up on screen; Zoom polling; Canvas discussions as a student answer repository; Mentimeter; Quizizz.

6. Teaching (10-12 min)
Rationale: Check answers first – you may need to pivot the lecture if misconceptions are still prevalent – then continue delivering content, with the same rationale as segment 2.
Tech to assist: as in segment 2.

7. Student activity
Rationale: Strengthening understanding, with the same rationale as segment 3.
Tech to assist: as in segment 3.

8. Summary
Rationale: Recapping key ideas and tying the lecture together: linking it to previous learning and real-world contexts. Discussion and questions asking students to link their learning are a great way to draw attention to the key concepts again, and are a bit of a no-brainer.
Tech to assist: Mentimeter open-ended question.

I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow me on Twitter @edmerger

TRAINING STUDENTS FOR ONLINE EXAMS REDUCES COGNITIVE OVERLOAD

Teaching to the test doesn’t work. But teaching students about the test is imperative. Not only that, exam performance IS a thing, and you can assist students to get better at that performance. It’s all about mitigating cognitive load.

GAME TIME – Any sports person will tell you that match fitness is everything. Regardless of how much you prepare, you never achieve the same level of fitness and game knowledge as you do by actually playing. Why? Because when the real thing happens, not only do nerves and adrenaline consume vast amounts of energy, interfering with your ability to perform at your best, but lots of other unexpected occurrences arise, all increasing cognitive load and leading to quicker exhaustion. The cognitive load can be so debilitating that the player has to rely on muscle memory to get them through. When a student sits an exam, adrenaline and anxiety will naturally surge through their veins. Helping them revise the content is a must, but importantly, helping them become more familiar with the game/exam context is critical, and this can be achieved by training students to automaticity with exam technique.

ABOUT THE TEST

1. Exam layouts

 Show students, and get them used to, the layout of the online exam. The more they see the module and layout of the exam and understand what the expectations are of each section, the less pressure they’ll feel when they see the real thing.  

Of particular importance with students having to complete exams online is detailing the processes involved if they experience technical issues. Take them through the procedures so if it happens during the exam they don’t lose all confidence and panic. ALSO: Ensure students have read the academic integrity policy and that you discuss it repeatedly – the more you talk about academic integrity the more of it you’ll get.

Manageable student cognitive load

                          Student A – no training    Student B – training
Before beginning exam     20%                        20%
Exam layout               5%                         0%

2. Question requirements

Ensure students know what each question is demanding of them.

How long is a piece of string?

What does a short answer look like? What gets you full marks? What does a long answer look like? What gets you full marks? How much working out is necessary? How much detail is required?

Don’t expect students to guess the answers to these questions. Students who have to worry about what constitutes a good answer expend valuable cognitive resources. Model the expectations by showing previous examples, past exams, etc.

Manageable student cognitive load

                          Student A – no training    Student B – training
Before beginning exam     20%                        20%
Exam layout               5%                         0%
Exam content              30%                        0%

IN THE EXAM

1. Time training

Training students with the timings of questions in exams will significantly mitigate cognitive load. It’s one thing to know what a question demands of you, but another to actually do it in a stressed environment. If a student isn’t used to the pressure of time, the longer the exam goes on, the greater the likelihood of their cognitive load increasing and their performance dropping as they panic with the evaporation of time. So, get them to practise doing a mock of a section in the exam. Let them experience what it’s like to type in the allocated time: do their fingers get tired? What’s it like to upload work if necessary? The more practice they get the better, but if you are running out of lesson time to train students, at least give them the chance to practise once – on just one section that requires an upload process, for example.


The other aspect of time training is helping students to set personal timers. Obviously, the online exam doesn’t have all the usual cues that an invigilated exam offers: a large clock, a warning by the invigilator of 5 minutes to go, and even the cues of students completing and organising their work at the next desk. But an advantage of online exams is that students can set their own alarms to negotiate each individual section of the exam, and not accidentally spend too much time on any one section.
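To make the idea of personal timers concrete, here is a minimal sketch of an alarm schedule; the section names and durations are hypothetical examples, not taken from any particular exam:

```python
from datetime import datetime, timedelta

# Hypothetical section plan (names and minutes are illustrative only)
SECTIONS = [
    ("Multiple choice", 30),
    ("Short answers", 45),
    ("Essay", 45),
]

def alarm_schedule(start, sections):
    """Return (section, alarm time) pairs: when to move on from each section."""
    alarms, t = [], start
    for name, minutes in sections:
        t += timedelta(minutes=minutes)
        alarms.append((name, t))
    return alarms

# A 2:00pm start yields alarms at 14:30, 15:15 and 16:00
for name, when in alarm_schedule(datetime(2020, 6, 1, 14, 0), SECTIONS):
    print(f"{when:%H:%M} - move on from '{name}'")
```

A student would set these times as phone alarms before the exam begins, so the decision to move on is made in advance rather than under pressure.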

Manageable student cognitive load

                          Student A – no training    Student B – training
Before beginning exam     20%                        20%
Exam layout               5%                         0%
Exam content              30%                        0%
Exam timing training      20%                        0%

2. Editing their work

Rereading responses is difficult for exhausted students at the end of a lengthy exam. It is usually at this point that they feel a sense of relief, and the last thing they want to do is reread what they’ve done. Of course, it’s madness not to: rereading catches silly mistakes, particularly in multiple choice questions, as well as content mistakes. Even checking for structural, punctuation and/or spelling issues could benefit the overall grade.

So, you have to build that practice into their normal way of working, so it becomes part of the process and not an add-on. This can really only be achieved by repeatedly, physically getting students to do it: at the end of each ‘mock’ assessment, stop the test and get students to spend 4-5 minutes dedicated to proofreading, and explain the rationale, repeatedly. I always tell my students they WILL lose more marks through errors (which they can fix) than they are able to gain by writing more in the last 5 minutes. But without it being a normal way of working, exhausted students won’t do it automatically.

Manageable student cognitive load

                          Student A – no training    Student B – training
Before beginning exam     20%                        20%
Exam layout               5%                         0%
Exam content              30%                        0%
Exam timing training      20%                        0%
Editing responses         5%                         0%

3. Being professional

Not panicking in certain situations is crucial to reducing cognitive load. Taking students through possible scenarios will help to calm them if the situation arises in the exam. Scenarios such as: if you’re running out of time, what should you focus on to gain the most marks? What do you do if you can’t answer a question – do you panic and lose total focus for the rest? Should you move on and come back to questions? Are you aware that the brain warms up, and so coming back later may be easier than it is now? This last point is absolutely crucial to convey to students. As the exam progresses, the exam content itself may trigger or cue retrieval of content that couldn’t previously be recalled, so teaching students this metacognitive notion could make a significant difference to their overall performance.

Manageable student cognitive load

                          Student A – no training    Student B – training
Before beginning exam     20%                        20%
Exam layout               5%                         0%
Exam content              30%                        0%
Exam timing training      20%                        0%
Editing responses         5%                         0%
Being professional        10%                        5%

As you can see by the very much made up numbers, the cognitive load experienced by Student A is significantly greater than Student B, and would indubitably affect performance in the exam. The student’s knowledge would have to fight a great deal to break through the pressure. 
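Tallying those made-up figures from the tables above makes the gap explicit; a minimal sketch:

```python
# Illustrative load figures from the tables above: (Student A, Student B)
LOADS = {
    "Before beginning exam": (20, 20),
    "Exam layout": (5, 0),
    "Exam content": (30, 0),
    "Exam timing training": (20, 0),
    "Editing responses": (5, 0),
    "Being professional": (10, 5),
}

total_a = sum(a for a, _ in LOADS.values())  # untrained student
total_b = sum(b for _, b in LOADS.values())  # trained student
print(f"Student A: {total_a}%, Student B: {total_b}%")  # Student A: 90%, Student B: 25%
```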

BEGIN NOW!

The more you do something the better at it you get, provided of course you’re doing it the right way. Students don’t really get that many opportunities to learn to negotiate the exam environment on their own, especially in the current context of moving to online non-invigilated exams, and so providing them with such training is critical. 

I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow me on Twitter @edmerger, and follow this blog for more thoughts on education in general.  

WHY MAFS IS A GOOD MODEL FOR EDUCATION

It ostensibly seems like a very tenuous link, but there is actually a strong parallel between the way the show is edited and the way educators should approach the delivery of their courses.

To maintain the intensity of the show’s central driver, the emotional connections the audience make with the actors*, the editors continuously replay certain scenes central to a theme or storyline they believe will generate the maximum reaction from the audience. Either deliberately or intuitively, by frequently recalling key content, the producers facilitate the audience’s retrieval of that content, which in turn strengthens their propensity to remember it. Being able to remember what has happened is critical for the audience to connect their feelings to the new drama, and to maintain the emotional intensity required for the show to be successful.

The show runs for nearly an hour on television or on demand, but the amount of ‘fresh’ material in each show amounts to about a third of the overall content. The show is unconscionably peppered with adverts, sometimes inserted after just 3 minutes of viewing and lasting just as long, but upon returning from the break, the show unfailingly recaps what happened just before the advertisements. The editors cleverly build drama before each advert break, and by replaying the intense moment upon returning, the audience’s memory of their pre-advert reaction is resurrected, strengthened, and can now be exploited to react to the next adventure presented. The editors also replay scenes from several shows ago to jog the audience’s memories of those events. This not only strengthens the memories of those episodic events, but crucially allows the producers to precisely position the audience’s emotional reaction as they structure and direct the connections between the scenes for them.

This continuous recapping of key content is how education works best. When new content is presented, the skilled tutor realises that in order for that content to become cemented in the learner’s memory it needs to be retrieved on several occasions, and over time. The learning needs to become part of long-term memory so that it can be drawn from when new content is introduced. This stems from the way our brains learn. Students construct new knowledge by making connections between new ideas and existing mental models, and then building on them. The ease with which the learner can recall these newly constructed understandings affects the load on the working memory, with automatic recall allowing the learner to make newer connections with comparative ease. Nuthall suggests that learners need at least 3 exposures to a concept before they have any hope of moving it into their long-term memories. By replaying key concepts many times, the learner’s construction of new content is supported. Again, either deliberately or intuitively, Married At First Sight has mastered this approach.

The imperative of replaying the key content to secure future recall, by logic, has implications for how much new content should be introduced at a time. Engelmann believes that the ratio of new content introduced to the practising and recapping of old should be approximately 20:80. I wonder how many courses are designed to facilitate such recapping. Quite simply, without dedicated opportunities for the old material to be practised and recapped over and over again, it is unlikely to actually be learned.
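As a quick planning check, Engelmann’s ratio can be applied to a lecture’s running time; a minimal sketch, where the 50-minute duration is my example, not Engelmann’s:

```python
def split_lecture(total_minutes, new_fraction=0.20):
    """Split lecture time using the roughly 20:80 new-to-recap ratio."""
    new_content = total_minutes * new_fraction
    practice_recap = total_minutes - new_content
    return new_content, practice_recap

new_content, practice_recap = split_lecture(50)
print(f"New content: {new_content:.0f} min, practice and recap: {practice_recap:.0f} min")
# New content: 10 min, practice and recap: 40 min
```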

Married At First Sight teaches us absolutely nothing in terms of how to be a good human, but it utilises what is understood about memory, and demonstrates that if you want someone to make connections to previous emotions, you have to recap the scenes that led to those emotions many times. The same is true for educators. If you want a learner to make connections to previously taught key concepts, you have to recap those key moments many times.  

*are they actors? If not professional, surely they are directed by the producers to behave in certain ways and to ask specific questions of each other?

I’m Paul Moss. I manage a team of learning designers. Follow me on @twitter

10 WAYS TO ENCOURAGE PARTICIPATION USING ZOOM

Participation is crucial in any learning environment, and a Zoom session is no different. Participation encourages attention, which is a requisite for learning. If students aren’t attending to the content or discussions on offer, they have no chance of encoding that content and then being able to use it at a later time: in other words, learning it. Being skillful in ensuring participation is therefore imperative.

Varying the way students are asked to participate is a powerful way to encourage engagement. Zoom can encourage participation in several different modes, which sometimes is not possible in a regular face to face session. Here’s how a teacher/tutor can engage students in a Zoom session:

  • Immediate quiz/questions
  • Explaining your method
  • Non-verbal feedback
  • Verbal questions
  • Written questions
  • Polls/quizzes
  • Breakout rooms
  • Screen sharing
  • Using the whiteboard
  • Modifying content

1. IMMEDIATE QUIZ/QUESTIONS

Because of the way our memories function, recapping content from previous sessions is essential to help knowledge move into the long-term memory, where it can then be recalled automatically to assist in processing new information. Students who arrive on time to your Zoom session should immediately be put to work, either doing a three or four question quiz on previous learning, or producing short answers to a question or two. Both of these are shared from your screen. This does two things: firstly, it activates prior knowledge that will assist in today’s learning, and secondly, it gets students involved straight away. Latecomers also won’t miss the new content. Answers to the quiz are briefly discussed, and then the current session begins with students’ minds active.

2. EXPLAINING YOUR METHOD

By articulating the strategies you will employ in the session up front you are likely to alleviate students’ anxieties about some of the processes they’ll experience during the session, and therefore encourage participation. Explaining why you are repeating questions, why you are talking about things from previous sessions, why you are asking for different types of responses and feedback, why you are insisting everyone responds before you move on, why you are using polls and why you are so keen on student participation and its effect on learning will help students feel more comfortable during the session and feel more able to participate.

3. NON-VERBAL FEEDBACK

You will have to turn on NON-VERBAL FEEDBACK in the settings.

Getting students to indicate a yes or no or a thumbs up encourages participation. Whilst you can’t guarantee such an assessment for learning truly proves students have understood your question, as students could just be guessing, or responding to avoid being asked why they haven’t, it still gets students involved. Even if a student only answers to avoid a follow-up question when the tutor sees they haven’t responded, they are still actively listening, which is a condition of learning. Varying the type of questions can also generate some humour and fun in a session: asking if students are breathing, or if they know that Liverpool football club is the best team in the world, for example. Non-verbal feedback is best used in triangulation with other assessment for learning options, such as verbal questions.

4. VERBAL QUESTIONS

Effective questioning is a powerful way to assess for learning and guarantee participation. The key to effective questioning is to ask, wait for students to process the question, and then check a number of answers before saying if the answers are right or wrong. Repeat the questions at least 3 times during the processing stage. Keeping the questions ‘alive’ is important to encourage participation because as soon as you provide an answer the majority of students will stop thinking about the answer – they have no need to keep thinking: allowing time for students to think about the answer gets the retrieval process activated as they search their minds for connections to previously encoded information. By randomly choosing students to answer you not only get a sense of their levels of understanding which allows you to pivot the next sequence if necessary, but it also keeps students on their toes as they realise that they may be called on next. This random selection of students will even work in a very large tutorial.

Sometimes it’s the little things. Be aware that you might naturally tend to favour interacting with those you can see in the session. Those without their cameras on, as in the image below, may not get asked as many questions, so an awareness of this and conscious questioning of unseen students will encourage a broad participation in the session.

5. WRITTEN QUESTIONS

Using the chat section to elicit answers to check for learning encourages participation. It is a variation on simply listening and answering verbally. Having students write down an answer shows whether or not they know the content. Dedicating time in a session to this process not only varies the type of participation, but can be a great indicator that students have the required knowledge to continue. Opening up the chat lines for student-to-student interactions also encourages participation: some will answer questions and feel empowered in the process, and some will just enjoy the interactions. It is important, though, that the chat area is monitored, as it can lead to the wrong kind of participation – like students chatting in the classroom/lecture theatre, which means they are not paying attention to the content. You can’t write/read and listen at the same time. I write about that here.

6. POLLS/QUIZZES

Using the poll function in Zoom is easy. You have to ensure it is turned on in the settings.

Once you’ve designed your questions, preferably before the session, you can then launch the poll.

Students then participate by responding. You then share the results, which at this point are anonymous, with the whole group. This serves as an assessment for learning opportunity, and you can pivot the session based on the answers if necessary. In answering the questions, students’ minds are activated as they search for knowledge in their schemata. There is an art to designing effective polls and multiple choice questions, and I discuss that art form here.  

A Canvas quiz can also be incorporated into the Zoom session. The advantage of this is that it has a variety of question types that further encourage participation. There are many other apps too, such as Quizizz, Kahoot, and Mentimeter, but they should be used with caution if not supported by your institution, as students may not want to sign up for platforms that essentially require them to surrender their data.

7. BREAKOUT ROOMS

Sending students into groups to discuss a concept or problem is a fantastic way to encourage participation. Homogeneous groups tend to work best, because those with vastly different levels of developed schema tend not to engage with each other as well as those with closer skill levels. It can sometimes benefit the more knowledgeable student to help a peer, but this relies on effective teaching skills to work, and in reality that is a big ask of a student. So setting groups up before a session may be your best bet.

Providing guidance on what to do when students are in the session is crucial, and it is worth popping into each group to see how it is progressing. As Tim Klapdor, an online expert at Adelaide University suggests, ‘Encourage discussion by promoting the students’ voice. Use provocation as a tool for discussion. Ask the students to explain and expand on concepts, existing understanding and their opinions on topics. Get students to add to one another’s contributions by threading responses from different students. Promote a sense of community by establishing open lines of communication through positive individual contributions.’ Appointing a member of the group as scribe is also worth doing, so that when the group returns to the main session they are able to share their screen and discuss their work/findings/solutions etc.

8. SCREEN SHARING

Getting students to share their screen encourages participation. This is especially effective coming out of a breakout room, but can be used at any point in a session. A student may be asked to demonstrate their workings of a problem, an answer to an essay question etc and the tutor can use it as a model to provide feedback. Of course caution would be used here, and only positive/constructive feedback provided.

9. USING THE WHITEBOARD

Sharing the whiteboard and getting students to interact with the content you or they put on there is a great way to encourage participation. You could model your thinking process in this medium and explain or annotate examples to discuss how students could gain a better understanding of the content. You could also have students annotate the board, asking them to underline key words, complete equations etc. Getting multiple students to add their own annotations is probably more beneficial with smaller groups, such as in the breakout rooms. Unfortunately in Zoom you can’t paste an image on the whiteboard, only text.

10. MODIFYING CONTENT

I firmly believe that there will only be a very small percentage of students who are genuinely unwilling to participate in this medium. Such students could be expected to use the chat option and only ‘send to the host’, for example, to ensure they are still participating. If you have tried all of the above strategies and your students are still not really getting involved, it is likely that they just don’t know the answers. As humans, we naturally want to succeed, and non-participation may indicate to you that you need to strip it back a bit, and come back to some foundational knowledge. It doesn’t matter what you think students should know; it is about what they actually do know, and the relevant development of their schema. It is better that you facilitate the construction of knowledge and provide questions that students will know the answers to, so they can build up their confidence in participating. By doing this, you will slowly but surely build their schemata so they will want to get involved consistently.

Online participation is essential for the session to be effective. If you have other tips or advice on how to encourage participation, please let me know and I’ll add them to the list.

I’m Paul Moss. I’m a learning designer. Follow me on @twitter

ASSESSMENT IN HE: pt 9 – Is a proctored/invigilated online exam the only answer?

This is the 9th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.

There are numerous tropes that aptly apply to the current context in higher education: necessity is the mother of invention, through adversity comes innovation, it’s the survival of the fittest, and all that. Our current adversity renders traditional invigilated exams impossible, and certainly requires us to be innovative to solve the dilemma, but instead of simply looking for technology to innovatively recreate what we have always done, maybe it’s time to think differently about how we design examination in the first place.

REFLECTION

Exams are summative assessments. They attempt to test a domain of knowledge and be the most equitable means of delivering an inference to stakeholders of what a student understands about that domain. They are certainly not the perfect assessment measure, as Koretz asserts here (conveyed in a blog by Daisy Christodoulou), but because they are standardised, and invigilated, they can and do serve a useful purpose.

Cheating is obviously easier in an online context and potentially renders the results of an exam invalid. Online proctoring companies, currently vigorously rubbing their hands together to the background sounds of ka-ching ka-ching, certainly mitigate some of these possibilities, with levels of virtual invigilation varying from locking screens to using webcams to monitor movements during the assessment. Timed-release exams also help to reduce plagiarism, because students have a limited amount of time to source other resources to complete the test, which inevitably self-penalizes them. I discuss this here. But the reality is, despite such measures, there is no way to completely eliminate willful deceit in online exams.

So, do we cut our losses and become resigned to the fact that cheating is inevitable and that despite employing online proctoring that some will still manage to do the wrong thing? I’m not sure that’s acceptable, so I think it’s worth considering that if we design summative assessment differently, the need for online proctoring may be redundant.

WHAT DO YOU WANT TO TEST IN AN EXAM?

Do you want to see how much a student can recall of the domain, or do you want to test how they can apply this knowledge? If you want to test recall, then proctoring is a necessity, as answers will be mostly identical in all correct student responses. But should that be what an exam tests?

Few would argue with the aspiration of education being to set students up to apply their knowledge to new contexts. By designing a sequence of learning that incrementally delivers key content through examples that help shape mental models of ‘how to do things’, and by continuously facilitating the retrieval of that knowledge to strengthen students’ memory throughout the course (after all, understanding is memory in disguise – Willingham), we would have supported the development of their schema. This development enables students to use what’s contained in the schema to transfer knowledge and solve new problems, potentially in creative ways.

So exams needn’t be of the recall variety. They can test the application of knowledge.

Whilst we can’t expect the application of that knowledge to be too far removed from its present context (see discussion below), a well designed exam, particularly one that requires written expression, would generate idiosyncratic answers that could then be cross-checked with Turnitin to determine integrity.

In this way, timed exams in certain courses* could effectively be open book, eliminating a large component of the invigilator’s role. This may seem counter-intuitive, but the reality is that even if a student can simply access facts they haven’t committed to memory, they are still unlikely to be able to produce a strong answer to a new problem. Their understanding of the content is limited simply because they haven’t spent enough time connecting it to previous knowledge, which is what generates eventual understanding. Students will spend most of their working memory’s capacity trying to solve the problem, and invariably, in a timed exam, self-penalize in the process. It’s like being given all the words of a new language and being asked to speak it in the next minute. It’s impossible.

In order to successfully use the internet – or any other reference tool – you have to know enough about the topics you’re researching to make sense of the information you find.

David Didau

4 REQUISITES OF A WELL DESIGNED OPEN EXAM

  1. Students have relevant schema
  2. Students have practised applying it to new near contexts
  3. Exam questions seek near transfer of knowledge
  4. Exam is timed and made available at a specific time interval – see here

I have just discussed the importance of schema, but if we want students to be able to apply that knowledge to new contexts we have to model and train them in doing so. This may seem obvious, but curricula are usually so crammed that educators often don’t have time to teach the application of knowledge. Or, as an ostensible antidote to such a context, some educators have fallen for the lure of problem-based or inquiry learning, where students are thrown into the deep end and expected, without sufficient schema, to solve complex problems. Such an approach doesn’t result in efficient learning, and often favours those with stronger cultural literacy, thus exacerbating the Matthew Effect. The ideal, then, is to support the development of a substantial schema and then allow space in the curriculum to help students learn how to apply that knowledge… and then test it in an open book exam.

The third requisite is the design of the exam questions. A strong design would ensure that the expected transfer of knowledge is not too ‘far’, and in fact is closer to ‘near’ transfer. We often extol education’s aspiration of transferring knowledge into new contexts, but the reality of this may render us less optimistic. The Wason experiments illustrate this well, suggesting that our knowledge is really quite specific, and that what we know about problem solving in one topic is not necessarily transferable to others. If you don’t believe me, try the experiment below, and click on the link above to see the answers.

Lots and lots of very smart people get this task wrong. What the experiment shows us is that it’s not how smart we are in being able to solve problems, but how much practice we’ve had related to the problem. So designing appropriate questions in an exam is crucial if we want the results to provide strong inferences about our students’ learning.   

CRITICISMS OF OPEN BOOK EXAMS

A criticism of open book exams is that students are lulled into a false sense of security and fail to study enough for the test, believing the answers will be easily accessible from their notes – the fallacy that you can just look it up in Google, as discussed above. However, because we know that most aspects of the domain need to be memorised to support the automaticity of their retrieval when engaging in new learning (cognitive load theory), and have thus incorporated retrieval practice into our teaching, the need for a student to actually look up information will be quite low.

EXPOSURE TO OPEN BOOK ASSESSMENT IS CRITICAL

Like any learnt skill, you have to build the knowledge associated with it and then practise until perfect. Never assume that knowing how to function in an open book exam is a given. It is important to train students in how to prepare for such an exam: helping them learn to summarise their notes to reflect key concepts, to organise their notes so they can be easily used in the exam, and to plan answers before committing them to writing.

A PEDAGOGICAL UPSHOT

As mentioned previously, memorising key facts is an essential aspect of the learning journey, but summative exams sometimes focus on this type of knowledge too much, or worse, expect transfer of that knowledge without providing the necessary practice in doing so. The upshot of open book exams is that they require not only that students have sufficient knowledge, but also sufficient practice in applying it, and so the open book exam becomes a paragon of good teaching.

*online open book exams may not be so easy in courses like mathematics and equation based courses that require identical solutions.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

ASSESSMENT IN HE: pt 8 – mitigating cheating

This is the 8th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.

LOOKING TO MINIMISE PLAGIARISM IN AN ONLINE ASSESSMENT?

When setting an online assessment, the fear of plagiarism is strong, despite the reality that the amount of online cheating doesn’t seem to be any different to the amount of cheating in face to face settings. But we still want to avoid it as much as possible. So, how can we ensure that students are submitting their own work?

  1. Be explicit about the damage plagiarism does. There is a lot of information for students about plagiarism and how they can avoid it here. Similarly, there is a lot of information for staff here, including an overview of using Turnitin here.
  2. Design assignments that build in difficulty incrementally. Supporting the building of their knowledge base will facilitate student success in assignments. Once motivation and schemata are established, students’ perceptions of assignments will change. I write about the way to avoid online proctoring here.
  3. USE TECH: set the assessment in Canvas for a specific time and use question banks and formula randomization.

By setting it for a specific time (see below for how to do this), you prevent students from seeing the assessment before it goes ‘live’. The opportunity for exchanging information with others is reduced, as is the ability to source answers from the internet. Of course, students may still chat with each other during the assessment window, but this practice tends to self-penalize, as their time to complete the assessment will be shorter having spent valuable time conferring with others.

The design of the assessment then is critical – if you overestimate the time it should take, you will open up time for conferring. It may be better to set shorter assessments that students will only complete in the given time if they know the content. If you take this path, it is important to explicitly tell the students that the assessment is difficult in terms of time – an unsuccessful student tends to give up more easily if there appears to be a randomness to achievement.

HOW TO SET AN ASSESSMENT FOR A SPECIFIED TIME

STEP 1 – Add an assignment and choose Turnitin as the submission type (for heavily text-based assignments). Select “External Tool” and then find “Turnitin”.

STEP 2 – Choose the relevant time to make the test open to students.


USING CANVAS QUESTION BANKS

Question banks help to randomise the questions a student receives. If you have 10 questions in the bank and assign only 6 to the exam, you reduce the chances that students will receive the same questions. A student trying to cheat will soon realise that their questions differ from their friends’. Of course, not all of them will, but a student who sees that several don’t match is less likely to bother, as it takes too long to work out which questions match and which don’t.
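To get a feel for how much overlap randomisation actually leaves, a quick back-of-the-envelope calculation helps. The sketch below (plain Python, not a Canvas feature) estimates the expected number of shared questions between two students when each paper independently draws a set number of questions from the bank.

```python
def expected_overlap(bank_size, drawn):
    # Each question lands on a given student's paper with probability
    # drawn / bank_size, so two independent papers share it with
    # probability (drawn / bank_size) ** 2. By linearity of expectation,
    # the expected overlap is bank_size times that.
    return bank_size * (drawn / bank_size) ** 2

# A 10-question bank with 6 questions per exam leaves two students
# sharing about 3.6 questions on average -- but neither knows which.
print(expected_overlap(10, 6))
```

The takeaway: growing the bank relative to the number of questions drawn shrinks the overlap, and, just as importantly, makes it unpredictable which questions are shared.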

USING CANVAS FORMULA QUESTIONS

I will shortly post a video here demonstrating the fantastic application of the formula question in Canvas, a question type that essentially allows you to change the variables in a numeric question so that multiple possibilities can be generated. In practice this means each student receives a different question of the same difficulty level, rendering the assessment still valid and equitable. So if John decides to call up Mary during the assessment and ask what she got for question 5, it will be pointless, as Mary has a different question – the answers simply won’t match.
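Conceptually, a formula question works like the hypothetical sketch below: the variable values are re-randomised per student while the question structure, and therefore its difficulty, stays fixed. (This is an illustration of the idea only, not Canvas’s actual implementation.)

```python
import random

def make_variant(rng):
    # Pick fresh values for the variables, keep the question structure
    # identical, and compute the answer from the same formula each time.
    distance = rng.randint(100, 300)  # km
    hours = rng.randint(2, 5)
    question = (f"A car travels {distance} km in {hours} hours. "
                f"What is its average speed in km/h?")
    answer = distance / hours
    return question, answer

# Seeding per student gives each a different but equally difficult variant.
question, answer = make_variant(random.Random(2024))
```

Every variant tests the same concept with the same cognitive demand, which is what keeps the assessment equitable despite the randomisation.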

FINAL THOUGHTS

Everyone likes to succeed. This is why some students plagiarise. Careful design of assessment that incrementally builds student knowledge and confidence will TEACH students to get better at assessment. This, together with explicit discussions about it, will help many students steer clear of plagiarism.

In the next post I will discuss how modified online examinations shouldn’t necessarily try to completely emulate traditional examinations using technology.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

An evidence based pedagogical plan for a Zoom tutorial

  1. Provide a pre-loaded/flipped worked problem – here’s why
  2. Begin with a quiz – here’s why
  3. Work through a problem analogous in difficulty to the pre-loaded problem – here’s why
  4. Present a new problem a little more difficult – here’s why
  5. Have students break out into homogeneous ability rooms – here’s why
  6. Have students demonstrate their solutions – here’s why
  7. Provide feedback  – here’s why
  8. Set more practice tasks – here’s why

More rationale here

I’m Paul Moss. I’m a learning designer. Follow me @edmerger  

ZOOM BOOM! Maximising virtual lessons

Using a virtual platform requires as much planning, preparation and expectation as a regular lesson. Of course there are differences to a face to face context, but like any good learning sequence, being aware of pedagogical principles will ensure the session is an active, useful learning experience.

HOSTING A SUCCESSFUL ZOOM SESSION REQUIRES 3 ESSENTIAL ELEMENTS:

  • knowing the tech
  • preparing the students and the session
  • managing the session

Knowing the tech

At Adelaide University we have developed a range of resources that will take the academic from the basics of downloading Zoom to their computer to being able to proficiently place students into virtual breakout classrooms here. I know many other universities also have good resources, like this one from UQ. We recommend the following:

  • Set yourself small goals in mastering one aspect of the tool at a time.
  • Practise amongst your peers and learn about the functionality of the platform.
  • Perhaps the most important thing to remember is that your skill with the tech will improve considerably with practice, and that what may seem overwhelming now will soon be an automatic teaching method.

Preparing the students and the session

  • Students:
    • Make sure the students understand the tech.
    • Provide clear and explicit instructions on how to download and use the tool – we have developed these already.
    • Provide clear and explicit expectations about participation and etiquette.

In the end, the online session is still a classroom, and the behaviours for learning you would expect in a classroom to maximise learning are the ones you should expect and demand in a virtual setting. As soon as your expectations drop because you aren’t confident the setting can produce learning, you’ll lose student engagement.

  • The session: it is imperative that you are clear about the objectives of the session. Is the goal to teach a new idea, check for understanding, correct misconceptions, extend thinking, or simply to practise and consolidate existing knowledge? When used in conjunction with a recorded lecture in Echo 360, or a pre-loaded or flipped activity in a discussion board, the Zoom tutorial is often used to check for understanding. Have clearly sectioned elements to the tutorial:
    • a recap of the last session (an introductory retrieval quiz is best)
    • a modelled example to introduce the desired content
    • opportunity for students to demonstrate their understanding
    • opportunity for students to ask questions
    • opportunity to practise

Managing the session

Always remember the session is an opportunity for learning, and what you would do in a regular learning context is what has to be applied here too.

  • Start on time – have students log in 5 minutes before the start so you are not waiting for stragglers or being interrupted when the tutorial begins by having to add them manually to the session. The waiting room can have the session rules attached.
  • As soon as the session begins, have students complete a recap quiz – this also provides something for punctual students to do whilst you’re waiting for others to join. Retrieval is everything in learning!
  • Go through answers briefly
  • Discuss the expectations and rules of engagement of the current session. Repeat these many times over lots of sessions, so the process eventually becomes automatic for students.
  • Be friendly and encouraging – and patient whilst students become familiar with the process
  • Go through an example similar in difficulty to the pre-loaded activity as a warm up, narrating your workings. See here for more on the power of worked examples.
  • Present the pre-loaded activity
  • Check for understanding
    1. By asking questions: don’t take one or two student responses as an indication of the whole group’s understanding. See here for how to ask the right questions.
    2. By getting students to upload or show their learning.
  • Use at least 2 student examples to provide feedback – discussing their strengths and weaknesses will be another teaching moment
  • Present another activity of analogous difficulty to strengthen understanding. Consider breaking cohort into homogeneous groups, have them discuss the problem and present a consensus back to the main cohort’s discussion page.  
  • Present a final activity that is harder

Successful Zoom sessions will offer you a unique opportunity to check for understanding or to extend student knowledge.  It also offers an opportunity to place yourself in the shoes of the learner, the learner who is constantly introduced to a lot of new content and problems and may feel overwhelmed at times in the process. The more conscious you are of helping students manage the cognitive load when introducing new material, the better you will design and sequence that learning. Concomitant with that is articulating your method and helping students become stronger at understanding the metacognitive process.

Mastering Zoom will take practice, but that’s true of everything when you first begin.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger  

ASSESSMENT IN HE – pt 7: ONLINE QUIZZES

This is the 7th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.

DESIGNING EFFECTIVE MULTIPLE CHOICE QUESTIONS

Multiple choice assessments have anecdotally been the pariah of the assessment family. But their perceived inferiority as a valid form of assessment is unfounded, as research by Smith and Karpicke (2014) attests. However, for the format to be just as effective as short answer questions, the design of the test requires careful consideration, and I shall now outline the key characteristics of an effective multiple choice test.

1 BUILD THE LEVEL OF DIFFICULTY, gradually

Understanding schema is everything, as always. An awareness that you are building your students’ schema of a topic will help shape the design of your multiple choice questions. Butler, Marsh, Goode and Roediger (2006) discovered that adding too many lures as distractors for the novice learner not only negatively impacted motivation, but also inhibited later recall of that content when compared to the performance of a student with better developed schema. This makes sense: novices cannot yet distinguish between the distractors because their knowledge is not secure enough. While it may be tempting to make the questions harder by adding in lots of other knowledge, it is not an effective strategy.

We also know that a novice may learn from the ‘incorrect’ lures/distractors presented (Marsh, Roediger & Bjork, 2007), further evidence that we need to be cautious and precise when designing multiple choice questions for novice learners.

KEY TAKE AWAY

The design of the questions should emulate the way the knowledge was taught: incrementally building in difficulty.

2 MASTER SPECIFIC KNOWLEDGE FIRST

Initially, individual pieces of knowledge that form part of a larger key concept need to be retrieved. Much of the content of multiple choice questions at this stage of the learning journey would be based on factual knowledge that simply has to be retained to help shape understanding of more complex knowledge at a later time. The advantage of using these questions to dominate the fundamental stages of your retrieval strategy is that you will be able to isolate misconceptions and gaps in learning immediately; the reality is that if a student is struggling at this stage, then they either haven’t studied or paid enough attention to the content. By approaching the design of your assessment in this way, you are ensuring that your students can walk before you expect them to run.

KEY TAKE AWAY

As a retrieval strategy, multiple choice tests should help a student master individual components of the course before they strive to test several and eventually all components of the course.  

3 ACTIVELY ENGAGE RETRIEVAL

There are several design choices that strengthen the validity of a multiple choice question being able to assess learning.

  • Brame, C., (2013) has written a superb resource on multiple choice design considering factors such as writing an appropriate stem, suitable alternatives (distractors), why none of the above and all of the above make it easier to guess through deduction (which means you’re not testing what you want to test) and how to engage higher order thinking.
  • Odegard, T. N., & Koen, J. D. (2007) suggest that certain options, such as ‘none of the above’, shouldn’t be used, as they potentially don’t encourage retrieval: none of the relevant information is being recalled. Also of concern is that one of the wrong answers may incidentally and inadvertently be retrieved.
  • Answers should include at least 2 plausible options, otherwise a student can choose an answer by elimination, which is not necessarily strengthening the retrieval of the correct answer. For example, a poor design would be: What is the capital of Australia? A) London, B) Canberra C) Paris, D) Berlin. In this question the student doesn’t have to know it is Canberra, they could just eliminate the other options that they would have heard of before. If option D) was Sydney, then they would have to think and retrieve harder.
  • The number of plausible options should increase as the retrieval stretches to include multiple components of the course.
  • As the course proceeds and the domain of knowledge increases, the range of questions increases to include previous learning as well as the current learning. Adding options that are wrong in the current question but correct to another question has been shown to be effective: Little, J. L., Bjork, E. L., Bjork, R. A., & Angello, G. (2012). This strategy is only useful however when a student has a well developed schema about the content, otherwise incorrect answers could be again inadvertently retrieved, but now on two occasions.
  • Feedback AS RETRIEVAL – Besides automatic marking, multiple choice questions provide 2 extra bonuses: they help make feedback more precise, and a prepared discussion of why certain plausible options are not quite the right answer presents another excellent retrieval opportunity as students see the correct answer in context and how it is connected to other pieces of knowledge. Below is a good example of this:

KEY TAKE AWAY

There is a science and an art to designing multiple choice questions. Understanding the research on what works and what doesn’t will determine whether your design is an effective assessment for/of learning tool and an excellent retrieval activity, or simply a tokenistic waste of time.

4 MITIGATE GUESSING

By asking several questions about the same concept, the tutor can be reasonably confident that students haven’t simply guessed their way to success.

The same can be done by ensuring there are at least 4 options for each question: every extra option statistically reduces the chance of guessing correctly.
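The arithmetic behind this is simple binomial probability. The sketch below computes the chance of a student scoring at least half marks on a 10-question quiz by guessing alone, and shows how much the fourth option shrinks it.

```python
from math import comb

def p_guess_at_least(k, n, options):
    # Probability of at least k correct answers out of n questions
    # when guessing blindly, with the given number of options each.
    p = 1 / options
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Scoring 5+ out of 10 by pure guessing: roughly 21% with 3 options
# per question, but only about 8% with 4 options.
print(p_guess_at_least(5, 10, 3))
print(p_guess_at_least(5, 10, 4))
```

More questions and more plausible options both push the probability of a lucky pass towards zero, which is exactly why they strengthen the inferences you can draw from the results.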

KEY TAKE AWAY

If you provide enough questions, and enough options inside those questions, statistically you’ll be in a better position to assess learning.

5 MASTERY PATHWAYS

Eventually, the multiple choice test you design will strive to assess not just individual pieces of knowledge, but more of the domain. The domain will be made of many individual components, which are in turn made up of many individual pieces of knowledge. When designing the domain tests, questions should be created with a mastery approach in mind, where there will be 3 streams of knowledge: core, developmental, and foundational.

A student who incorrectly answers a question in the core stream shouldn’t continue with the quiz in this ‘core’ stream of questions until they can address the error: the error produces a learning gap that can be compounded later if not fixed now.

A mastery pathway enables this by redirecting the student to a ‘developmental’ stream of questions to help strengthen and eventually secure the knowledge necessary to return to the core stream. The developmental stream comprises 3–4 questions that are hierarchical in difficulty, eventually building to be analogous to the original question. Students who simply made a mistake or pressed the wrong choice, for example, are encouraged by this process to be more precise in the future – they are also presented with a further retrieval opportunity, and so still gain from the perceived waste of time, provided they are aware of the teaching strategy (more on the power of metacognition in the next post).

If a student is unsuccessful in the developmental stream they are indicating that they need further knowledge building. Such a student would be redirected to a ‘foundational’ stream, where questions take the student back to basic factual and elementary pieces of knowledge. Success in this stage provides access back to the developmental stream and then eventually back to the core stream, and crucially, possessing the required knowledge to progress in the course. The video below illustrates this process.
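The routing between the three streams can be summarised as a tiny state machine. The sketch below is a hypothetical encoding of the pathway just described; the stream names and promotion rules mirror the text, not any particular LMS feature.

```python
def next_stream(current, answered_correctly):
    # Wrong answers drop the student down a level (core -> developmental
    # -> foundational); success promotes them back up one level until
    # they rejoin the core stream.
    transitions = {
        ("core", True): "core",
        ("core", False): "developmental",
        ("developmental", True): "core",
        ("developmental", False): "foundational",
        ("foundational", True): "developmental",
        ("foundational", False): "foundational",
    }
    return transitions[(current, answered_correctly)]

# A wrong core answer, then a correct developmental answer, and the
# student is back in the core stream with the gap addressed.
stream = next_stream("core", False)   # -> "developmental"
stream = next_stream(stream, True)    # -> "core"
```

Encoding the pathway this explicitly is also a useful design exercise: it forces you to decide in advance what every combination of stream and outcome should do.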

KEY TAKE AWAY

It may take some students longer to arrive at the required level of knowledge, but at least they will eventually arrive – that is not something that every teacher could guarantee presently.

THE CURSE OF TIME

Of course, designing a multiple choice sequence is a time-consuming affair. Sometimes coming up with the ‘wrong’ distractor options is actually quite difficult. Having to then design extra questions to satisfy a mastery pathway is even more demanding. But, once created, the multiple choice test can be used multiple times, over many years, and will have significant benefits for students who present with learning gaps. It will also save you time in the long run, as less energy will have to be spent addressing gaps further into a course.

So, in summary, the key things to consider when designing multiple choice questions are: build the level of difficulty gradually, master specific knowledge first, actively engage retrieval, mitigate guessing, and provide mastery pathways.

References

Butler, A. C., Marsh, E. J., Goode, M. K., & Roediger, H. L., III (2006). When additional multiple-choice lures aid versus hinder later memory. Applied Cognitive Psychology, 20, 941-956.

Little, J. L., Bjork, E. L., Bjork, R. A., & Angello, G. (2012). Multiple-choice tests exonerated, at least of some charges: Fostering test-induced learning and avoiding test-induced forgetting. Psychological Science, 23, 1337-1344.

Marsh, E. J., Roediger, H. L., & Bjork, R. A. (2007). The memorial consequences of multiple-choice testing. Psychonomic Bulletin & Review, 14, 194-199. https://doi.org/10.3758/BF03194051

Odegard, T. N., & Koen, J. D. (2007). “None of the above” as a correct and incorrect alternative on a multiple-choice test: Implications for the testing effect. Memory, 15, 873-885.

Smith, M. A., & Karpicke, J. D. (2014). Retrieval practice with short-answer, multiple-choice, and hybrid formats. Memory, 22, 784-802.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

ASSESSMENT IN HE: pt 6 – THE POWER OF RETRIEVAL

This is the 6th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.

Memory is a fascinating thing. Essentially, the more we replay something that has happened to us in our mind, the stronger the chance that it will move into long-term memory, and thus be remembered for some time. The replaying can take many forms: someone may ask you a question about your day, or about something they know you heard on the news, or you may simply sit on the train on your way home going over an incident that really annoyed you. All of these retrievals of already-happened moments strengthen the memory of them. However, the strength of the memory is related to how much work you have to do to replay it. If you merely think about it, the memory won’t be as strong as if you had to tell someone about it (Roediger and Karpicke, 2006).

This theory of retrieval has enormous implications for education.

If you want students’ memory of key concepts to improve, provide opportunities for them to retrieve that content. One of the most efficient ways of doing this is to ‘test’ student knowledge using low stakes assessment. This can be done formatively by asking questions and by getting students to write down or represent what they know. This process has several benefits:

  • It helps you to see what students do or don’t know, which means you can adjust your learning sequences if necessary to correct misconceptions
  • It helps students strengthen the neural pathways along which the information travels, making the information easier to remember at a later stage.
  • The ease of remembering frees the working memory for new information to be encoded more efficiently

SPACING RETRIEVAL

In 1913, Ebbinghaus came to the conclusion that when learning something new, ‘With any considerable number of repetitions a suitable distribution of them over a space of time is decidedly more advantageous than the massing of them at a single time.‘ The theory came to light after he realised that we begin to forget information as soon as we encode it. The ‘forgetting curve’ demonstrates this aptly. When Ebbinghaus interrupted the forgetting by retrieving the information at certain points, he could consequently ‘remember’ the information at a later date.
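A common stylised way to model this (an illustration only, not Ebbinghaus’s exact fitted function) is exponential decay of retention, with each successful retrieval boosting the memory’s storage strength and so flattening the curve:

```python
import math

def retention(days, strength):
    # Stylised forgetting curve: retention decays exponentially with
    # time, and decays more slowly as storage strength grows.
    return math.exp(-days / strength)

# Model each successful retrieval as doubling storage strength --
# this is the 'interruption' of the forgetting curve.
initial_strength = 2.0
after_two_retrievals = initial_strength * 2 * 2

# A week after initial study, the practised memory is retained far
# better than one left to decay at its initial strength.
print(retention(7, after_two_retrievals))
print(retention(7, initial_strength))
```

The exact shape and the doubling rule are assumptions for illustration; the point the model captures is Ebbinghaus’s own: spaced retrieval interrupts forgetting, so the same information is still retrievable much later.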

So, interrupting the forgetting curve by including retrieval in your sequence of learning is paramount. But the timing of that interruption matters. Bjork suggests that if you have students retrieving information too soon after encoding, the effects on memory are weak (high retrieval strength but low storage strength), but if you wait too long, the information may need to actually be retaught. Joe Kirby explains this well here. There seems to be a sweet spot in the timing of retrieval practice. Of course, students will vary in what that timing should be, depending on various factors, including how attentive they were when first presented with the content. However, effective teaching recognises that students invariably need access to information on at least 3 occasions for it to have a chance of being converted into long term memory (Nuthall), and so continuously returning to previously taught content by weaving it into the current learning is a must.

As already stated, the HOW of delivering retrieval is pertinent. The ideal is to create a situation that is not too easy, quite challenging, yet not too hard. Bjork alludes to this notion when he discusses ‘desirable difficulties’, where testing makes the activity ‘desirable because it can trigger encoding and retrieval processes that support learning, comprehension, and remembering.’

What is important however, like in all learning design, is to ascertain where students are on the learning continuum before creating the retrieval: ‘If, however, the learner does not have the background knowledge or skills to respond to them successfully, they become undesirable difficulties.’ This insight rationalises why simply re-reading notes or a textbook has been consistently found to be significantly less impactful on learning than actively demanding a response from a student.

Engaging students in having to actively retell what they know can take several forms, including completing a concept map about a topic, writing down everything one knows about an idea, or answering questions about the content. The really useful Retrievalpractice.org has a host of ways to enact the strategy here. It’s a practice that shouldn’t be bound by sector, or discipline, and in fact should be implemented as soon as learning begins, as some primary teachers are now demonstrating.

But perhaps the most effective form of retrieval practice is the test, where students have to search their memories to produce answers. The next post discusses the power of the online multiple choice test.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger