ZOOM BOOM! Maximising virtual lessons

Using a virtual platform requires as much planning, preparation and expectation as a regular lesson. Of course there are differences from a face to face context, but like any good learning sequence, being aware of pedagogical principles will ensure the session is an active, useful learning experience.

HOSTING A SUCCESSFUL ZOOM SESSION REQUIRES 3 ESSENTIAL ELEMENTS:

  • knowing the tech
  • preparing the students and the session
  • managing the session

Knowing the tech

At Adelaide University we have developed a range of resources that will take the academic from the basics of downloading Zoom to their computer through to proficiently placing students into virtual breakout classrooms here. I know many other universities also have good resources, like this one from UQ. We recommend the following:

  • Set yourself small goals in mastering one aspect of the tool at a time.
  • Practise amongst your peers and learn about the functionality of the platform.
  • Perhaps the most important thing to remember is that your skill with the tech will improve considerably with practice, and that what may seem overwhelming now will soon become an automatic teaching method.

Preparing the students and the session

  • Students:
    • Make sure the students understand the tech.
    • Provide clear and explicit instructions on how to download and use the tool – we have developed these already.
    • Provide clear and explicit expectations about participation and etiquette.

In the end, the online session is still a classroom, and the behaviours you would expect in a classroom to maximise learning are the ones you should expect and demand in a virtual setting. As soon as your expectations drop because you aren’t confident the setting can produce learning, you’ll lose student engagement.

  • The session: it is imperative that you are clear about what the objectives of the session are. Is the goal to teach a new idea, check for understanding, correct misconceptions, extend thinking or simply to practise and consolidate existing knowledge? When used in conjunction with a recorded lecture in Echo 360, or a pre-loaded or flipped activity in a discussion board, the Zoom tutorial is often used to check for understanding. Have clearly sectioned elements to the tutorial:
    • a recap of the last session (an introductory retrieval quiz is best)
    • a modelled example to introduce the desired content
    • opportunity for students to demonstrate their understanding
    • opportunity for students to ask questions
    • opportunity to practise

Managing the session

Always remember the session is an opportunity for learning, and what you would do in a regular learning context is what has to be applied here too.

  • Start on time – have students log in 5 minutes before the start so you are not waiting for stragglers or being interrupted once the tutorial begins by having to add them manually to the session. The waiting room can have the session rules attached, as seen above.
  • As soon as the session begins have students complete a recap quiz – this also provides something for punctual students to do whilst you’re waiting for others to join. Retrieval is everything in learning!
  • Go through answers briefly
  • Discuss the expectations and rules of engagement of the current session. Repeat these many times over lots of sessions, so the process eventually becomes automatic for students.
  • Be friendly and encouraging – and patient whilst students become familiar with the process
  • Go through an example similar in difficulty to the pre-loaded activity as a warm up, narrating your workings. See here for more on the power of worked examples.
  • Present the pre-loaded activity
  • Check for understanding
    1. By asking questions: don’t take one or two student responses as an indication of the whole group’s understanding. See here for how to ask the right questions.
    2. By getting students to upload or show their learning.
  • Use at least 2 student examples to provide feedback – discussing their strengths and weaknesses will be another teaching moment
  • Present another activity of analogous difficulty to strengthen understanding. Consider breaking the cohort into homogeneous groups, having them discuss the problem and present a consensus back to the main cohort’s discussion page.
  • Present a final activity that is harder

Successful Zoom sessions will offer you a unique opportunity to check for understanding or to extend student knowledge. They also offer an opportunity to place yourself in the shoes of the learner, who is constantly introduced to a lot of new content and problems and may feel overwhelmed at times in the process. The more conscious you are of helping students manage the cognitive load when introducing new material, the better you will design and sequence that learning. Concomitant with that is articulating your method and helping students become stronger at understanding the metacognitive process.

Mastering Zoom will take practice, but that’s true of everything when you first begin.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger  

ASSESSMENT IN HE – pt 7: ONLINE QUIZZES

This is the 7th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.

DESIGNING EFFECTIVE MULTIPLE CHOICE QUESTIONS

Multiple choice assessments have anecdotally been the pariah of the assessment family. But their perceived inferiority as a valid form of assessment is unfounded, as research by Smith and Karpicke (2014) attests. However, for the format to be just as effective as short answer questions, the design of the test requires careful consideration, and I shall now outline the key characteristics of an effective multiple choice test.

1 BUILD THE LEVEL OF DIFFICULTY, gradually

Understanding schema is everything, as always. An awareness that you are building your students’ schema of a topic will help shape the design of your multiple choice questions. Butler, Marsh, Goode and Roediger (2006) discovered that presenting the novice learner with too many lures as distractors not only negatively impacted motivation, but also inhibited later recall of that content when compared with the performance of a student with better developed schema. This makes sense: novices are not yet able to distinguish between the distractors because their knowledge is not secure enough. While it may be tempting to make the questions harder by adding in lots of other knowledge, it is not an effective strategy.

We also know that there is the possibility that a novice will learn from the ‘incorrect’ lures/distractors presented (Marsh, Roediger, & Bjork, 2007), further evidence that we need to be cautious and precise when designing multiple choice questions for novice learners.

KEY TAKE AWAY

The design of the questions should emulate the way the knowledge was taught: incrementally building in difficulty.

2 MASTER SPECIFIC KNOWLEDGE FIRST

Initially, individual pieces of knowledge that form part of a larger key concept need to be retrieved. Much of the content of multiple choice questions at this stage of the learning journey would be based on factual knowledge that simply has to be retained to help shape understanding of more complex knowledge at a later time. The advantage of using these questions to dominate the fundamental stages of your retrieval strategy is that you will be able to isolate misconceptions and gaps in learning immediately; the reality is that if a student is struggling at this stage, then they either haven’t studied or paid enough attention to the content. By approaching the design of your assessment in this way, you are ensuring that your students can walk before you expect them to run.

KEY TAKE AWAY

As a retrieval strategy, multiple choice tests should help a student master individual components of the course before they strive to test several and eventually all components of the course.  

3 ACTIVELY ENGAGE RETRIEVAL

There are several design choices that strengthen the validity of a multiple choice question being able to assess learning.

  • Brame (2013) has written a superb resource on multiple choice design, considering factors such as writing an appropriate stem, choosing suitable alternatives (distractors), why ‘none of the above’ and ‘all of the above’ make it easier to guess through deduction (which means you’re not testing what you want to test), and how to engage higher order thinking.
  • Odegard, T. N., & Koen, J. D. (2007) suggest that there are certain questions, such as ‘none of the above’ that you shouldn’t ask, as they potentially don’t encourage retrieval as none of the relevant information is being recalled. Also, of concern is that one of the wrong answers may incidentally and inadvertently be retrieved. 
  • Each question should include at least 2 plausible options, otherwise a student can choose an answer by elimination, which is not necessarily strengthening the retrieval of the correct answer. For example, a poor design would be: What is the capital of Australia? A) London, B) Canberra, C) Paris, D) Berlin. In this question the student doesn’t have to know it is Canberra; they could just eliminate the other options that they would have heard of before. If option D) were Sydney, then they would have to think and retrieve harder.
  • The number of plausible options should increase as the retrieval stretches to include multiple components of the course.
  • As the course proceeds and the domain of knowledge increases, the range of questions increases to include previous learning as well as the current learning. Adding options that are wrong in the current question but correct for another question has been shown to be effective (Little, Bjork, Bjork, & Angello, 2012). This strategy is only useful, however, when a student has a well-developed schema of the content; otherwise incorrect answers could again be inadvertently retrieved, now on two occasions.
  • Feedback AS RETRIEVAL – Besides automatic marking, multiple choice questions provide 2 extra bonuses: they help make feedback more precise, and a prepared discussion of why certain plausible options are not quite the right answer presents another excellent retrieval opportunity as students see the correct answer in context and how it is connected to other pieces of knowledge. Below is a good example of this:

KEY TAKE AWAY

There is a science and an art to designing multiple choice questions. Understanding the research on what works and what doesn’t will determine whether your design becomes an effective assessment for/of learning tool and an excellent retrieval activity, or simply a tokenistic waste of time.

4 MITIGATE GUESSING

By asking several questions about the same concept, the tutor can be confident that students have not simply guessed their way to success.

The same can be done by ensuring each question has at least 4 answer options: every extra option statistically reduces the chance of guessing correctly.
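To make the arithmetic concrete, here is a minimal sketch (in Python, with purely illustrative question and option counts) of how quickly the chance of guessing a whole concept correctly collapses as questions and plausible options are added:

```python
# Chance of guessing every question about a concept correctly, assuming each
# question is answered by picking one option completely at random.
def p_guess_all(num_questions: int, options_per_question: int) -> float:
    return (1 / options_per_question) ** num_questions

for n_q in (1, 3, 5):
    for n_opt in (3, 4, 5):
        print(f"{n_q} question(s), {n_opt} options each: {p_guess_all(n_q, n_opt):.2%}")
```

A single question with 4 options still leaves a 25% chance of a lucky guess; five questions with 4 options each drop that to roughly 0.1%.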

KEY TAKE AWAY

If you provide enough questions, and enough options inside those questions, statistically you’ll be in a better position to assess learning.

5 MASTERY PATHWAYS

Eventually, the multiple choice test you design will strive to assess not just individual pieces of knowledge, but more of the domain. The domain will be made of many individual components, which are in turn made up of many individual pieces of knowledge. When designing the domain tests, questions should be created with a mastery approach in mind, where there will be 3 streams of knowledge: core, developmental, and foundational.

A student who incorrectly answers a question in the core stream shouldn’t be encouraged to continue with the quiz in this ‘core’ stream of questions until they can address the error: the error produces a learning gap that can be compounded later if not fixed now.

A mastery pathway enables this by redirecting the student to a ‘developmental’ stream of questions to help strengthen and eventually secure the correct knowledge necessary to return to the core stream. The developmental stream is comprised of 3–4 questions that are hierarchical in difficulty, eventually building to be analogous to the original question. Students who simply made a mistake or pressed the wrong choice, for example, are encouraged by this process to be more precise in the future – they are also presented with a further retrieval opportunity, and so still gain from the perceived waste of time, provided they are aware of the teaching strategy (more on the power of metacognition in the next post).

If a student is unsuccessful in the developmental stream they are indicating that they need further knowledge building. Such a student would be redirected to a ‘foundational’ stream, where questions take the student back to basic factual and elementary pieces of knowledge. Success at this stage provides access back to the developmental stream and then eventually back to the core stream, and, crucially, ensures the student possesses the required knowledge to progress in the course. The video below illustrates this process.
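To make the routing explicit, below is a minimal sketch of the redirection logic described above. It assumes questions are represented as simple ids and that a hypothetical is_correct callback grades each attempt; no real LMS or quiz-tool API is involved.

```python
def mastery_pathway(core, developmental, foundational, is_correct):
    """Yield the questions a student actually sees, in order."""
    for question in core:
        yield question
        if is_correct(question):
            continue
        # Wrong core answer: redirect to the developmental stream.
        yield from developmental
        if not all(is_correct(q) for q in developmental):
            # Developmental stream also failed: rebuild foundational
            # knowledge, then return through the developmental stream.
            yield from foundational
            yield from developmental
        # Finish by re-attempting the original question (in practice, an
        # analogous one) before moving on through the core stream.
        yield question

# Hypothetical attempt: the first core question is missed, everything else is right.
answers = {"core-1": False}
seen = list(mastery_pathway(
    core=["core-1", "core-2"],
    developmental=["dev-1", "dev-2", "dev-3"],
    foundational=["found-1", "found-2"],
    is_correct=lambda q: answers.get(q, True),
))
print(seen)  # ['core-1', 'dev-1', 'dev-2', 'dev-3', 'core-1', 'core-2']
```

In practice the same branching would be configured inside whatever quiz tool you use; the sketch only shows the order in which questions are presented.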

KEY TAKE AWAY

It may take some students longer to arrive at the required level of knowledge, but at least they will eventually arrive – which is not something every teacher can currently guarantee.

THE CURSE OF TIME

Of course, designing a multiple choice sequence is a time consuming affair. Sometimes coming up with the ‘wrong’ distractor options is actually quite difficult. Having to then design extra questions to satisfy a mastery pathway is even more demanding. But once created, the multiple choice test can be reused many times, over many years, and will have significant benefits for students who present with learning gaps. It will also save you time in the long run, as less energy will have to be spent addressing gaps further into the course.

So, in summary, the key things to consider when designing multiple choice questions are: build the level of difficulty gradually, master specific knowledge first, actively engage retrieval, mitigate guessing, and build in mastery pathways.

References

 Butler, A. C., Marsh, E. J., Goode, M. K., & Roediger, H. L., III (2006). When additional multiple-choice lures aid versus hinder later memory. Applied Cognitive Psychology, 20, 941-956.

Little, J. L., Bjork, E. L., Bjork, R. A., & Angello, G. (2012). Multiple-choice tests exonerated, at least of some charges: Fostering test-induced learning and avoiding test-induced forgetting. Psychological Science, 23, 1337-1344.

Marsh, E. J., Roediger, H. L., Bjork, R. A., et al. (2007). The memorial consequences of multiple-choice testing. Psychonomic Bulletin & Review, 14, 194–199. https://doi.org/10.3758/BF03194051

Odegard, T. N., & Koen, J. D. (2007). “None of the above” as a correct and incorrect alternative on a multiple-choice test: Implications for the testing effect. Memory, 15, 873-885.

Smith, M. A., & Karpicke, J. D. (2014). Retrieval practice with short-answer, multiple-choice, and hybrid formats. Memory, 22, 784–802.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

ASSESSMENT IN HE: pt 6 – THE POWER OF RETRIEVAL

This is the 6th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.

Memory is a fascinating thing. Essentially, the more we replay something that has happened to us in our mind, the stronger the chance that it will move into long-term memory, and thus be remembered for some time. The replaying can take many forms. It may be that someone asks you a question about your day, or about something they know you heard on the news, or simply that you sit on the train on the way home going over an incident that really annoyed you. All of these retrievals of moments that have already happened strengthen the memory of them. However, the strength of the memory is related to how much work you have to do to replay it. If you merely think about it, the memory won’t be as strong as if you had to tell someone about it (Roediger and Karpicke, 2006).

This theory of retrieval has enormous implications for education.

If you want students’ memory of key concepts to improve, provide opportunities for them to retrieve that content. One of the most efficient ways of doing this is to ‘test’ student knowledge using low stakes assessment. This can be done formatively by asking questions and by getting students to write down or represent what they know. This process has several benefits:

  • It helps you to see what students do or don’t know, which means you can adjust your learning sequences if necessary to correct misconceptions
  • It helps students strengthen the neural pathways the information flows along, which makes remembering the information easier at a later stage.
  • The ease of remembering frees the working memory for new information to be encoded more efficiently

SPACING RETRIEVAL

In 1913, Ebbinghaus came to the conclusion that when learning something new, ‘With any considerable number of repetitions a suitable distribution of them over a space of time is decidedly more advantageous than the massing of them at a single time.‘ The theory came to light after he realised that we begin to forget information as soon as we encode it. The ‘forgetting curve’ demonstrates this aptly. When Ebbinghaus interrupted the forgetting by retrieving the information at certain points, he could consequently ‘remember’ the information at a later date.

So, interrupting the forgetting curve by including retrieval in your sequence of learning is paramount. But the timing of that interruption matters. Bjork suggests that if you have students retrieving information too soon after encoding, the effects on memory are weak (high retrieval strength but low storage strength), but if you wait too long, the information may need to actually be retaught. Joe Kirby explains this well here. There seems to be a sweet spot in terms of timing the retrieval practice. Of course, students will vary in what that timing should be, depending on various factors, including how attentive they were when first presented with the content. However, effective teachers will recognise that students invariably need access to information on at least 3 occasions for it to have a chance of being converted into long term memory (Nuthall), and so continuously returning to previously taught content by weaving it into the current learning is a must.
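As a toy illustration of that interruption (a simplified exponential model with made-up numbers, not Ebbinghaus’s actual data), the sketch below assumes each successful retrieval makes the memory more stable, so retention decays more slowly after every review:

```python
import math

def retention(days_since_review: float, stability: float) -> float:
    """Simplified forgetting curve: retention decays exponentially over time."""
    return math.exp(-days_since_review / stability)

review_days = [0, 2, 7, 21]            # illustrative spaced-retrieval schedule
for day in range(0, 31, 5):
    done = [d for d in review_days if d <= day]
    stability = 2.0 * len(done)        # assumption: stability grows per retrieval
    print(f"day {day:2d}: retention ≈ {retention(day - done[-1], stability):.2f}")
```

The exact numbers are meaningless; the point is the shape: without the later reviews retention keeps sliding towards zero, whereas each well-timed retrieval resets and flattens the curve.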

As already stated, the HOW of delivering retrieval is pertinent. What is ideal is to create a situation that is not too easy, quite challenging, yet not too hard. Bjork alludes to this notion when he discusses ‘desirable difficulties’, where the testing makes the activity ‘desirable because it can trigger encoding and retrieval processes that support learning, comprehension, and remembering.’

What is important, however, like in all learning design, is to ascertain where students are on the learning continuum before creating the retrieval: ‘If, however, the learner does not have the background knowledge or skills to respond to them successfully, they become undesirable difficulties.’ This insight rationalises why simply re-reading notes or a textbook has been consistently found to be significantly less impactful on learning than actively demanding a response from a student.

Engaging students in having to actively retell what they know can take several forms, including completing a concept map about a topic, writing down everything one knows about an idea, or answering questions about the content. The really useful Retrievalpractice.org has a host of ways to enact the strategy here. It’s a practice that shouldn’t be bound by sector, or discipline, and in fact should be implemented as soon as learning begins, as some primary teachers are now demonstrating.

But perhaps the most effective form of retrieval practice is the test, where students have to search their memories to produce answers. The next post discusses the power of the online multiple choice test.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

ASSESSMENT IN HE pt 5 – Modifying tutorials for remote learners

In the last post I discussed the importance of the tutorial. It is a wonderful chance for students to either develop understanding, consolidate it, or extend it. However, it must be carefully designed with the tutor being acutely aware of the position each student holds on the learning continuum.

The virtual tutorial should not be treated any differently in terms of outcomes, but some modifications will need to be made to accommodate the technology that must accompany it and the increased challenge of being able to assess progress.

Like in a regular face to face tutorial, your students will present with different levels of competence, and managing this is indeed a great skill. To put it simply, you must be prepared! Whilst consolidating understanding for some in the tute (paired and completion examples), you must have something that those who are seeking to extend their thinking can do too (independent examples). To counter some of the difficulty in this, it is a good idea to get students to work on the paired problems BEFORE the tute. This gives the students time to go through the narrated problem first and practise it in order to consolidate their knowledge and memory of how to solve such a problem. It also provides you with more information about who will need more help in the tute, and whether assigning these students into working groups might help.

WITHOUT QUESTION, providing videoed examples with the tutor narrating their thinking processes in solving a problem is the best form of example.

Using groups in a virtual tute

Knowing the strengths and weaknesses of students in the tute can help set up appropriate groups. Students could self-nominate too, depending on their understanding of their needs on a particular topic. These homogeneous groups, which you can set up in Zoom before the tute begins, can serve to take some of the pressure off you as you manage the demands of 10–15 online students. The more independent groups can almost propel themselves, with you only checking in occasionally to clarify or encourage/congratulate. The majority of your time can then be dedicated to the strugglers at the paired example level. Those at the completion problem stage still need attention, but some in this group may be able to offer each other advice that sets them right.

Whilst working in the groups, lecturers like Eshan Sharifi at Adelaide University encourage students to ‘chat’ using their regular social media tool to informally engage with the questions. This type of peer learning is very powerful, as long as it is set up so that ‘near transfer‘ of knowledge is achieved: learning that is close to the original context in which the original knowledge was learnt.

Tools such as Zoom have a learning curve. Ensure that you provide students adequate time to become accustomed to the technology before requiring them to engage with it. Ensure that they have set-up their audio correctly and know how to do so. Ensure they know how to mute and the role of muting as a sign of respect for the group and to mitigate embarrassing moments.

– Tim Klapdor

Mitigating plagiarism

Of great concern is the ease with which a student could copy their group partners’ answers, and thus not learn very much at all. Besides triangulating assessment to give you a better indication of whether this is actually happening, and ensuring your design of problems promotes ‘near’ learning, the tutor can call on specific students to show their workings on problems that have just been given to them.

I’m a huge believer that success motivates success, and when students are confident and succeeding in solving problems, they will do it as often as possible without anyone else’s help. They won’t cheat because the feeling of getting things right and understanding concepts is a far better feeling than simply getting the grade by itself. All it takes is to honour the learning continuum, identify the extent of students’ schemata, and support their development using examples.

The tutorial then can be sectioned in time, with groups working together on tasks and then each coming together to demonstrate knowledge to the tutor at intervals.

Making it virtual

Below I will discuss the required adaptations needed to facilitate the 3 key components of a successful tutorial. Please read here about what worked examples are before you continue.

  1. Worked examples
  2. Discussions and questions
  3. Wandering the room

1 Worked examples

  • Paired examples – verbally narrating the workings out as you take students through a problem, then giving them a completed problem with annotations and an unsolved problem of the exact level of difficulty to use as a guide.
    • Modification: the tutor will need to use a camera of some description to show students their workings. The camera/visualiser would then be a shared screen in Zoom. (See below for how to achieve this.)
    • How it’s checked/submitted: the student then uses their phone as a camera to demonstrate their written completed paired problem. If the tutor sees misconceptions, they can ask for the student to photograph the work and upload it by sharing their screen. (See below for how to achieve this.) The shared images could be added to a discussion page set up specifically for the tute.
  • Completion examples – getting students to complete partially solved/written problems.
    • Modification: as above, and then hand over to students. It is better if the students write out the full problem, or you could provide this for them in a resource section connected to tutorials in the LMS.
    • How it’s checked/submitted: as above.
  • Independent solving.
    • Modification: NA.
    • How it’s checked/submitted: as above.

Technical considerations

There are 2 technical considerations to master to make the virtual tutorial as effective as a face to face experience.

The tutor – there are several ways to connect a camera to your computer that can then be seen via Zoom by your remote students.

  • The easiest option is to use a visualiser, purchased for around $120. This gives you lots of flexibility and you can move the camera around quite a bit. The best bit is that you can host the tute from your office if necessary. *note: the camera’s software driver will need to be installed on your computer
  • The next possibility is to use the document cameras supplied in lecture theatres and rooms around the university. *note: the camera’s software driver will need to be installed on your computer
  • An innovative approach is to use your phone as a camera held above your workings. If you can find a flexible holder that allows you to position the phone appropriately then this is a cheap and easy solution. The issue, though, is the size of the phone’s screen when trying to complete the rest of the tute and see others’ workings.

The student – some students may be a step ahead of you in terms of finding tech solutions, but lots won’t, so providing clear, explicit instructions on how to participate and submit work in virtual settings is imperative. The students have several options to submit their work:

  • Using a laptop – students have watched your worked example and are now doing their own, probably on a piece of paper. This completed task now needs to be uploaded and shared with the tutor:
    • take a photo of it on a phone
    • share it to the laptop
    • share it to Zoom
  • Using a phone – as above, except they share to Zoom straight from their phone. In fact, there is an option to take a photo to share directly, reducing the number of steps, which some students will prefer.

The uploaded responses will provide you with lots of formative assessment. With a student’s consent, particular misconceptions could be used as examples and worked through to adjust thinking. The potential embarrassment of the initial mistake will evaporate when the student finally understands the process – truth be told, a clever teacher can use the example without causing any embarrassment whatsoever. It’s all about the tone and the expectations you set: that learning is hard at times, and that students should be proud of putting themselves on the journey.

2 Discussions and questions

Because students are able to hear via Zoom, you can conduct your questioning strategy in much the same way as face to face questioning. The process remains consistent:

  • Asking questions
  • Waiting before seeking responses so students can think about an answer
  • Checking for understanding by asking several students for a response BEFORE saying if they are right or wrong.
  • Extending thinking by delving deeper into some answers: ask for contrasts, opposites, connections to other learning, how it could apply to other contexts, etc.

However, virtual etiquette will need to be explicitly taught and trained over several sessions before it is mastered. Explain to students your expectations for responding. Explain to them, and demonstrate, that they WILL be called on at some point in the session – that they won’t be able to hide. If you develop their metacognition and explain why you are asking lots of questions – that you are developing their schema via retrieval practice, and that a participation grade will only be awarded when they attempt the questions asked of them – students will have significantly more buy-in to what you are trying to achieve.

The virtual space can make it easier for students to hide from conversations, with a typical response to a question being silence. But this won’t happen if you conscientiously spread the questioning around. Continuous questions combined with students demonstrating their problem solving by uploading their paired, completed and eventual independent examples turn the virtual tutorial into an excellent source of formative assessment.

As Tim Klapdor, an online expert at Adelaide University suggests, ‘Encourage discussion by promoting the students’ voice. Use provocation as a tool for discussion. Ask the students to explain and expand on concepts, existing understanding and their opinions on topics. Get students to add to one another’s contributions by threading responses from different students. Promote a sense of community by establishing open lines of communication through positive individual contributions.’

Sharing work – though very reliant on student consent, getting students to work in small homogeneous groups can be an effective strategy in the virtual tutorial. Students can easily share their screens with invited others, and this can be a good way for peer tutoring to be utilised. However, the selection of groups is key, as is the timing of this strategy being used – it should be reserved for the completion example stage and beyond only.

3 Wandering the room

Obviously this isn’t possible in the virtual tutorial. However, it is important to keep track of the virtual participants by asking lots of questions and using students’ names as often as possible. Direct address has a powerful effect on participation. If you have someone who is not comfortable responding with others listening, post questions to them and monitor their response.

The grading of work

This is then up to the tutor: perhaps after every second tute a summative type task is given to assess students’ understanding of the immediate domain of knowledge being taught. The frequency of the assessment is crucial. The more time between assessments, the more chance of learning gaps developing, and, more poignantly, the fewer chances students get to experience success after deliberate scaffolding. The more consistently you provide smaller assessments that facilitate success, the more engaged students will be.

You may say that from experience the opposite is true – that students will realise that the assessments aren’t worth much and so won’t bother. BUT, before you equate this approach with previous experience, have you set the learning up in such a deliberate way that no learning gaps are possible, where students are continually made aware of their successes in answering questions and are continuously succeeding in assessment and so seeing the value in attendance and learning in general?

The next post will discuss using online quizzes.

*installing the software is simple, and can be done remotely by ITDS if required. You can download it here.

I’m Paul Moss. I’m a learning designer. Follow me on @twitter

ASSESSMENT IN HE pt 4 – worked examples

This is the 4th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.

Even though the nature of higher education makes it harder to formatively assess, it can be done. Below is a list of sources of data that a tutor can use to triangulate their understanding of where a student sits on the learning journey, and importantly, whether what they think they are teaching is actually being learnt:

  1. Using the lecture
  2. Using the tutorial
  3. Using online quizzes
  4. Using mastery pathways
  5. Using online discussion boards
  6. Using groups
  7. Using participation
  8. Using analytics

Using the tutorial

The tutorial is very much the place to check for learning. In the much smaller populated room, the tutor can use techniques that are known to be effective in a regular classroom, including using worked/completion examples, asking lots of questions, and wandering the room when students are solving problems to check progress. As the very wise Tim Klapdor suggests, ‘tutorials are not a time to lecture students or introduce new concepts.’

1 Using Worked and Completion Examples

Worked examples are priceless in learning. The lecture ideally was full of many completed examples related to the topic, each part of the example deliberately verbally narrated to help students begin the process of either connecting the new content with existing schema, or actually building new schema. The tutorial is now the place where the tutor can assess where the students currently sit on the learning continuum, and this will determine the stage of worked example they present.

To begin the session, the tutor may present a problem of similar ilk from the lecture. If students appear to not be secure in their knowledge the tutor will realise that the schema is not established sufficiently for any independent work. The image below from Sweller’s Efficiency in Learning captures the progression necessary to develop the relevant schema and move learners from novice to expert/independent.

[Image: backwards-faded worked example progression, from Sweller’s Efficiency in Learning]

The narration of processes involved in solving problems must now take place. The tutor articulates their own schema in this process, providing a live model for students to capture in their own memory. It is this captured memory they will draw from later to solve similar problems. In this way, learning is truly constructivist. Consequently, through logic, this stage can’t be rushed, or worse, bypassed, as it is by those who conflate the epistemology of constructivism with a method of teaching, reducing learning to a free-for-all of unscaffolded inquiry, inquiry that inevitably fails as students exhaustively scramble to locate relevant connections in their minds that simply aren’t there.

[Image: cognitive load theory]
Big thanks to Tom Needham for enlightening me on worked examples

To further deepen the memory of the worked example, students should complete paired examples at this point. This means that they are provided with a completely worked solution and one to solve that is analogous to the one presented. The key here is analogous. It must be of the same difficulty and according to Engelmann, differing in as few elements as possible. This allows students to build the required schema that can then be transferred to similar problems later.

Once students are able to do this, then they move onto the completion problems, where a solution is only partially completed and they have to finish it. Eventually, after sufficient practice that helps to automatise the processes, the established schema allows a multitude of problems to be able to be solved. It is here they have become expert in the topic, and are able to inquire about it independently and creatively.

Preventing plagiarism – I’m a huge believer that success motivates success, and when students are confident and succeeding in solving problems, they will do it as often as possible without anyone else’s help. They won’t cheat because the feeling of getting things right and understanding concepts is a far better feeling than simply getting the grade by itself. All it takes is to honour the learning continuum, identify the extent of students’ schemata, and support their development using examples. I talk lots more about this in the online assessment posts, because it is online where plagiarism can be difficult to stop.

2 Asking Lots of Questions

Effective questioning is a powerful way to assess for learning. The key to effective questioning is to ask, wait for students to process the question, and then check a number of answers before saying if the answers are right or wrong. Repeat the questions at least 3 times during the processing stage. Allowing time for students to think about the answer gets the retrieval process activated as they search their minds for connections to previously encoded information. By doing so it is quite easy to gauge the knowledge of a tutorial sized group. By carrying out this formative assessment you will be able to direct the next sequence of learning with far greater precision.

3 Wandering the room checking for understanding

These opportunities would present themselves at each of the worked example stages. Initially, the extra guidance afforded to the student could be enough to make a final connection to understanding if it hasn’t sunk in yet, or it could be, at a later stage, a chance to deepen thinking by asking more open-ended questions and applying them to different contexts.

By the end of each tutorial, your assessment for learning and the modifications you make to teaching as a result would have facilitated the development of relevant and necessary schema in your students’ minds.

Grading tutorials

The tutorial could then be used as a means of assessment, with you providing a grade for participation as well as solved problems.

  • The participation grade will be almost tokenistic, but you will know that the easy marks awarded are merely a superficial representation of the greater significance and incentive for their attendance and work ethic: the development of schema.
  • The latter quarter of the tutorial (or perhaps a whole tute after several tutorials of practising) could also be assigned for the testing of students independently solving problems. The final 5 minutes would be peer marking from your displayed answer sheets so you don’t have to do any marking, only the recording of their grades.

As students walk out of the tutorial, be explicit with what they have achieved. ‘Jane, today you not only solved lots of problems, and clearly got past a bit of a barrier, but you also picked up all of your eligible participation points. Well done!’ Guaranteed, they’ll be back next week.

The next post discusses how you can adapt to a virtual tutorial.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

ASSESSMENT in HE pt 2

ASSESSING in HIGHER EDUCATION

This is the second in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it. The previous blogs can be found here

Assessment in higher education is a complex affair. The autonomy given to students and the scale of the organisations that provide higher ed traditionally reduce assessment to its summative form. Much to the dismay of tutors, sometimes that autonomy, particularly in the online submission of student work, maculates the spirit of the offering when academic integrity is compromised. But it is not just this that renders the practice of reverting to summative assessment an impotent means of measuring student understanding: it is the loss of opportunity to check for misconceptions and gaps in knowledge along the learning journey that attenuates the potential of a higher education. Formative assessment is the antidote.

TRIANGULATING ASSESSMENT

Assessing formatively in higher ed is not as easy as in other education sectors, but it can be done. If the regular method of asking lots of questions in a classroom or tutorial isn’t as practical in a large lecture theatre, the tutor needs to think innovatively and look for other ways to formatively assess. The answer is to triangulate the assessment process.

O’Donoghue and Punch define triangulation as a method of cross-checking data from multiple sources to search for regularities in the research data. As a tutor, the more information you garner about student progress and understanding the more you will be able to evaluate if the design of your learning sequence is as effective as you believe it to be, and thus be able to adjust and reteach certain topics if necessary, or provide specific support to fill learning gaps. This iterative approach will have an enormous impact on a student’s ability to succeed in your course, and ultimately, in a time of increasing accountability, support your own well-being in knowing you have used as much of the available evidence as possible to support your students.

Below I have detailed some options you can choose from to gain a triangulated perspective of progress. However, the list is certainly not exhaustive, and I welcome further ideas if you have some. Click on each option as it becomes available for ideas in how to formatively assess your students:

  1. Using the lecture
  2. Using the tutorial
  3. Using online quizzes
  4. Using mastery pathways
  5. Using online discussion boards
  6. Using groups
  7. Using participation
  8. Using analytics

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

ASSESSMENT in HE pt 3

This is the third in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.

Even though the nature of higher education makes it harder to formatively assess, it can be done. Below is a list of sources of data that a tutor can use to triangulate their understanding of where a student sits on the learning journey, and importantly, whether what they think they are teaching is actually being learnt:

  1. Using the lecture
  2. Using the tutorial
  3. Using online quizzes
  4. Using mastery pathways
  5. Using online discussion boards
  6. Using groups
  7. Using participation
  8. Using analytics

Using the lecture

In a large lecture theatre it can be difficult to continuously check for learning. You may have asked the odd question previously, but got an answer that most in the room couldn’t hear, and realised that in the process many other students lost interest. However, asking questions in lectures very much can help in assessing for learning. It’s also all about expectations: if you set the bar high and focus on metacognition by explaining your process from the word go, and make the lecture an active learning space, students, who are very used to this in school, will play the game.

To get the most out of asking questions in a lecture theatre, there are 4 strategies to use.

1 Get good at asking questions

The key to effective questioning is to ask, wait for students to process the question, and then check a number of answers before saying if the answers are right or wrong. Repeat the questions at least 3 times during the processing stage. Allowing time for students to think about the answer gets the retrieval process activated. When an answer is given, repeat it out loud – ask for a show of hands if they think it is correct before you say it is or isn’t. If many get it wrong, it tells you that that section may need to be retaught.

2 Get students to write down their answers

Sure, some won’t and you can’t really check, but lots will and they will benefit from the retrieval process. Again, a show of hands before revealing the actual answer is a powerful indication of understanding.

3 Use technology

Technology for the sake of it is pointless. But it can be an effective way to assess learning in a large class. Echo 360 has a question function that allows you to pose questions during a lecture that students answer on their phone or laptop for you to collectively see on the main screen. If you set this habit up early in the course students will engage with it. There are other apps that achieve this too: @goformative (see image below), @padlet, @socrative, @peardeck, @nearpod. The answers on the main screen allow you to quickly assess understanding, correct common issues, or discuss particular answers that are interesting or thought provoking. Some students may not get involved, but you can quickly compare the number of answers you have with the number in the room and prompt the recalcitrant with a reminder that their participation grade is triangulated. Even if you only do this once a session to assess the key concept taught, it will be very useful.

Using tech to view students’ work in ‘real time’

4 In lecture quizzes

Again, utilising tech, set up quizzes during the lecture to test understanding of the KEY CONCEPTS. If most get the answers correct, then you can proceed. If most get them wrong, only the intransigent would continue as planned – the wiser tutor would reteach the section.

As discussed in this post, Graham Nuthall talks of the need to expose students to content at least 3 times for them to process it effectively. Even if you don’t engage the retrieval process, referring back to key content numerous times in lectures and over subsequent lectures is a powerful way to provide that access. Also, asking rhetorical questions is effective as it will still stimulate the retrieval process in many of your students’ minds.

The next post will discuss using tutorials effectively.

I’m Paul Moss. I’m a learning designer. Follow me at @edmerger

THE CONTINUOUS RETRIEVAL APPROACH

Assessing for learning needn’t be signposted by explicit formative tasks. Formative assessment can be implicitly woven into your teaching, and this can be achieved by using continuous retrieval practice. *

Asking questions in class/lectures/tutorials is a form of retrieval practice. The questions actively force the brain to try to recall the knowledge, and since understanding is memory in disguise, this strategy is an excellent way of assessing whether what you’ve taught your students has actually been learnt. Questions needn’t be verbal: they are any form of interaction that demands a student to fill in a gap.

Nuthall’s research found that for students to be able to understand a concept, they needed to be exposed to the complete set of information about the concept on at least 3 different occasions. This has enormous implications for how we teach, because in order for us to be able to assess for learning, we have to provide adequate opportunity for students to actually encode the information. Bearing in mind that attention is necessary to constitute a single exposure (as without actually attending to something it is impossible to encode it), and that sometimes student attention can waver (oh, is that a fly on my page), we may in fact need to increase the number of times we facilitate their exposure to necessary and important content. Continuous retrieval practice then not only challenges and thus strengthens the neural pathways the information is stored in, helping secure that content into the long term memory, but also provides another exposure of content to students who haven’t reached the magical number 3 yet.

Without using retrieval, the teacher can’t be sure the knowledge is secure in the student’s mind until they use a more formal assessment. But by this time, there could already be large gaps in the knowledge base that will take longer to unpick, and undoubtedly prevent the student being able to understand the next sequence in any sort of depth. This may manifest in the student who appears to be always struggling to keep up.

Assessment then should be seen as a continuous but incremental method of checking for learning, as depicted in this image:

The above graphic represents 5 units (U) of work in a course. The metaphor is that when a unit is taught, retrieval practice is embedded (looped back) into the unit before moving onto the next: the teacher explicitly focuses student attention on key aspects of the unit that are essential and requisite knowledge for the next. A summative assessment (S) measures student knowledge at the end of the unit. When the next unit is taught, the retrieval practice not only focuses on the content of the 2nd unit, but also the summative content of Unit 1, as the spiral for U2 overlaps at the S1 sector. The process continues, but crucially, each subsequent unit must draw from every unit previously taught.

Let’s look at this more closely:

Figure 1 represents the content taught in Unit 1. Figure 2 represents the retrieval process in Unit 1 with the loop feeding back into the shape.

Figure 3 represents the teaching of the second unit. But critically, the summative content (S1) from Unit 1 is very much a part of the sequence. That, as well as the new content of Unit 2, now forms part of the summative content (S2) for that unit.

This design is very deliberate. It stems from an awareness that the exposure to the new information must incrementally build on what the student already knows. Willingham (p6) suggests that when posed a problem, our brains search** for solutions by invoking previous knowledge about a topic or at least something related to it, both declarative and procedural. (This by the way, is why worked examples are so integral to effective teaching practice.) The thinking about the previous knowledge and how it fits with the current knowledge is how we begin to develop schema. Without this precise design of a sequence of learning, the schema can’t form, and this has large implications for teaching new content.

Of course, you won’t be able to test all of the content at each summative (S) point, but this is where spaced retrieval comes into play. Spaced retrieval not only helps you to plan to incorporate all the relevant content over the duration of the course, but perhaps more importantly, it helps students to learn the same amount of content without having to put in extra study. It’s simply a very efficient use of study time.

The process continues until all units have been taught (figure 5), with each new unit drawing from and incorporating previously learnt material as part of the new sequence. At the very end, a final summative test is given, but as you can probably deduce, it will not be that different from what has been happening all the way along. It may simply be a longer test. The likelihood of success in this final test/exam will be significantly higher as students have been given multiple opportunities to access the content over the course, facilitating the movement of knowledge into long term memory, and very much reducing the enormous anxiety that exams can create, and the criticism of their validity.
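As a concrete (if highly simplified) illustration of that cumulative design, the sketch below builds each unit’s retrieval quiz from the current unit plus every previously taught unit. The unit names, question ids, and question counts are entirely hypothetical:

```python
import random

units = {                                    # hypothetical question banks per unit
    "U1": ["u1-q1", "u1-q2", "u1-q3"],
    "U2": ["u2-q1", "u2-q2", "u2-q3"],
    "U3": ["u3-q1", "u3-q2", "u3-q3"],
}

def retrieval_quiz(current_unit: str, n_current: int = 2, n_earlier: int = 2) -> list:
    """Mix questions from the current unit with retrieval of all earlier units."""
    order = list(units)
    earlier_pool = [q for u in order[: order.index(current_unit)] for q in units[u]]
    quiz = random.sample(units[current_unit], min(n_current, len(units[current_unit])))
    quiz += random.sample(earlier_pool, min(n_earlier, len(earlier_pool)))
    return quiz

print(retrieval_quiz("U3"))   # e.g. two U3 questions plus two drawn from U1/U2
```

The point of the sketch is simply that no earlier unit ever drops out of the pool, so the final test looks much like every quiz that came before it.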

Well, this is OK in a classroom or a tutorial, but what happens when I have a lecture with more than 40 students, I hear you ask? Can I still use this approach? That’s the topic of the next post.

In the next post I will outline the ways formative assessment can be applied in HE

*I know that retrieval.org suggest we shouldn’t view retrieval as an assessment strategy, but rather as a learning tool. I think though that teaching is essentially broken into 3 parts: delivering content, assessing its understanding, and influencing emotional intelligence. I see every question we ask as a tool to assess and to inspire thinking.

**I am aware that the tangible processes I discuss are indeed metaphoric.

I’m Paul Moss. Follow me at @edmerger

START STRONG, FINISH STRONG – strategy 1

In the previous post, I introduced the rationale for implementing a range of strategies to help students start strong in their University courses. The implications for failure extend further out than we might imagine, and can have severe effects on students and staff alike. In this post I introduce the first of 3 teaching and learning strategies that lecturers can use to assist students in being able to make a more informed decision about their academic aptitude in a course.

1. THE SEQUENCE OF LEARNING AND ITS PURPOSE

Effective and precise design of a learning sequence is imperative if students are to succeed in a course. Clear and manageable learning outcomes must drive the design of learning activities and assessment. Whilst it is not necessary to cater to the whims of students’ interests, it is necessary that a student sees a purpose of taking the course in relation to their personal aspirations. One way to begin the design of the sequence that covers these demands is to develop a visual curriculum map. Such a map shows a student how the topics within the course are intertwined and how the accumulation of the knowledge taught within the course leads to future opportunities. 

CREATING A VISUAL COURSE MAP

WHY IS THIS EFFECTIVE PRACTICE?

‘The scientist must organise. One makes a science with facts in the way that one makes a house with stones. But an accumulation of facts is no more a science than a pile of stones is a house.’ Henri Poincare

As the expert, trained for many years in your respective field, you would have built and developed a large web of interconnected ideas (schema) for your subject. It is this schema, or parts of it at least, that you will teach. As the expert, you understand how the parts of the schema fit together, how they feed off each other, and the sequence of learning required to arrive at such a full and complex understanding. But the novice learner arriving into your lecture theatre has little of this knowledge. To them, everything will initially appear very abstract and disparate, particularly pre-census. The abstraction makes it difficult to make connections that will lead to the acquisition of schema, an essential determinant of further learning.  

The visual course map serves as a model of your thinking, an explicit representation of the processes required to create relevant schema. As Clark and Mayer (2008) suggest, this immediately offers some context and orientation to your students, and facilitates what Willingham believes to be an essential need in learning: making the abstract more concrete. Such a process is easily recognised considering our own learning – we naturally convert the abstract into meaningful concrete information. Showing students the journey they are about to embark on, and providing an otherwise closed window into your mind and into the course’s structure, helps novices to transform the abstract into the more digestible concrete.

So, SHOW STUDENTS THE SCHEMA!

THE WALK THROUGH

Once made visible, walking students through the schema is the next step. Explaining how each piece of the puzzle fits in with the next is crucial in a sequence of learning. Focusing on the connections and links between disparate ideas is how we move from a pile of stones to the building of a house. Ensuring each connection is secure through formative assessment, particularly through the online supplement, is necessary to avoid the ‘curse of knowledge’ and to know that your students are able to move onto the next component of the course. The curse of knowledge is the idea that when you know something well it is difficult to imagine that others don’t, and so we tend to brush over simple but important links and connections between content. Often, these links are actually vital for a novice to develop their own schema on a topic.  This 1976 cartoon by James Stevenson visualises the issue well:

USING TECHNOLOGY TO ENHANCE LEARNING

Shortly, I will be able to provide you with an example of this map being interactive, where students will be able to click on a relevant section and be taken to the relevant learning associated with it. This can be done using H5P and then utilising mastery pathways (more on this soon in the 2nd strategy post).

MODELLING THINKING AND PROCESSES

There is an enormous amount of research (Clark and Mayer 2008) validating the effectiveness of modelling your own thinking and processes to students to move them from novices with immature schemata to experts with developed, sophisticated schemata. The novice is indeed a different type of learner to the expert, their less developed schemata severely impacting the cognitive load on working memory, and thus having significant implications for the types of questions and activities you engage them in. The table below illustrates the need to understand the learning continuum when planning a sequence of learning.

Actively explaining the ‘glue’ that binds topics, and how you arrived at your own understanding, provides a model that students can use in subsequent learning. In that later learning they are more likely to make their own independent ‘glue’, because they will have more knowledge to draw from and more automaticity in their working memory.

KNOWING WHERE YOU ARE GOING INCREASES ENGAGEMENT

Not only is it useful to highlight how each topic fits together to form the schema of a course, it is also useful to show students how the course fits into a larger picture of learning. A course map should therefore also articulate the possible exit pathways that acquiring the knowledge in the present course opens up. The TEQSA framework for teaching (3.1.1) is clear that this is required:

 The design for each course of study is specified and the specification includes: g.  exit pathways, articulation arrangements, pathways to further learning.

Research has found that students are often ‘… not aware how different elements of courses functioned as building blocks in the development of their research skills and knowledge.’ Increased awareness of the connections between courses within a program gives students greater opportunity to think about those connections, and consequently to develop the necessary schemata. The visual course map is ideally suited to providing the context and purpose of a course in relation to others in the program. Viewing colleagues’ maps to spot overlaps in outcomes provides an opportunity to identify the connections and make them explicit in your teaching sequence. This deepens learning, because the explicit connections strengthen students’ memory of the content through the continuous retrieval that such a strategy affords.

This further encourages students to participate in your course, as they will revisit and need the content in other courses too, and the overlap will reduce pre-census cognitive load.

WHAT’S THE EFFECT ON METACOGNITION?

STUDENTS: The visual course map allows students to self-evaluate their understanding of each section, and to source extra information, resources and practice to fill any gaps. This is particularly important in the first 4 weeks of teaching, even though the schema at that point will only be partially complete. I will provide much more advice on metacognition in the 3rd strategy post.

YOU: The added benefit of this strategy is that it helps you fine-tune your course, ensuring there is a logical, sequential flow to the teaching. It will help you define the key aspects you want students to focus on, and give you direction on how to structure resources and assessment around them.

HOW TO CREATE THIS RESOURCE

  1. Create the map as a rough mind map articulating the key components of your course.
  2. Work backwards and add in assessment (see part 2) at key junctions.
  3. Then, either on your own or with help from a learning designer, create a series of visuals that sequence the growth of the schema (a minimal sketch of one way to draft these follows this list).
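
For those who like to draft digitally before involving a learning designer, the sketch below shows one possible way to rough out the map and to generate a ‘current section’ variant with the rest faded out. It is only an illustration, not a prescribed tool: it assumes Python with the graphviz package installed, and every topic name in it is a hypothetical placeholder to be replaced with your own course components.

```python
# A minimal sketch (not a prescribed tool): draft the rough course map as a graph and
# optionally fade out everything except the section currently being taught.
# Assumes Python with the graphviz package and the Graphviz binaries installed:
#   pip install graphviz
from graphviz import Digraph

# Hypothetical placeholder components - replace these with the key ideas of your own course.
EDGES = [
    ("Course", "Topic 1: Foundations"),
    ("Course", "Topic 2: Core methods"),
    ("Course", "Topic 3: Applications"),
    ("Topic 1: Foundations", "Key terms"),
    ("Topic 2: Core methods", "Worked examples"),
    ("Topic 2: Core methods", "Formative quiz (see part 2)"),
    ("Topic 3: Applications", "Case study"),
]

def build_map(highlight=None):
    """Render the schema map; optionally emphasise one section and fade the rest."""
    g = Digraph("course_map", format="png")
    g.attr(rankdir="LR")  # left-to-right, so the map reads like a journey through the course

    # The highlighted topic and its sub-ideas stay vivid; everything else is faded.
    vivid = {"Course"}
    if highlight:
        vivid.add(highlight)
        vivid.update(child for parent, child in EDGES if parent == highlight)

    nodes = {n for edge in EDGES for n in edge}
    for node in sorted(nodes):
        if highlight and node not in vivid:
            g.node(node, color="gray80", fontcolor="gray60")   # faded section
        else:
            g.node(node, style="filled", fillcolor="lightyellow")  # current focus
    for parent, child in EDGES:
        g.edge(parent, child)
    return g

if __name__ == "__main__":
    build_map().render("course_map_full")                        # step 1: the rough overall map
    build_map("Topic 2: Core methods").render("course_map_t2")   # a faded variant for later lectures
```

Calling build_map() with no argument produces the rough overall map; calling it with a topic name produces a version with that section emphasised and the rest faded, which becomes useful in the ‘How to use’ steps below.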

HOW TO USE THIS RESOURCE

  1. The map would be displayed as the first image in your first lecture, as well as the dominant image in the online supplement.
  2. The first teachings would then highlight the section of the map currently being addressed, with the remaining sections faded out.
  3. Crucially, the map should be continually referred to as the learning continues and builds on itself. This not only provides context but also assists the retrieval of knowledge: because the map is repeatedly referred to, students repeatedly recall what has already been taught and form stronger connections to it. In doing so, they begin to build the schema in their own minds.
  4. The final lectures would display the map and encourage learners to fill in the links. This could form an excellent formative assessment task prior to exams, helping students identify areas of weakness (see the sketch after this list for one way to produce such a ‘links removed’ version of the map).
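
As a companion to the earlier sketch (and again purely illustrative, with hypothetical placeholder names and the same assumed graphviz package), the short snippet below produces the ‘fill in the links’ version described in step 4: the components of the map are drawn without any connections, and students supply the links themselves as pre-exam revision.

```python
# A standalone sketch of the step 4 'fill in the links' version: only the components of the
# map are drawn, with no connections, so students supply the links themselves as revision.
# As above, this assumes the graphviz package; every name is a hypothetical placeholder.
from graphviz import Digraph

COMPONENTS = [
    "Course",
    "Topic 1: Foundations", "Topic 2: Core methods", "Topic 3: Applications",
    "Key terms", "Worked examples", "Formative quiz (see part 2)", "Case study",
]

def blank_links_map():
    """Draw the map's components without edges, ready for students to add the links."""
    g = Digraph("course_map_blank", format="png")
    g.attr(rankdir="LR")
    for node in COMPONENTS:
        g.node(node, shape="box")  # nodes only; the connecting 'glue' is left to the students
    return g

if __name__ == "__main__":
    blank_links_map().render("course_map_fill_in_the_links")
```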

The next post will provide strategies for designing the support presented to students in terms of scaffolding cognition.

I’m Paul Moss. Follow me on Twitter (@edmerger) or on LinkedIn for more discussions about learning design.

Cover image credit: © Images.com/Corbis

START STRONG, FINISH STRONG

A STRATEGY FOR PRE-CENSUS LEARNING DESIGN

RATIONALE

the data suggest that students who start strong finish strong

Enabling students to make educated decisions about whether or not they should continue with a course post-census is of paramount importance. Poor decisions carry large financial, social and employment implications, and they also weigh heavily on the conscientious lecturer. This is the first in a series of posts designed to build faculties’ capability in applying 3 strategies that help reduce the number of students who drop out after census without ever having been able to form a precise understanding of their aptitude for learning in a particular course.

The numbers in black represent hypothetical, but likely familiar, attrition of 1st year undergraduate students in respective faculties, filtered by low participation in online engagement. Since modern higher education is very much characterised by a blended learning experience, where the online component is used to address the lack of personalisation in the face-to-face offering, the data suggest that students who start strong finish strong, and conversely, that those who don’t, won’t.

This leads to the burning question: why aren’t these slow starters getting involved? There could be multiple reasons, but what this resource proposes is that it is not simply that they dislike the look or navigation of a page; their (dis)engagement is also affected by the sequence of instruction and, perhaps most critically, by the levels of support embedded in that sequence specifically dedicated to the development and building of schema.

Modern learning design then needs to be considered on several fronts:

  • the sequence of learning and its purpose
  • the support presented to students in terms of scaffolding cognition
  • the user experience  

I suggest that attention to these factors would increase the participation levels of these disengaged students, and give them a better indication of whether the content they are engaging with suits them, either in terms of academic difficulty or genuine interest in the course.

For some students, disengagement at the first signs of challenge can become the default behaviour; failure is then not seen as their fault (a form of learned helplessness) and they are able to maintain dignity. The trouble is that they, and indeed we, will never know whether they were actually capable of achieving in the course. Supporting cognitive load from the first instance will ‘catch’ some of these students too.

IMPROVING STAFF WELL-BEING

Students failing your course is never a nice feeling. The impetus for attention, then, is not limited to the plight of the student; it extends to faculties eager to retain the students initially drawn to them, and to the lecturer who has to bear the statistics. Of course, sometimes students simply get it wrong and enrol in a course they would never be suited to, and faculties have to work harder to guide and reposition them into something more appropriate. Even in that context, this resource is still of use, helping students arrive more quickly at an understanding, and at a more informed and conclusive decision. Once the strategies are applied, the lecturer can safely conclude that they did all they could to sustain their students’ attention, and need not feel a gnawing sense of guilt or, worse, shame at the darkness on the graph.

IT’S NOT JUST ABOUT MOTIVATION

Perhaps tellingly, however, large numbers of students who persist through semester one, and who actually had mid-range online participation levels, do not re-enrol in any course within the same faculty in semester two.

Notably, these numbers are larger than the disengaged numbers in semester one. Even with online engagement, these students did not experience enough satisfaction to continue their interest in the course; they were willing, but the course couldn’t support them. It could therefore be argued that even though some of these students did start strong in terms of participation, it may simply have been motivation, and therefore resilience, that drove their engagement. Resilience at this stage of the student’s journey is a poor proxy for success, and this reiterates the need for stronger learning design that works on building intrinsic engagement in students. Intrinsic engagement is only likely to form when students experience success in their learning, which is almost always the result of deeper understanding of concepts and topics, facilitated by scaffolded cognitive loading.

IN SUMMARY: Learning sequences that support the significant cognitive load demands on beginning students by:

  • explicitly focusing on the sequence of learning and its purpose,
  • supporting students in terms of scaffolding cognition,
  • and following the technological design principles necessary to engage the modern user

will combine to help students begin their studies on the front foot, and will eliminate poor course design as a possible contributor to discontinued enrolment.

Learning design that supports the building of intrinsic engagement then empowers students to make the right choice in deciding whether to continue or discontinue with a course. Concomitantly, this resource also provides additional structure for students already experiencing success, helping them move more quickly from novice learner to expert learner, and thus to independence, adding weight to the statistics that support the notion of Start Strong, Finish Strong.

In the next post I will discuss the first learning design focus, and explain the power and necessity of a curriculum map.

I’m Paul Moss. Follow me on Twitter (@edmerger) or on LinkedIn for more discussions about learning design.