DO WE NEED TO TEACH SELF-REGULATION?

Achieving independence and self-regulation in learning is the holy grail of education, but how to go about it is equally mystical. Essential to the quest is developing a rich schema through the building and interconnection of knowledge. Belief that explicitly teaching students how to think about their thinking processes (metacognition), and how to evaluate those processes, is an integral part of self-regulation is gaining momentum (EEF). This two-part post seeks to extend the current understanding by discussing whether it is necessary to promote critical and creative thinking inside subject domains. It also draws on Zimmerman and Moylan's (2009) paper, which theorises that motivation is inextricably linked to both of these metacognitive processes: motivation can't be omitted from the discussion, and in fact needs to be explicitly taught to students in equal measure. As Kuhn exhorts, 'People must see the point of thinking if they are to engage in it.'

WE ALL WANT 21ST CENTURY SKILLS 

Whilst many argue that labelling skills such as critical thinking and creativity as '21st century' does an injustice to those who exhibited proficiency in them for thousands of years, few could deny that there is a growing demand for graduates who are strong in these areas in an age of increasingly automated and mechanised jobs. How to equip students with such skills has become the mission of educators, but many well-intentioned educators have erroneously conflated the desired outcome with a direct pedagogy, a confusion succinctly stated by Kirschner: the epistemology of a discipline should not be confused with a pedagogy for teaching or learning it. The practice of a profession is not the same as learning to practise the profession. There are plenty of excellent voices who assent to this notion, none better than Daisy Christodoulou, who specifically points to the fact that thinking critically or creatively relies entirely on a strong bedrock of knowledge and can't be taught in the abstract. This seems rather logical: you can't think about things you have no knowledge of, and most creativity is the recombination of knowledge already in existence. Such constraints make the application of these skills heavily context and domain dependent. But what tends to be lacking from such unequivocal pedagogy is the answer to this question: once the foundations of knowledge are secure, do students need explicit modelling of how to think critically and creatively with that knowledge? I contend that the answer is yes.

If we consider how learning is characterised by the acquisition of schema, and how crucial modelling is in that continuum, I would argue that modelling how to play with knowledge is no less important than modelling the knowledge itself. However, it is often overlooked in modern curricula, for three reasons:

  • Because we sometimes assume that students will naturally think in these ways  
  • Because of the need to fit in so much content in so little time  
  • Because it is hard to assess, relying on subjective and therefore unstable evaluation 

The first reason relies on Geary's theory of primary vs secondary knowledge, the implication being that once sufficient knowledge is obtained, the mixing and matching and challenging and critiquing of what is understood should become axiomatic. In my experience, though, without continuous prompting by the teacher to engage with knowledge in this way, such an outcome tends to rely heavily on a student being highly motivated in a specific domain; the less interested but equally capable student is content to achieve in assessment without necessarily exploring the content further. What is notable about the self-motivated student, however, is that they still undertake a process of learning how to mix and match and challenge what they know, albeit independently: it is through experimenting with their thinking and evaluating it that they may eventually arrive at something unique and interesting. But this ostensibly natural skill is actually being practised and refined – and quite possibly inefficiently, compared with what some guidance in the process could afford. When motivation to pursue a discipline is not as high, students need to be prompted to engage in 'higher order' thinking. Interestingly, it is sometimes only after these higher order prompts that real interest and motivation are sparked, and so their explicit provocation in a learning environment is important.

Sweller's addition to Geary's thesis, that 'Organizing general skills to assist in the acquisition of subject matter knowledge may be more productive than attempting to teach skills that we have evolved to acquire automatically…', supports the earlier statement that teaching critical and creative thinking in the abstract is pointless. But it is the word 'organising' that is crucial here: the conclusion is that it's not enough to assume students will naturally engage with this type of thinking – only its explicit organisation and modelling will enable students to self-regulate it.

Practising the application of critical and creative thinking needs time and space to be strengthened, which is why the second obstacle is so concerning in educational contexts. The rise of non-invigilated exams has certainly made apparent the need for assessment to involve the application of knowledge. But to do so requires a carefully designed curriculum that creates such opportunities in the sequence of learning. I tend to promote a sequence patterned by the rhythm: learn, practise, apply. New knowledge is introduced by the expert; students interact with and practise using the knowledge to confirm understanding; students then apply their knowledge to do something with it. The application doesn't have to be a large project-type task. It may simply be the asking of higher order questions that involve hypothesising, creating analogies, exploring various points of view, wondering whether the content can be applied in other contexts, asking what the connections are to other aspects of the course, or brainstorming with a view to generating new ideas for a real-world context. The latter is especially relevant for the later stages of higher education.

It is such a pattern of learning that models for students how to interact with the understood knowledge they now have in their possession, a modelling process that reflects what Volet (1991) identifies as the necessity of making explicit how an expert thinks. This is relevant not just to when the expert is presented with new problems, but also to how they think with the knowledge they already have. Palincsar & Brown (1989) concur: 'By demonstrating the different activities by which subject matter may be processed, problems solved, and learning processes regulated, the teacher makes knowledge construction and utilization activities overt and explicit that usually stay covert and implicit.' Like all learning, the goal is to take the metacognition to automaticity, so that the propensity for self-regulation in the next sequence of learning isn't compromised by cognitive overload.

WHAT ABOUT TRANSFER?

Whether or not this explicit process of thinking within specific domains can be transferred to new contexts remains to be seen, but Simon, Anderson, & Reder (1999) pique our curiosity when they suggest that transfer happens far more frequently than we might think. They cite reading as a prime example, but more specifically challenge a famous study by Gick and Holyoak, which demonstrated that students were unable to see the abstract similarities between two problems even when they were presented side by side:

'One of the striking characteristics of such failures of transfer is how relatively transient they are. Gick and Holyoak were able to increase transfer greatly just by suggesting to subjects that they try to use the problem about the "general". Exposing subjects to two such analogues also greatly increased transfer. The amount of transfer appeared to depend in large part on where the attention of subjects was directed during the experiment, which suggests that instruction and training on the cues that signal the relevance of an available skill might well deserve more emphasis than they now typically receive – a promising topic for cognitive research with very important educational implications.'

They then go on to suggest that 'Representation and degree of practice are critical for determining the transfer from one task to another, and transfer varies from one domain to another as a function of the number of symbolic components that are shared.' It follows that Dignath and Büttner's claim, in their meta-analysis on components of fostering self-regulated learning, that 'Providing students with opportunities to practice strategy use will foster the transfer of metastrategic knowledge to real learning contexts', is valid only if students can recognise patterns or connections between contexts where they can apply their metacognition.

As stated earlier, you can't think critically and creatively without a strong foundation of knowledge, and some of that thinking will only be relevant in specific domains. But it does seem likely that some of the higher order strategies stated above (hypothesising etc.) could be applied in a range of disciplines, and that a student observing the modelled thinking processes of a teacher in a second context will recognise some (if not many) elements learnt from the first. Once this is reinforced through observation, students will begin the regular learning continuum of taking the skills to automaticity through practice. Once that is achieved, applying the thinking in new contexts becomes more possible – it will be up to further research to ascertain whether, under these conditions, such transfer actually occurs.

WHAT DO WE WANT FROM EDUCATION? 

Another consideration when teaching critical thinking draws from Kuhn, who argues that the development of epistemological understanding may be the most fundamental underpinning of critical thinking. In no uncertain terms, she urges teachers to provide the opportunity for students to reach an evaluative level of epistemological understanding, recognising that possessing an absolutist epistemology constrains and in fact eliminates the need for critical thinking, as does a 'multiplist' stance, which allows students a degree of apathy characterised by statements such as 'I feel it's not worth it to argue because everyone has their opinion.' The explicit modelling of an evaluative epistemology, where students come to accept that people have a right to their views while understanding that some views can nonetheless be more right than others, sets up a learning culture where students see the 'weighing of alternative claims in a process of reasoned debate as the path to informed opinion, and they understand that arguments can be evaluated and compared based on their merit (Kuhn, 1991).' Such a pedagogy may answer an interesting question posed by Martin Robinson: 'Should the result of a good education include all students thinking the same or thinking differently?'

The third obstacle also looms large. Assessing creativity in particular is difficult because of its subjectivity. Rubrics are notoriously imprecise as a reliable reference for determining the success or failure of creativity: what I may think satisfies one element of a rubric may be argued against by a colleague, and maintaining consistency even with myself in marking is difficult. And if we don't assess, will students not particularly interested in the topic lose motivation, making the process a challenging one to manage? I think the answer lies within the answer to Martin Robinson's question: surely we don't want everyone robotically programmed. We want students to engage critically and creatively with concepts and participate in the building of a dynamic and interesting world, so we have to have faith that the knowledge taught to our students, when learnt well, will provide avenues for curiosity that will engage them to participate. Such an epistemology also satisfies stakeholders' desire to employ graduates who can think critically and creatively in a modern workplace.

SO HOW IS MOTIVATION LINKED TO IT ALL?

In the next post, I will expand on Zimmerman's contention that metacognition is inextricably linked to motivation, and on how educators can ensure they incorporate both in learning design.

References 

Anderson, J. R., Reder, L. M., & Simon, H. A. (2000, Summer). Applications and misapplications of cognitive psychology to mathematics education. Texas Educational Review.

Dignath, C., & Büttner, G. (2008). Components of fostering self-regulated learning among students: A meta-analysis on intervention studies at primary and secondary school level. Metacognition and Learning.

Geary, D. (2001). Principles of evolutionary educational psychology. Department of Psychological Sciences, University of Missouri, Columbia, MO.

Palincsar, A. S., & Brown, A. L. (1989). Classroom dialogues to promote self-regulated comprehension. In J. Brophy (Ed.), Advances in research on teaching (Vol. 1, pp. 35–67). Greenwich, CT: JAI Press.

Sweller, J. (2008). Instructional implications of David C. Geary's evolutionary educational psychology. Educational Psychologist, 43(4), 214–216. DOI: 10.1080/00461520802392208

Volet, S. E. (1991). Modelling and coaching of relevant metacognitive strategies for enhancing university students’ learning. Learning and Instruction, 1, 319–336. 

Zimmerman, B. J., & Moylan, A. R. (2009). Self-regulation: Where metacognition and motivation intersect. In Handbook of Metacognition in Education. Routledge.

I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow me on Twitter @edmerger

TRAINING STUDENTS FOR ONLINE EXAMS REDUCES COGNITIVE OVERLOAD

Teaching to the test doesn't work. But teaching students about the test is imperative. Not only that, exam performance IS a thing, and you can help students get better at that performance. It's all about mitigating cognitive load.

GAME TIME – Any sportsperson will tell you that match fitness is everything. Regardless of how much you prepare, you never achieve the same level of fitness and game knowledge as you do by actually playing. Why? Because when the real thing happens, not only do nerves and adrenaline consume vast amounts of energy, preventing the ability you have from coming to the surface, but lots of other unexpected occurrences happen, all increasing cognitive load and leading more quickly to exhaustion. The cognitive load can be so debilitating that the player has to rely on muscle memory to get them through. When a student sits an exam, adrenaline and anxiety will naturally surge through their veins. Helping them revise the content is a must, but helping them become more familiar with the game/exam context is just as critical, and this can be achieved by training students to automaticity with exam technique.

ABOUT THE TEST

1. Exam layouts

Show students, and get them used to, the layout of the online exam. The more they see the module and layout of the exam and understand the expectations of each section, the less pressure they'll feel when they see the real thing.

Of particular importance for students completing exams online is detailing the processes involved if they experience technical issues. Take them through the procedures, so that if something happens during the exam they don't lose all confidence and panic. ALSO: Ensure students have read the academic integrity policy and discuss it repeatedly – the more you talk about academic integrity the more of it you'll get.

Manageable student cognitive load

                          Student A – no training    Student B – training
Before beginning exam     20%                        20%
Exam layout               5%                         0%

2. Question requirements

Ensure students know what each question is demanding of them.

How long is a piece of string?

What does a short answer look like? What gets you full marks? What does a long answer look like? What gets you full marks? How much working out is necessary? How much detail is required?

Don’t expect students to guess the answers to these questions. Students who have to worry about what constitutes a good answer expend lots of valuable cognitive load. Model the expectations by showing previous examples, past exams, etc.

Manageable student cognitive load

                          Student A – no training    Student B – training
Before beginning exam     20%                        20%
Exam layout               5%                         0%
Exam content              30%                        0%

IN THE EXAM

1. Time training

Training students in the timings of questions in exams will significantly reduce cognitive load. It's one thing to know what a question demands of you, but another to actually do it in a stressed environment. If a student isn't used to the pressure of time, the longer the exam goes on, the greater the likelihood of their cognitive load increasing and their performance dropping as they panic with the evaporation of time. So, get them to do a mock of a section of the exam – let them experience what it's like to type in the allocated time (do their fingers get tired?) and what it's like to upload work if necessary. The more practice they get the better, but if you are running out of lesson time to train students, at least give them the chance to practise once – just one section that requires an upload process, for example.


The other aspect of time training is helping students to set personal timers. Obviously, the online exam doesn't have the usual cues that an invigilated exam offers: a large clock, a warning by the invigilator of 5 minutes to go, even the cues of students at the next desk completing and organising their work. But an advantage of online exams is that students can set their own alarms to negotiate each individual section of the exam, and not accidentally spend too much time on a certain section (see the sketch after the table below):

Manageable student cognitive load

                          Student A – no training    Student B – training
Before beginning exam     20%                        20%
Exam layout               5%                         0%
Exam content              30%                        0%
Exam timing training      20%                        0%
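
To make the alarm-setting advice concrete, here is a minimal sketch in Python of how a student might budget per-section alarms in proportion to marks. The section names, mark weightings and exam length are made up for illustration:

```python
# Hypothetical per-section time budget for an online exam.
# Sections, marks and exam length are illustrative only.
exam_minutes = 120
sections = {"Multiple choice": 20, "Short answers": 40, "Essay": 40}  # marks

total_marks = sum(sections.values())
elapsed = 0.0
for name, marks in sections.items():
    share = exam_minutes * marks / total_marks  # minutes in proportion to marks
    elapsed += share
    print(f"{name}: {marks} marks, ~{share:.0f} min -> set alarm at {elapsed:.0f} min")
```

A student who prepares a plan like this before the exam only has to glance at a pre-set alarm, rather than perform the arithmetic under pressure mid-exam.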

2. Editing their work

Rereading responses is difficult for exhausted students at the end of a lengthy exam. It is usually at this point that they feel a sense of relief, and the last thing they want to do is reread what they've done. Of course, it's madness not to: rereading ensures there are no silly mistakes, particularly in multiple choice questions, or content mistakes. Even checking for structural, punctuation and/or spelling issues could benefit the overall grade.

So, I have to build that practice into their normal way of working, so it becomes part of the process and not an add-on. This can really only be achieved by repeatedly getting students to physically do it: at the end of each 'mock' assessment, stop the test and get students to spend 4–5 minutes dedicated to proofreading…and explain the rationale, repeatedly. I always tell my students they WILL lose more marks from errors (which they can fix) than they can gain by writing more in the last 5 minutes. But without it being a normal way of working, exhausted students won't do it automatically.

Manageable student cognitive load

                          Student A – no training    Student B – training
Before beginning exam     20%                        20%
Exam layout               5%                         0%
Exam content              30%                        0%
Exam timing training      20%                        0%
Editing responses         5%                         0%

3. Being professional

Not panicking in certain situations is crucial to reducing cognitive load. Taking students through possible scenarios will help to calm them if the situation arises in the exam. For example: if you're running out of time, what should you focus on to get the most marks? What do you do if you can't answer a question – do you panic and lose focus for the rest? Should you move on and come back to questions? Are you aware that the brain will warm up, so coming back later may be easier than it is now? This last point is absolutely crucial to convey to students. As the exam progresses, the exam content itself may trigger or cue retrieval of content that couldn't previously be recalled, so teaching students this metacognitive notion could make a significant difference to their overall performance.

Manageable student cognitive load

                          Student A – no training    Student B – training
Before beginning exam     20%                        20%
Exam layout               5%                         0%
Exam content              30%                        0%
Exam timing training      20%                        0%
Editing responses         5%                         0%
Being professional        10%                        5%

As you can see from the very much made up numbers, the cognitive load experienced by Student A is significantly greater than that of Student B, and would undoubtedly affect performance in the exam. Student A's knowledge would have to fight a great deal to break through the pressure.
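
To make the comparison concrete, here is a minimal Python tally of the illustrative figures in the table above (the percentages are the made-up ones from this post, not empirical data):

```python
# Tally of the made-up cognitive load figures from the tables above.
# Each value is the share of working-memory capacity consumed by exam
# logistics rather than by the exam content itself.
loads = {
    "Before beginning exam": (20, 20),
    "Exam layout":           (5,  0),
    "Exam content":          (30, 0),
    "Exam timing":           (20, 0),
    "Editing responses":     (5,  0),
    "Being professional":    (10, 5),
}

total_a = sum(a for a, _ in loads.values())  # Student A – no training
total_b = sum(b for _, b in loads.values())  # Student B – training
print(f"Student A (no training): {total_a}% of capacity lost to logistics")
print(f"Student B (training):    {total_b}% of capacity lost to logistics")
```

On these made-up numbers, Student A surrenders 90% of their capacity to logistics before any thinking about content happens, while Student B keeps 75% of it free for the exam itself.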

BEGIN NOW!

The more you do something the better at it you get, provided of course you're doing it the right way. Students don't really get many opportunities to learn to negotiate the exam environment on their own, especially in the current context of moving to online non-invigilated exams, so providing them with such training is critical.

I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow me on Twitter @edmerger, and follow this blog for more thoughts on education in general.  

Is it even possible to set an online open book mathematics exam?

When trying to offer advice on how to modify exams for the coming semester, some subjects have presented unique issues. Mathematics, for example, has the unenviable dilemma of not being able to set calculation-type questions, as students can simply type them into an online calculator and be presented not just with the solution, but the workings too.

The remedy offered to other subjects that require numerical calculations, such as statistics and accounting – randomising questions, both through the formula question type in Canvas and through question banks – is not appropriate for mathematics (see the sketch below for what such randomisation amounts to).
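
As a rough illustration of what that randomisation involves – this is a hypothetical sketch in Python, not Canvas's actual implementation – a formula-style question draws fresh numbers into a fixed template, so each student sees a different variant:

```python
import random

# Hypothetical formula-style question: same structure, different numbers
# for each student. The template and parameter ranges are illustrative only.
def randomised_question():
    principal = random.randint(1000, 9000)       # dollars invested
    rate = round(random.uniform(2.0, 8.0), 1)    # percent per annum
    years = random.randint(2, 10)
    answer = round(principal * (1 + rate / 100) ** years, 2)
    text = (f"${principal} is invested at {rate}% p.a. compound interest. "
            f"What is it worth after {years} years?")
    return text, answer

text, answer = randomised_question()
print(text)
print(f"Expected answer: {answer}")
```

The sketch also shows why randomisation fails for mathematics: every variant is still a single calculation that an online solver will happily perform, workings and all.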

The only hope of confidently reducing the amount of 'Googling' during the exam is to create more complex questions: questions that require deeper understanding, or the application of knowledge, which also requires deeper understanding. Whilst this is of course the ultimate goal of any subject, if such application hasn't been taught, then the likelihood of students producing quality answers in exams is limited. If the amount of content introduced means that only superficial understanding is possible – a breadth rather than depth approach – then question types in the exam can't change simply because it's now open book: students wouldn't have been prepared sufficiently, and the exam would not produce valid inferences.

In defence of mathematics, many of the calculation questions that an ordinary invigilated exam would test are designed to strengthen fundamental processes and skills that are required for further study in the discipline. The building of schema is essential for being able to apply understanding in further contexts. But open book exams now pose a large threat to such curriculum design. It may be that in the future a depth rather than breadth approach is the only feasible option, so that deeper understanding of less content opens the opportunity to assess the application of knowledge, and thus mitigates cheating.

Baby with the bathwater?

However, there is something that mathematics exam designers should be conscious of before eliminating all questions that a student could simply look up. The beginning of an exam should really be designed to ease students into the process, to provide a quick boost as they solve a question they find relatively easy. The anxiety that almost universally accompanies sitting a university examination is immediately partially assuaged, which reduces cognitive overload and allows a student to think more clearly. Exams that begin with very difficult problems can throw off students' confidence significantly, even those who know enough to pass. It may be that you still set those initial questions as fundamental skill questions that could be looked up, knowing that the majority, who won't need to look them up, will benefit from gaining some confidence in the initial stages of the exam, which will facilitate better attempts at the more difficult questions later on.

In the end, it’s not about those who will cheat, it’s about those who won’t.

I’m Paul Moss. I’m a Learning Designer at the University of Adelaide. Follow me on Twitter @edmerger

KNOWLEDGE TRANSFER and designing EXAMS

Few would dispute that a goal of education is for knowledge to transfer from one context to another. However, making it happen is not as easy as it seems, and this has implications for epistemological decisions in designing curricula, exams, and indeed in deciding on an institutional ethos.

From research discussed below, knowledge transfer relies on two conditions:

  • transfer is usually only possible when a student possesses a relatively well-developed schema: the closer to expert the better
  • the transfer needs to happen within or close to the known and acquired domain of knowledge.

WHAT THE RESEARCH SAYS

What characterises an expert is their acquisition of schema. Experts tend to have lots of knowledge about a subject, but knowledge that is organised and elaborately connected. Particularly important, in terms of knowledge transfer, is the expert's ability to see the underlying deep structure of problems, regardless of surface differences. It is this ability to make analogies with what they have previously encountered that not only improves the encoding of new content, but also its retrieval:

  • Experts are better than novices at encoding structure in examples and recalling examples on the basis of structural commonalities (Dunbar, 2001). For example, Novick (1988) found that students completing a second set of mathematics problems all recalled some earlier problems with surface features similar to the present problems, but students with high Mathematics SAT scores recalled more structurally similar problems, and were better at rejecting surface features, than students with low scores.
  • The reason for this is that when experts think about problems, they draw on and retrieve large reserves of schema that have evolved, through practice and deliberate exposure to worked examples, to contain the deeper structural features of question types. Novices tend to do the reverse, only being able to identify surface characteristics and thus resorting to an inefficient means-ends solving strategy (Sweller, 1998). The issue with this is that it heavily taxes the working memory, often overloading cognition. What's worse, such taxing ultimately prevents the problem from becoming part of the schema for future use – so there's a double loss.

The implications of this for education are enormous. The need for schema is irrefutable, from Bartlett to Ausubel and even to Bruner: but for novice students to develop it efficiently, they need to engage in learning that builds knowledge over time and through experience, via examples they can store and eventually make analogies with – and interestingly, as Sweller suggests above, not through problem solving.

So, here’s how transfer can be developed:

  • A student learns from an example, which, under the right conditions (retrieval), is then stored in their long-term memory. At this point, only the surface structure of the problem is recognised.
  • The student then encounters another example with a similar surface structure. Now the student has two models to draw from, although still only surface characteristics are likely to be seen.
  • The student is then provided with another example, but this time the surface structure is different while the deeper structure is analogous. The teacher at this point must direct student attention to the analogous deeper connections, as students usually won't see them for themselves, as demonstrated by Duncker's tumour problem – see the study below.
  • Repeating this process eventually builds the student's repertoire of problems to draw from and make analogies with. The more they have, the greater the chance of them behaving like an expert: identifying the deeper structural components and working forward with the problem, thereby using less cognitive load, and inevitably adding another example to the schema.

How to deliver the analogous examples

Gentner, Loewenstein and Thompson (2003) conducted a study to ascertain the most efficient delivery combination. The study used two negotiation scenarios, one from shipping and one from travelling, as a means of training students to be better negotiators. Four contexts of delivery were investigated:

  • separate examples, where students were presented with both examples on separate pages and asked questions about each text
  • comparison examples, where students saw both examples on the same page and were directed to think about the similarities between the two stories
  • active comparison, where students were presented with the first example on one page, and its solutions were carried over to a second page that presented the second example, with questions asked about the similarities between the two
  • a group that had no training

Clark and Mayer (2008) adapted the findings and presented them graphically:

The results showed that active comparison was a far superior technique for training the students.

Implications for exam design

There are 2 considerations in this regard:

  • When designing open book exams that rely on the application of knowledge (in the current climate primarily to mitigate cheating), it is important to consider the cognitive conditions for transfer to take place. If you have taught your students a range of examples that have facilitated analysis of deeper structural connections, then your question in your exam can test understanding of the deeper structural connection. If you haven’t taught your students in such a way, then your question choice will be limited to more surface level questions. If you ‘jump’ to deeper structural questions, in an attempt to make the questions harder to compensate for the openness and accessibility of the content, then the results of the exam may well be invalid, as you have tested for something that students weren’t capable of doing.
  • On the other hand, knowing that you can safely change the superficial structural elements of a question and test 'real' understanding – because transfer is difficult if the concept isn't truly understood – also mitigates cheating, as students can't simply rely on their notes. If they can't make the connections, an indicator of a novice learner, then they can't benefit from the notes as an expert would – who, ironically, probably wouldn't need them anyway.

Duncker’s tumour problem

A problem that has been studied by several researchers is Duncker’s (1945) radiation problem. In this problem, a doctor has a patient with a malignant tumour. The patient cannot be operated upon, but the doctor can use a particular type of ray to destroy the tumour. However, the ray will also destroy healthy tissue. At a lower intensity the rays would not damage the healthy tissue but would also not destroy the tumour. What can be done to destroy the tumour?

Gick and Holyoak used this story to test the transfer of knowledge. Prior to the tumour problem, one group of students was given the story below, and another group a second analogous story as well. Both additional stories have superficial differences from the tumour case, but similar structural (convergent) features. They found that most students who tried to solve the tumour problem on their own had difficulty; those with the aid of one story still struggled; but those with the aid of two stories could see the convergent abstract similarities. In other words, they were able to see the deeper structural analogies.

A small country was ruled from a strong fortress by a dictator. The fortress was situated in the middle of the country, surrounded by farms and villages. Many roads led to the fortress through the countryside. A rebel general vowed to capture the fortress. The general knew that an attack by his entire army would capture the fortress. He gathered his army at the head of one of the roads, ready to launch a full-scale direct attack. However, the general then learned that the dictator had planted mines on each of the roads. The mines were set so that small bodies of men could pass over them safely, since the dictator needed to move his troops and workers to and from the fortress. However, any large force would detonate the mines. Not only would this blow up the road, but it would also destroy many neighbouring villages. It therefore seemed impossible to capture the fortress. However, the general devised a simple plan. He divided his army into small groups and dispatched each group to the head of a different road. When all was ready he gave the signal and each group marched down a different road. Each group continued down its road to the fortress so that the entire army arrived together at the fortress at the same time. In this way, the general captured the fortress and overthrew the dictator.

References

Clark, R., & Mayer, R. (2008). e-Learning and the Science of Instruction. San Francisco, CA: Pfeiffer.


I’m Paul Moss. I’m a learning designer. Follow me on Twitter @edmerger

Principle of Learning #3 – Change

An eloquent presentation of the workings of working memory and the implications it holds for learning

Principles of Learning

This post presents a brief elaboration on the third of seven principles of learning:

Principle #3 – Change. Learning is a specific type of change, which is governed by principles of (a) repetition, (b) time, (c) step size, (d) sequence, (e) contrast, (f) significance, and (g) feedback.

Figure 5. The seven principles of change: the inner mechanism by which learning is facilitated

These seven principles of change are the inner mechanism by which learning is facilitated; in other words, the constraints and requirements of each of these principles must be satisfied in order for learning to take place. At first, changes in capacity and habit may be somewhat ephemeral and unstable. However, in accord with the seven principles of change which will now be discussed, these changes become long lasting and stable.

Principle #3a – Repetition. Learning is facilitated by repeated experience. Repetition in learning is much more…


WHY MAFS IS A GOOD MODEL FOR EDUCATION

It ostensibly seems a very tenuous link, but there is actually a strong correspondence between the way Married At First Sight (MAFS) is edited and the way educators should approach the delivery of their courses.

To maintain the intensity of the show's central driver – the emotional connections the audience makes with the actors* – the editors continuously replay certain scenes tied to a theme or storyline they believe will generate the maximum reaction from the audience. Either deliberately or intuitively, by frequently recalling key content, the producers facilitate the audience's retrieval of that content, which in turn strengthens their propensity to remember it. Being able to remember what has happened is critical for the audience to connect their feelings to the new drama, and to maintain the emotional intensity required for the show to be successful.

The show runs for nearly an hour on television or on demand, but the amount of 'fresh' material in each episode would amount to about a third of the overall content. The show is unconscionably peppered with adverts of equal length, sometimes inserted after just 3 minutes of viewing, but upon returning from the break, the show unfailingly recaps what happened just before the advertisements. The editors cleverly build drama before each advert break, and by replaying the intense moment upon returning, the audience's memory of their pre-advert reaction is resurrected, strengthened, and can then be exploited in reacting to the next adventure presented. The editors also replay scenes from several episodes ago to jog the audience's memories of those events. This not only strengthens the memories of those episodic events, but crucially allows the producers to precisely position the audience's emotional reaction as they structure and direct the connections between the scenes for them.

This continuous recapping of key content is how education works best. When new content is presented, the skilled tutor realises that in order for that content to become cemented in the learner's memory it needs to be retrieved on several occasions, and over time. The necessity for learning to become part of the long-term memory is so that it can be drawn on when new content is introduced. This stems from the way our brains learn. Students construct new knowledge by making connections between new ideas and existing mental models, and then building on them. The ease with which the learner can recall these newly constructed understandings affects the load on the working memory, with automatic recall allowing the learner to make newer connections with comparative ease. Nuthall suggests that learners need at least three exposures to a concept before they have any hope of moving it into their long-term memories. By replaying key concepts many times, the learner's construction of new content is supported. Again, either deliberately or intuitively, Married At First Sight has mastered this approach.

The imperative of replaying key content to secure future recall has, by logic, implications for how much new content should be introduced at a time. Engelmann believes that the ratio of new content introduced to the practising and recapping of old should be approximately 20:80 – in a 50-minute session, that implies roughly 10 minutes of new material and 40 minutes of practice and recap. I wonder how many courses are designed to facilitate such recapping? Quite simply, without dedicated opportunities for the old stuff to be practised and recapped over and over again, it is far less likely to actually be learned.

Married At First Sight teaches us absolutely nothing in terms of how to be a good human, but it utilises what is understood about memory, and demonstrates that if you want someone to make connections to previous emotions, you have to recap the scenes that led to those emotions many times. The same is true for educators. If you want a learner to make connections to previously taught key concepts, you have to recap those key moments many times.  

*are they actors? If not professional, surely they are directed by the producers to behave in certain ways and to ask specific questions of each other?

I’m Paul Moss. I manage a team of learning designers. Follow me on @twitter

10 WAYS TO ENCOURAGE PARTICIPATION USING ZOOM

Participation is crucial in any learning environment, and a Zoom session is no different. Participation encourages attention, which is a requisite for learning. If students aren't attending to the content or discussions on offer, they have no chance of encoding that content and then being able to use it at a later time: in other words, of learning it. Being skilful in ensuring participation is therefore imperative.

Varying the way students are asked to participate is a powerful way to encourage engagement. Zoom can encourage participation in several different modes, which sometimes is not possible in a regular face to face session. Here’s how a teacher/tutor can engage students in a Zoom session:

  • Immediate quiz/questions
  • Explaining your method
  • Non-verbal feedback
  • Verbal questions
  • Written questions
  • Polls/quizzes
  • Breakout rooms
  • Screen sharing
  • Using the whiteboard
  • Modifying content

1. IMMEDIATE QUIZ/QUESTIONS

Because of the way our memories function, recapping content from previous sessions is essential to help knowledge move into the long-term memory, where it can then be recalled automatically to assist in processing new information. Students who arrive on time to your Zoom session should immediately be put to work, either doing a 3 or 4 question quiz on previous learning, or producing short answers to a question or two, both shared from your screen. This does two things: firstly, it activates prior knowledge that will assist in today's learning, and secondly, it gets the students involved straight away. Latecomers also won't miss the new content. Answers to the quiz are briefly discussed, and then the current session begins with students' minds active.

2. EXPLAINING YOUR METHOD

By articulating the strategies you will employ in the session up front you are likely to alleviate students’ anxieties about some of the processes they’ll experience during the session, and therefore encourage participation. Explaining why you are repeating questions, why you are talking about things from previous sessions, why you are asking for different types of responses and feedback, why you are insisting everyone responds before you move on, why you are using polls and why you are so keen on student participation and its effect on learning will help students feel more comfortable during the session and feel more able to participate.

3. NON-VERBAL FEEDBACK

You will have to turn on NON-VERBAL FEEDBACK in the settings:

Getting students to indicate a yes or no or a thumbs up encourages participation. Whilst you can't guarantee that such assessment for learning truly proves students have understood your question – students could just be guessing, or responding to avoid being asked why they haven't – it still gets students involved. Even a student who answers only to avoid the follow-up question that comes when the tutor sees they haven't responded is still actively listening, which is a condition of learning. Varying the type of questions can also generate some humour and fun in a session – asking if students are breathing, or if they know that Liverpool football club is the best team in the world, for example. Non-verbal feedback is best used in triangulation with other assessment for learning options, such as verbal questions:

4. VERBAL QUESTIONS

Effective questioning is a powerful way to assess for learning and guarantee participation. The key to effective questioning is to ask, wait for students to process the question, and then check a number of answers before saying whether they are right or wrong. Repeat the question at least 3 times during the processing stage. Keeping questions 'alive' is important for encouraging participation, because as soon as you provide an answer the majority of students will stop thinking about it – they have no need to keep thinking. Allowing time for students to think activates the retrieval process as they search their minds for connections to previously encoded information. By randomly choosing students to answer, you not only get a sense of their levels of understanding, which allows you to pivot the next sequence if necessary, but you also keep students on their toes as they realise they may be called on next. This random selection of students will work even in a very large tutorial.
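
If you want to take the guesswork (and unconscious favouritism) out of random selection, a simple script can do the picking for you. This is a minimal sketch, with an illustrative class list, that calls on every student once before any repeats:

```python
import random

# Hypothetical cold-call picker: draw names without repeats until
# everyone has been asked, then reshuffle. Class list is illustrative.
students = ["Asha", "Ben", "Chen", "Dina", "Eli"]

def make_picker(names):
    pool = []
    def pick():
        nonlocal pool
        if not pool:
            pool = names[:]      # refill once everyone has been called on
            random.shuffle(pool)
        return pool.pop()
    return pick

next_student = make_picker(students)
for _ in range(7):
    print(next_student())        # every student is called before any repeats
```

Drawing without replacement like this also guarantees that quieter students, and those with cameras off, get asked as often as everyone else.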

Sometimes it's the little things. Be aware that you might naturally tend to favour interacting with those you can see in the session. Those without their cameras on, as in the image below, may not get asked as many questions, so an awareness of this, and conscious questioning of unseen students, will encourage broad participation in the session.

5. WRITTEN QUESTIONS

Using the chat section to elicit answers to check for learning encourages participation. It is a variation on simply listening and answering verbally. Having students write down an answer shows whether or not they know the content. Dedicating time in a session to this process not only varies the type of participation, but can be a great indicator that students have the required knowledge to continue. Opening up the chat lines for student-to-student interactions also encourages participation, as some will answer questions and feel empowered in the process, and some will just enjoy the interactions. It is important, though, that the chat area is monitored, as it can lead to the wrong kind of participation – like students just chatting in the classroom/lecture theatre, which means they are not paying attention to the content. You can't write/read and listen at the same time. I write about that here.

6. POLLS/QUIZZES

Using the poll function in Zoom is easy. You have to ensure it is turned on in the settings:

Once you’ve designed your questions, preferably before the session, you can then launch the poll.

Students then participate by responding. You then share the results, which at this point are anonymous, with the whole group. This serves as an assessment for learning opportunity, and you can pivot the session based on the answers if necessary. In answering the questions, students’ minds are activated as they search for knowledge in their schemata. There is an art to designing effective polls and multiple choice questions, and I discuss that art form here.  

A Canvas quiz can also be incorporated into the Zoom session. The advantage of this is that it has a variety of question types that further encourage participation. There are many other apps too, such as Quizizz, Kahoot, and Mentimeter, but these should be used with caution if not supported by your institution, as students may not want to sign up to platforms that essentially require them to surrender their data.

7. BREAKOUT ROOMS

Sending students into groups to discuss a concept or problem is a fantastic way to encourage participation. Homogeneous groups tend to work best, because those with vastly different levels of developed schema tend not to engage with each other as well as those with closer skill levels. It can sometimes benefit the more knowledgeable student to help a peer, but this relies on effective teaching skills to work, and in reality that is a big ask of a student. So setting the groups up before a session may be your best bet.

Providing guidance on what to do when students are in the session is crucial, and it is worth popping in to each group to see how it is progressing. As Tim Klapdor, an online expert at Adelaide University, suggests: 'Encourage discussion by promoting the students' voice. Use provocation as a tool for discussion. Ask the students to explain and expand on concepts, existing understanding and their opinions on topics. Get students to add to one another's contributions by threading responses from different students. Promote a sense of community by establishing open lines of communication through positive individual contributions.' Appointing a member of the group as scribe is also worth doing, so that when the group returns to the main session they can share their screen and discuss their work/findings/solutions.

8. SCREEN SHARING

Getting students to share their screen encourages participation. This is especially effective coming out of a breakout room, but can be used at any point in a session. A student may be asked to demonstrate their workings of a problem, an answer to an essay question etc and the tutor can use it as a model to provide feedback. Of course caution would be used here, and only positive/constructive feedback provided.

9. USING THE WHITEBOARD

Sharing the whiteboard and getting students to interact with the content you or they put on there is a great way to encourage participation. You could model your thinking process in this medium and explain or annotate examples to discuss how students could gain a better understanding of the content. You could also have students annotate the board, asking them to underline key words, complete equations etc. Getting multiple students to add their own annotations is probably more beneficial with smaller groups, such as in the breakout rooms. Unfortunately in Zoom you can’t paste an image on the whiteboard, only text.

10. MODIFYING CONTENT

I firmly believe that only a very small percentage of students are genuinely unwilling to participate in this medium. Such students could be expected to use the chat option with 'send to the host' only, for example, to ensure they are still participating. If you have tried all of the above strategies and your students are still not really getting involved, it is likely that they just don't know the answers. As humans, we naturally want to succeed, and non-participation may indicate that you need to strip it back a bit and come back to some foundational knowledge. It doesn't matter what you think students should know; what matters is what they actually do know, and the relevant development of their schema. It is better to facilitate the construction of knowledge and provide questions that students will know answers to, so they can build up their confidence in participating. By doing this, you will slowly but surely build their schemata so they will want to get involved consistently.

Online participation is essential for the session to be effective. If you have other tips and advice on how to encourage participation, please let me know and I'll add to the list.

I’m Paul Moss. I’m a learning designer. Follow me on @twitter

SHOULD YOU CLOSE THE ONLINE CHAT OPTION IN A LESSON?

Yes, and no.                                                

Having the chat option open from the word go in an online tutorial can present problems for both you and the students. Whilst it may seem ideal for students to be able to interact when something comes to mind, the reality is that whatever else you are hoping will happen while they are chatting – like listening to information or explanations – just won't. This can be explained by dual coding theory.

Dual coding theory essentially tells us that we encode information via two channels in the brain: the verbal channel and the visual channel. Reading, listening and writing are all processed in the verbal channel, while images and physical interactions fall in the visual channel. The theory informs educators that combining content in multimodal forms will enhance the encoding of that content, but crucially, it also tells us that if you present multiple pieces of information in a single channel, the working memory will have to decide what to attend to, at the expense of the competing stimuli.

In other words, you can’t do two things at once in a single channel. If you expect students to read at the same time as listen to instructions or explanations, one of those requests will be compromised (a common mistake made in lecture theatres and classrooms worldwide when talking over PPT slides full of text). If you expect students to write at the same time as listening to instructions or explanations, they won’t be able to do it as efficiently as if only focusing on one stimulus. So, students typing away and responding to the online chat means they aren’t listening to you or paying attention to any text you may be presenting. It would be the same in a face to face setting: they would be talking to each other and therefore not attending to you.   

My advice would be, analogous to a regular face to face learning context, to restrict the availability of the chat to specific times in the session. Assessing for learning is of course crucial in a session, and the chat area is a good means of doing this, but you can’t hope to assess for learning if the students weren’t listening in the first place. Opening the chat up at specific times will maximise this avenue of assessing for learning.

Having said that, we do want to encourage students to write down questions that arise from your delivery, otherwise they undoubtedly will be forgotten. So to facilitate this, using Zoom, you would select the ‘HOST ONLY’ option (see the images below for how to do this). Only you will see the questions, and this means that other students won’t get distracted – and certainly not by completely unrelated comments that inevitably will propagate in the space. You will then perhaps dedicate a time after your delivery to address those questions that have come up…and then open up the chat lines for interactions.

  • Select the ellipsis on the RHS of the chat box
  • Select 'Host only'

For a step by step guide, view this video

So, in summary, by reducing the opportunities students have to lose concentration in a learning environment, you will increase the likelihood that they will be attending to what it is you want them to be focusing on. Of course, some classes will have the maturity to engage appropriately with the chat function and such measures of control won’t be necessary.

In the next post I will discuss other ASSESSMENT FOR LEARNING opportunities in the online space.

I’m Paul Moss. I’m a learning designer. Follow me on @twitter

ASSESSMENT IN HE: pt 9 – Is a proctored/invigilated online exam the only answer?

This is the 9th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.

There are numerous tropes that aptly apply to the current context in higher education: necessity is the mother of invention, through adversity comes innovation, it’s the survival of the fittest, and all that. Our current adversity renders traditional invigilated exams impossible, and certainly requires us to be innovative to solve the dilemma, but instead of simply looking for technology to innovatively recreate what we have always done, maybe it’s time to think differently about how we design examination in the first place.

REFLECTION

Exams are summative assessments. They attempt to test a domain of knowledge and to be the most equitable means of delivering to stakeholders an inference about what a student understands of that domain. They are certainly not the perfect assessment measure, as Koretz asserts here (conveyed in a blog by Daisy Christodoulou), but because they are standardised and invigilated, they can and do serve a useful purpose.

Cheating is obviously easier in an online context, and potentially renders the results of an exam invalid. Online proctoring companies, currently vigorously rubbing their hands together to the background sounds of ka-ching ka-ching, certainly mitigate some of these possibilities, with levels of virtual invigilation varying from locking screens to using webcams to monitor movements during assessment. Timed-release exams also help to reduce plagiarism, because students have a limited amount of time to source other resources to complete the test, which inevitably self-penalises them. I discuss this here. But the reality is, despite such measures, there is no way to completely eliminate wilful deceit in online exams.

So, do we cut our losses and resign ourselves to the fact that cheating is inevitable, and that despite employing online proctoring some will still manage to do the wrong thing? I'm not sure that's acceptable, so I think it's worth considering that if we design summative assessment differently, the need for online proctoring may become redundant.

WHAT DO YOU WANT TO TEST IN AN EXAM?

Do you want to see how much a student can recall of the domain, or do you want to test how they can apply this knowledge? If you want to test recall, then proctoring is a necessity, as answers will be mostly identical in all correct student responses. But should that be what an exam tests?

Few would argue against the aspiration of education being to set students up to apply their knowledge to new contexts. By designing a sequence of learning that incrementally delivers key content through examples that help shape mental models of 'how to do things', and by continuously facilitating the retrieval of that knowledge to strengthen students' memory throughout the course (after all, understanding is memory in disguise – Willingham), we support the development of their schema. This development enables students to use what's contained in the schema to transfer knowledge and solve new problems, potentially in creative ways.

So exams needn’t be of the recall variety. They can test the application of knowledge.

Whilst we can’t expect the application of that knowledge to be too far removed from its present context (see discussion below), a well-designed exam, particularly one that requires written expression, would generate idiosyncratic answers that could then be cross-checked with Turnitin to determine integrity.

In this way, timed exams in certain courses* could effectively be open book, eliminating a large component of the invigilator’s role. This may seem counter-intuitive, but the reality is that even if a student can simply look up facts they haven’t committed to memory, they are still unlikely to be able to produce a strong answer to a new problem. Their understanding of the content is limited simply because they haven’t spent enough time connecting it to prior knowledge, which is what eventually generates understanding. Such students will spend most of their working memory’s capacity trying to solve the problem and, in a timed exam, will invariably self-penalize in the process. It’s like being given all the words of a new language and being asked to speak it in the next minute. It’s impossible.

In order to successfully use the internet – or any other reference tool – you have to know enough about the topics you’re researching to make sense of the information you find.

David Didau

4 REQUISITES OF A WELL-DESIGNED OPEN BOOK EXAM

  1. Students have relevant schema
  2. Students have practised applying it to new, near contexts
  3. Exam questions seek near transfer of knowledge
  4. Exam is timed and made available at a specific time interval – see here

I have just discussed the importance of schema, but if we want students to be able to apply that knowledge to new contexts we have to model and train them in doing so. This may seem obvious, but curricula are usually so crammed that educators often don’t have time to teach the application of knowledge. Or, as an ostensible antidote, some educators have fallen for the lure of problem-based or inquiry learning, where students are thrown in at the deep end and expected, without a sufficient schema, to solve complex problems. Such an approach doesn’t result in efficient learning, and often favours those with stronger cultural literacy, thus exacerbating the Matthew Effect. The ideal situation, then, is to support the development of a substantial schema, allow space in the curriculum to help students learn how to apply that knowledge… and then test it in an open book exam.

The third requisite is the design of the exam questions. A strong design ensures that the expected transfer of knowledge is not too ‘far’, and is in fact closer to ‘near’ transfer. We often exult in education’s aspiration of transferring knowledge into new contexts, but the reality may render us less optimistic. The Wason experiments illustrate this well, suggesting that our knowledge is really quite specific, and that what we know about problem solving in one topic is not necessarily transferable to others. If you don’t believe me, try the experiment below, and click on the link above to see the answers.

Lots and lots of very smart people get this task wrong. What the experiment shows us is that success is less about how smart we are at solving problems and more about how much practice we’ve had with problems of that kind. So designing appropriate questions in an exam is crucial if we want the results to provide strong inferences about our students’ learning.

CRITICISMS OF OPEN BOOK EXAMS

A criticism of open book exams is that students are lulled into a false sense of security and fail to study enough for the test, believing the answers will be easily accessible from their notes – the fallacy that you can just look it up on Google, as discussed above. However, because we know that most aspects of the domain need to be memorised to support automatic retrieval when engaging in new learning (cognitive load theory), and we have thus incorporated retrieval practice into our teaching, the need for a student to actually look up information will be quite low.

EXPOSURE TO OPEN BOOK ASSESSMENT IS CRITICAL

Like any learnt skill, functioning in an open book exam requires building the associated knowledge and then practising until perfect. Never assume that knowing how to perform in an open book exam is a given. It is important to train students in how to prepare for such an exam: helping them learn to summarise their notes to reflect key concepts, to organise their notes so they can be easily used in the exam, and to plan answers before committing them to writing.

A PEDAGOGICAL UPSHOT

As mentioned previously, the need for students to memorise key facts is an essential aspect of the learning journey, but summative exams sometimes focus on this type of knowledge too much, or worse, expect transfer of that knowledge without providing the necessary practice in doing so. The upshot of open book exams is that they require students not only to have sufficient knowledge but also sufficient practice in applying it, and so the open book exam becomes a paragon of good teaching.

*Online open book exams may not be so easy in mathematics and other equation-based courses that require identical solutions.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger

ASSESSMENT IN HE: pt 8 – mitigating cheating

This is the 8th in a series of blogs on assessment, which forms part of a larger series of blogs on the importance of starting strong in higher education and how academics can facilitate it.

LOOKING TO MINIMISE PLAGIARISM IN AN ONLINE ASSESSMENT?

When setting an online assessment, the fear of plagiarism is strong, despite the reality that the amount of online cheating doesn’t seem to be any different from the amount of cheating in face-to-face settings. But we still want to avoid it as much as possible. So, how can we ensure that students are submitting their own work?

  1. Be explicit about the damage plagiarism does. There is a lot of information for students about plagiarism and how they can avoid it here. Similarly, there is a lot of information for staff here, including an overview of using Turnitin here.
  2. Design assignments that build in difficulty incrementally. Supporting the building of their knowledge base will facilitate student success in assignments. Once motivation and schemata are established, students’ perceptions of assignments will change. I write about the way to avoid online proctoring here.
  3. USE TECH: set the assessment in Canvas for a specific time and use question banks and formula randomization.

By setting it for a specific time (see below for how to do this), you prevent students from seeing the assessment before it goes ‘live’. The opportunity for exchanging information with others is reduced, as is the ability to source answers from the internet. Of course, students may still chat with each other during the assessment window, but this practice tends to self-penalize: time spent conferring with others is time not spent completing the assessment.

The design of the assessment is therefore critical – if you overestimate the time it should take, you will open up time for conferring. It may be better to set shorter assessments that students can only complete in the given time if they know the content. If you take this path, it is important to tell the students explicitly that the assessment is tight on time – an unsuccessful student tends to give up more easily if there appears to be a randomness to achievement.

HOW TO SET AN ASSESSMENT FOR A SPECIFIED TIME

STEP 1 – add an assignment and choose Turnitin as the submission type (for heavily text-based assignments). Select “External Tool” and then find “Turnitin”.

STEP 2 – choose the relevant time to make the test open to students.
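If you manage many course sections, or simply prefer to script these settings, the Canvas REST API exposes the same timing controls. Below is a minimal sketch for a timed quiz (assignments have analogous unlock_at/lock_at fields); the base URL, token, and IDs are hypothetical placeholders, so check them against your own institution’s Canvas instance before relying on this.

```python
import requests

# Hypothetical values – substitute your institution's Canvas URL,
# your API token, and the real course/quiz IDs
BASE = "https://your-institution.instructure.com/api/v1"
TOKEN = "YOUR_API_TOKEN"
COURSE_ID = 1234
QUIZ_ID = 5678

# Open the quiz at 9am UTC, lock it an hour later, allow 45 minutes per attempt
payload = {
    "quiz[unlock_at]": "2020-06-15T09:00:00Z",  # when students first see it
    "quiz[lock_at]": "2020-06-15T10:00:00Z",    # when it closes
    "quiz[time_limit]": 45,                     # minutes per attempt
    "quiz[shuffle_answers]": "true",            # a little extra mitigation
}

resp = requests.put(
    f"{BASE}/courses/{COURSE_ID}/quizzes/{QUIZ_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    data=payload,
)
resp.raise_for_status()
print(resp.json()["unlock_at"])  # confirm the window was set
```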


USING CANVAS QUESTION BANKS

Question banks help to randomise the questions a student receives. If you have 10 questions in the bank and only assign 6 to the exam, you reduce the chance that students will receive the same questions, as the sketch below illustrates. A student trying to cheat will soon realise that their questions differ from their friend’s. Of course, not all of them will differ, but a student who sees that several don’t match is less likely to bother, as it takes too long to work out which questions match and which don’t.
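A quick back-of-the-envelope check shows why this works. Drawing 6 questions from a bank of 10 gives C(10,6) = 210 possible papers, so the chance that two students receive exactly the same set is under half a percent, and on average their papers share only about 3.6 questions. The sketch below (plain Python using the same hypothetical bank sizes, not Canvas code) confirms this:

```python
import math
import random

BANK_SIZE = 10   # questions in the bank
DRAWN = 6        # questions each student receives

# Number of distinct papers that can be dealt out, and the chance of an exact match
papers = math.comb(BANK_SIZE, DRAWN)
print(papers, 1 / papers)  # 210, ~0.0048

# Simulate the average overlap between two students' papers
trials = 100_000
shared = sum(
    len(set(random.sample(range(BANK_SIZE), DRAWN))
        & set(random.sample(range(BANK_SIZE), DRAWN)))
    for _ in range(trials)
)
print(shared / trials)  # ~3.6 questions in common on average
```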

USING CANVAS FORMULA QUESTIONS

I will shortly post here a video demonstrating the fantastic formula question in Canvas, a question type that essentially allows you to vary the numbers in a question so that multiple versions can be generated. In practice, this means each student receives a different question of the same difficulty level, keeping the assessment valid and equitable. So if John decides to call up Mary during the assessment and ask what she got for question 5, it will be pointless: Mary has a different question, and the answers simply won’t match.
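Until the video is up, the underlying idea is easy to see in code. The sketch below is not Canvas’s implementation, just a hypothetical illustration of a formula question: the numbers are randomised per student (seeded on a student identifier so each student’s variant stays stable), while the formula, and therefore the difficulty, is the same for everyone.

```python
import random

def formula_question(student_id: str) -> dict:
    # Seeding on the student ID keeps each student's variant stable across reloads
    rng = random.Random(student_id)
    voltage = rng.randint(6, 24)           # randomised variable
    resistance = rng.choice([2, 4, 6, 8])  # randomised variable
    return {
        "prompt": (f"A circuit has a {voltage} V supply across a "
                   f"{resistance} ohm resistor. What is the current, in amps?"),
        "answer": round(voltage / resistance, 2),  # Ohm's law: I = V / R
    }

# John and Mary get different numbers, so sharing answers is pointless
print(formula_question("john")["prompt"])
print(formula_question("mary")["prompt"])
```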

FINAL THOUGHTS

Everyone likes to succeed. This is why some students plagiarise. Careful design of assessment that incrementally builds student knowledge and confidence will TEACH students to get better at assessment. This, together with explicit discussions about plagiarism, will help many students steer clear of it.

In the next post I will discuss how modified online examinations shouldn’t necessarily try to completely emulate traditional examinations using technology.

I’m Paul Moss. I’m a learning designer. Follow me @edmerger