There’s an old adage in teaching that you only have to be one day ahead of the students. Obviously, that’s an unsustainable approach, but every one of us, at some point, when tired and burnt out and overwhelmed, has benefited from that head start. The corollary is that when you’re not one step ahead, you get crucified. This is the exact scenario we find ourselves in now with Gen-AI.
When you’re not one step ahead of the students, or at least on the same level, you can never really know whether your course’s assessments are at the technology’s mercy, whether it can be used to leverage learning, or whether your students are living in a perpetual state of ambiguity about how they should engage with it.
So, how can educators ensure their proficiency is above the student level, or at the very least AT that level? It’s simple: they have to USE IT THEMSELVES and get good at it!
Start with proofing assessment
There is a great deal of research promoting the benefits of authentic assessment design (Ajjawi et al., 2023), but a lack of resourcing, and sometimes of training or epistemological stance, has prevented its uptake at scale. Because Gen-AI is genuinely disrupting assessment and its validity, the window for more authentic assessment has been forced open, or at the very least the window for assessment to be reconsidered. Course authors cannot simply bury their heads in the sand with Gen-AI: they must test their assessments to ensure they are AI-proof.
To do this, each assessment must be run through a more powerful, licensed version of a Gen-AI tool (not the free versions), and in the mindset of a student, who is likely to prompt repeatedly to get a more nuanced response.
This will take some time to master, but doing so achieves two things. First, it establishes with certainty whether the assessment is Gen-AI proof; second, it helps you see where the technology can be leveraged.
Leverage the technology
There are two ideas here:
Comparative analysis
When you input an assessment, the resulting output might not align with the expected answers. This provides the course author with teachable moments. The best time to use them is when the topic related to the assessment is taught in the sequence. The educator could highlight that the upcoming assessment will include questions on the topic, and illustrate how Gen-AI would handle the challenge. Discussing why the answers aren’t right, or close enough, acts as a form of formative assessment: it not only encourages critical analysis but also models the need to continuously evaluate the technology’s outputs. “Let’s see what AI generates if I ask this question…” shows learners that the educator is fully aware of what the technology is likely to produce.
This practice is particularly helpful in courses where content diverges from Gen-AI’s generic predictive outputs. But it is also useful in courses dominated by immutable foundational concepts, because even there AI can be used to stimulate deeper thinking, such as by applying theories to new contexts.
Using it as a tutor
When you know exactly what it can and can’t do, you are better placed to see whether it can explain elements of your course in ways that assist struggling learners. Time is everyone’s enemy, and most educators don’t have enough of it to answer a few questions from students, let alone a barrage. Complex problems and ideas often need many analogies and examples before learners grasp them, and Gen-AI may be able to provide these. You might use it yourself to add examples into the course, or, perhaps more beneficially, prompt learners to engage with Gen-AI (which you have vetted) at particular sections you know are complex and difficult and require lots of practice to master.
Remove ambiguity – create clarity on how it can be used
Students want clear guidance on how to use Gen-AI properly in their courses (Ryan et al., 2024). As this large study of AI perspectives in higher education suggests, learners are in a constant state of uncertainty about how, and whether, they can leverage AI, and what may get them into trouble. Only once you have experimented with Gen-AI and understand what kinds of information and responses it generates for particular questions can you confidently teach students where the boundary lies between legitimate leverage and academic integrity issues.
When a student says, “I feel like we should have like a course … having students to learn how to use AI, wisely, and you know how to really take advantage of it, instead of like making mistakes that get you trouble in academic integrity” (Ebby, science student, in Ryan et al., 2024), exactly the same can be said of educators.
Action
The bottom line is that the technology is here to stay, and it is only going to become more powerful and more disruptive. The further behind the educator falls relative to the student, the less effective the educator can be.
References
AI in Higher Education: Student Perspectives. (2024). Results. Available at: https://aiinhe.org/results/ [Accessed 3 Nov. 2024].
Ajjawi, R., Tai, J., Dollinger, M., Dawson, P., Boud, D., & Bearman, M. (2023). From authentic assessment to authenticity in assessment: broadening perspectives. Assessment & Evaluation in Higher Education, 49(4), 499–510. https://doi.org/10.1080/02602938.2023.2271193