Unsecured assessment and gen-AI: the apples aren’t apples anymore

If we believe there is an acceptable spectrum of AI influence on assessments that are not secure, as Curtis (2025) argues, we must carefully reconsider how we design these assessments and analyse the information they generate in new ways. It is not sufficient to carry on with business as usual, steadfast in the assertion that the playing field is just as treacherous as it was before the advent of gen-AI, when students could still cheat if they really wanted to. I can't see any value in comparing a student who actively seeks contract help with a student who opens ChatGPT and asks for assistance. The student who plagiarizes by pasting in Googled information is anachronistically unsophisticated compared to the student who uses gen-AI to develop and refine ideas related to the topic and augment their thinking. We are not comparing apples with apples.

You don't need to wait for research that will be continuously outdated to accept that students will perceive gen-AI as a learning tool, and that it will therefore have a significant influence on final outputs in assessments. This report by Josh Freeman is compelling enough, and it is worth considering Laura Bain's observation that anyone who regularly engages with these tools understands that using AI is iterative and dynamic. Of course, the learner who uses it solely to write their entire assessment would be consciously aware they are doing the wrong thing, and so is analogous to the 'old school' cheater. But the work now needs to be done on how to spot that person, because as AI becomes better at reasoning, finding that student will be akin to finding a needle in a haystack. Laura's forewarning is rather pertinent here: 'We're writing rules to preserve current practices instead of questioning whether those practices still serve our students in a world where AI exists. Education cannot remain the same in the face of generative AI. That is the bottom line.'

More solutions needed

There is an understandable focus at the moment on asking lots of questions, debating, and working towards a philosophical stance on the matter. Meanwhile, assessments are still being designed and taken as before, as though the apples are still apples. So the time to ask these two questions is more pressing than ever: How do we mark unsecured student assessments when we know that some level of AI has influenced the output? And what are the implications for the weighting of secured and unsecured assessments in determining a final grade?

More on these in the next post.

References

Curtis, G. J. (2025). The two-lane road to hell is paved with good intentions: why an all-or-none approach to generative AI, integrity, and assessment is insupportable. Higher Education Research & Development, 1–8. https://doi.org/10.1080/07294360.2025.2476516

I’m Paul Moss. I’m a learning designer at The University of Adelaide. I’m on Twitter too
