Evening all,
First, thank you to those who provided feedback on the previous blog. I get that there are differing points of view on the way things are. There are those who defer to the individual, who correctly point out that people who work for unis have choices. All of you out there getting fucked by unis: choose not to get fucked. You're intelligent, motivated, committed people. I respect who you are and what you've done. Where unis don't match that, give 'em the Vs and walk. You'll find a good life out there, I have no doubt.
Moving on, what I actually set out to talk about today is assessment. As everyone not living under a rock knows, AI is here to stay. It has been startling to me how few people seem to be thinking outside our existing paradigm about this fact. It strikes me, especially as it relates to academic integrity/assessment security, that reactions fall into two main camps: the Trads and the Punishers. Let's take a look at them, shall we?
The Trads are those who seek comfort in ye olde pen and paper, wrapping the blankie around their institutions like Linus from the Peanuts cartoons.
The problem I see with this approach is that, in an era where unis have pushed the weighting of assessment tasks down and broken assessment up into bite-size chunks, those pen and paper exams would have to rise significantly in weighting to give any sort of assurance that the students taking them were competent and could pass a subject. It also raises the question: if these unis are going back to exams because they no longer trust non-invigilated assessments, what is the point of asking students to complete non-invigilated assessments at all? Of course, some might argue assessment for learning, but if a student simply prompts ChatGPT as unis fear, they're not learning anything, are they?
So this is a kind of retro approach, but one that is more brown flares than Ziggy Stardust. All in all, this approach strikes me as such a dull and unimaginative way to move forward that you might as well put students back in robes and be done with it.
Lined up in the red corner, weighing in as a giant ball of workload and rage, are the Punishers. This group strikes me as similarly unimaginative, but in a very different way. They are panicking about AI, feverishly looking at the high Turnitin AI content scores, and totally paralysed about how to respond. So they report academic integrity cases and seek solace in the belief that the resulting outcome will scare students straight, ridding them of any future impulse to use the cursed technology. However, as "ethics officers" and people like myself are discovering as we speak, students have embraced AI of various types with some gusto. People thought dealing with plagiarism cases was bad, and that was behaviour clearly marked as bad and naughty. With AI, unis have barely managed to scramble any kind of coherent message out the door, and so students are clearly using it widely. And so the workload tsunami has arrived on shore. Many students will be put through a meat grinder of a student conduct process based on an ill-defined number, and decisions will be made by people who understand neither the technology nor the tool used to police its use. Marvellous state of affairs.
Basically what I'm saying is that whichever retro approach unis take, both expose really serious structural flaws in the way we work and the way we assess students. I've touched on two, but there are many more. So when Jon Piccini says the following to me, he's right:
Waiting for the moment when they realise everything is already AI. And the response I hear so much (return to sit down exams, etc) won’t work. Nor does abandoning disciplinary standards (essays etc)
He says we need "a Third Pill." Now, having had a few friends who have indulged over the years, I would generally suggest that a third pill usually ends in regret. But this time I completely concur. We need another way that isn't either rushing back to the past or doubling down on a shitty future. We need a future that corrects the imbalances we see all the time: over-assessment of students, massive bureaucracies to deal with the fallout of that over-assessment, overwork, underpay, atomised learning. Which parts of this teaching/assessment system are what we might want them to be in a perfect world? Yes, I hear it now: "Don't let perfect be the enemy of the good." Ask yourself, without rose-tinted glasses, is it even good now?

We need another way, and there are a few sound principles out there already. Phil Dawson, in his book "Defending Assessment Security in a Digital World: Preventing E-Cheating and Supporting Academic Integrity in Higher Education", suggests that the assessments we "secure" need to be reduced in number, but made more secure. At the moment it's pretty clear that a large proportion of our assessments aren't actually secure at all. Yet we treat every assessment, from the 2% quiz to the 50% essay, as if they are all equally important. We threaten punishments if there is an "academic integrity" breach, but otherwise have no idea how the submission came to be: we have zero security. Having a small number of high-security assessments, assessing higher-level learning outcomes and incentivising students to achieve those outcomes, rather than policing countless lower-order assessments, has to be a more sensible and efficient approach, right?
Programmatic assessment may be our Third Pill, but if the collected intelligence embodied in academia turned its mind away from Tradition and Punishment, what else might we think of?
Have a good weekend all,
KM
Image: DALL-E, "linus peanuts cartoons and Ziggy stardust take drugs"
The challenge for the 'fewer, more secure' path is incentives. Universities in that model would have to accept whatever outcomes there are in terms of success/failure. That is the only way to build in the incentives to actually do formative assessments that contribute to learning. Whatever assessment regimes are in place, student effort towards learning is a necessary condition for learning to occur. This effort is largely hidden action from our point of view. Assessment tasks can provide incentives to allocate effort and act as (noisy and imperfect) tools for monitoring effort (which is why I find 'overassessment' claims that ignore this maddening).
The 'fewer, more secure' approach removes the need for that monitoring as it requires decisive demonstrations that…