Evening all (it's evening somewhere I'm sure),
So there's a thing I've been thinking about a lot lately: the risk and reward calculation that most (if not all) students work through before they make a decision to cheat.
And yes, students most definitely do make this decision. Many are subject to pressures, absolutely. But nonetheless, cheating is a way to resolve some of those pressures, generally with minimal risk. Students consider the rewards, assess the risks, and jump.
There are a few of us, ranging the land on horseback, who are capable of changing that calculus. Now, with all respect to academics who have done the hard yards in their own disciplines identifying what contract cheating looks like, relying on individual academics to add to their workload and deal with contract cheating cases one by one can only ever be a supplement to a broader strategy. It's impossible even to scratch the surface of the problem that way.
By contrast, the approach I take, shared by others such as Shaun Lehmann at UNSW and the Academic Integrity team at Uni Southern Qld (hi Jasmine and Rian and crew), uses data that unis mostly already collect to understand patterns and processes of student behaviour. Essentially an adaptation of learning analytics, turned towards finding where learning has demonstrably not occurred, it has proven to be a successful method of detecting cheating at scale. Given that reliable current estimates put approximately 10% of all HE students in Australia as engaged in commercial contract cheating (Curtis et al., 2022), isn't it time there was a systematic approach to this systematic problem? And make no mistake folks, generative AI has not killed contract cheating services for students who need them to pass their degrees. The age of AI has made this service provision easier, and cheaper. And as you can see below, "essay mills" are not even close to being the entirety of the contract cheating market.
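To give a flavour of what "learning analytics in reverse" can look like, here's a minimal sketch: flag students who submitted an assessment despite near-zero engagement with the unit's learning materials. The file name, column names, event types, and the threshold are all made up for illustration; this is not anyone's production ruleset.

```python
# A minimal sketch: students who submitted work but barely touched the unit.
# All names and the threshold below are illustrative assumptions.
import pandas as pd

logs = pd.read_csv("lms_event_log.csv", parse_dates=["timestamp"])
# Hypothetical columns: student_id, event_type, timestamp, ip_address

submissions = logs[logs["event_type"] == "assessment_submission"]
engagement = logs[logs["event_type"].isin(
    ["content_view", "forum_post", "quiz_attempt"]
)]

# Count engagement events per student across the teaching period
activity = (
    engagement.groupby("student_id")
    .size()
    .rename("engagement_events")
    .reset_index()
)

flagged = (
    submissions[["student_id"]]
    .drop_duplicates()
    .merge(activity, on="student_id", how="left")
    .fillna({"engagement_events": 0})
)

# Arbitrary illustrative cut-off: submitted, yet fewer than 5 engagement
# events all session. A trigger for closer human review, never proof.
print(flagged[flagged["engagement_events"] < 5])
```

On its own, a flag like this is just a starting point; its value is in directing scarce human attention to where learning demonstrably hasn't happened.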
While I only share specifics with colleagues and with the students I raise concerns with, I can give you a few pointers. I currently use code to identify "behaviours" in student LMS (aka VLE) logs which I had previously identified manually. There are quite a few of these, of varying strength: some are a lock, others are merely indicators. But enriching the IP address data and thinking beyond the individual student is where you really start to gain insight and develop strong evidence at scale (a toy version of that idea is sketched below). We often corroborate this evidence with document metadata (also cheap and scalable) and call upon academic evidence where that expertise is required and valuable. And I haven't even touched on other techniques I've used to prove cheating has occurred, such as the one described by Clare Johnson and colleagues in this paper. So when I talk about assessment security, this is some of the stuff I think should be included in that definition. As I've mentioned before, I'm no advocate for invasive "proctoring" and the like, but I absolutely think that having an unacceptable number of unqualified students graduating from our higher ed sector is a shame that reflects on us all. So we must act, but with thought and care.
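Here's the promised toy version of "thinking beyond an individual student": look for IP addresses that log into many distinct student accounts. Again, the column names and the cut-off of five accounts are assumptions for the sketch, and the real enrichment work (geolocation, ASN, VPN and proxy flags) is deliberately left out.

```python
# A toy illustration of cross-student IP analysis. Column names and the
# cut-off are illustrative assumptions, not a production rule.
import pandas as pd

logs = pd.read_csv("lms_event_log.csv")
# Hypothetical columns: student_id, event_type, timestamp, ip_address

logins = logs[logs["event_type"] == "login"]

# Count distinct student accounts seen per IP address
accounts_per_ip = (
    logins.groupby("ip_address")["student_id"]
    .nunique()
    .rename("distinct_accounts")
    .reset_index()
)

# One IP authenticating into many unrelated accounts is the kind of
# pattern worth corroborating with document metadata and academic input.
suspicious = accounts_per_ip[accounts_per_ip["distinct_accounts"] >= 5]
print(suspicious.sort_values("distinct_accounts", ascending=False))
```

A shared household or campus lab will trip a naive version of this, which is exactly why the enrichment step and the corroborating evidence matter before anyone has a conversation with a student.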
In truth, finding evidence is both cheap and easy. Once you start finding reliable evidence at scale, though, you realise the size of the mountain ahead of us. Without going into the nitty gritty (I'll probably lay it out in future), we need to seriously examine the way we deal with misconduct. I've written a lot of reports for decision makers about incidents of misconduct. Increasingly over the last few years, the misconduct in individual cases is so extensive that the very existence of one such case means the other 10, or 100, or 1000 cases will not be addressed in a timely fashion, because the process is so laborious. Matters can now take weeks and months instead of days, simply because of the amount of writing required and the number of people required to do it. I'm starting to think very seriously about a new model for dealing with those cases where the Courageous Conversation only goes so far. Hitting the hard borders of our previous process is partly why David House and I created Courageous Conversations in the first place (with no small contribution from Cath Ellis of UNSW). But that's for another day, I reckon.
I'll wrap up by saying that while students run terrible risks with riders like me and Shaun and Jasmine and Rian roaming the land, unis run terrible risks in their failure to act. I'll keep shouting this, have no fear.
Lastly for today, I am visiting Ireland in October and would like to meet as many people as possible in that corner of the world. If I can help you and your uni, drop me a line and we'll see what we can do.
Till next time,
KM
Header image: Photo by Markus Spiske: https://www.pexels.com/photo/green-and-yellow-printed-textile-330771/
I read your post discussing the risk and reward calculation that students face before deciding to cheat. While our institutions may differ in their learning environments (you work at UNSW, and I am at MQ), I believe academic dishonesty is prevalent across many universities. As an academic at MQ, I have encountered a significant number of cheating cases in one of my units within the Faculty of Arts: I found that 27 out of 45 students exhibited authorship data issues, with most of these students being international students from China and India. Upon searching online, I discovered that the apparent authors were editors based in Kenya.
Interestingly, and rather comically, I recently identified…