Broad, sweeping statements
- Kane Murdoch
Evening all,
As readers may be aware, broad, sweeping statements are a favourite rhetorical device of this blog. I'm often right, but I'm also happy to adjust my position, or resile from it, when I'm not. The particular "broad, sweeping statement" in question today was one I added to a post on Bluesky recently:
For those who support the use of AI detectors, by all means feel free to attempt a response. But please don't say "I care about my students" or "I care about academic integrity", because neither of those things can be true if you believe in the efficacy of detectors. Shocking, I know, but there you are.
Attached to this post was a link to a preprint paper that a marvellous group of colleagues from across the educational spectrum and I posted last week. The title of the work is "Heads we win, tails you lose: AI detectors in education." Needless to say, I encourage you to read it.
However, for those who rear back in horror at my quoted statement above, prepared to fly into high dudgeon, I would ask you to read the paper and take a second. Do a bit of box breathing before you launch online and, as one erudite commenter did, threaten to report me "to the admin."
Now that you've done a bit of self-care, I'm going to explain why my post was correct.
When I've posted about the use of AI detectors, quite a number of people have responded with words to the effect of "I USE AI DETECTORS BECAUSE I CARE ABOUT ACADEMIC INTEGRITY!!" In my mind they're always shouting madly, as if volume matters more than the substance of their argument. To me these people are showing their whole arse on the matter. So bereft is their thinking on AI and detectors, and so lacking is their willingness to engage with arguments to the contrary (of which there are many detailed in the paper), that it's difficult to know where to start. But I'm going to choose two of my favourites to illustrate the point.
Much is made of the so-called "false positive rate" of AI detectors, as if falsely accusing only 1% of our students were an acceptable price to pay. However, it's not nearly that simple, not even close. Some simple calculations will show us why.
According to Australia's Department of Education there are, give or take, 1.65 million students in Australia. Let's assume that each of these students will submit four assessments through an AI detector in a given year. Even with that conservative estimate of student assessment load (for many it will be a lot higher), a 1% false positive rate produces 66,000 assignments incorrectly reported as AI-generated. That's 66,000 potential breaches of academic integrity, and a lot of (false) work.

Now, here's the kicker (deep breaths, people): there is no way to fairly sort the false positives from the true positives. Those false flags land in the same pile as the genuine ones, and the detector gives you no way to tell which is which. That's 66,000 cases, and no one really knows how many are false and how many are true. As our esteemed colleagues at ACU found last week, even jumping into this pool leads to a truly spectacular amount of work, of dubious purpose, along with an equally spectacular amount of media coverage.

Of course, a typical response comes back: "this is just the starting point for a conversation." However, being the fairness fanboi I am, I'd say that starting with the AI detector in hand means you have poisoned the well of fairness from the get-go. Essentially, by actioning the results of this tech, even with a "discussion about their learning", you are biasing yourself against students and subjecting them to an inherently unfair process. This is like using lie detectors in court or, for those who follow the way of Xenu and his enchanted DC-8, E-Meters. Let me be clear: AI detectors do not produce actionable evidence, regarding this student on that assignment, that reaches even the balance of probabilities. As someone who deals with evidence, fairness, and the actual workload of academic integrity, and who takes a programmatic view of these things, I can say it is patently a terrible idea even from this single perspective.
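For the quantitatively inclined, here is a minimal Python sketch of that base-rate problem. The 1.65 million students, four assessments each, and 1% false positive rate are the figures above; the 80% detector sensitivity and the three prevalence values are purely illustrative assumptions, because in practice nobody knows either number.

```python
# A rough sketch of the arithmetic above, assuming a 1% false positive
# rate. The detector's sensitivity (how much AI-written work it catches)
# and the true share of AI-written submissions are unknowable in
# practice, so both are illustrative assumptions here.

students = 1_650_000          # per Australia's Department of Education
assessments_per_student = 4   # conservative assessment load
false_positive_rate = 0.01    # honest work flagged as AI

submissions = students * assessments_per_student
sensitivity = 0.80            # assumed: detector catches 80% of AI work

print(f"Submissions run through the detector: {submissions:,}")

# Sweep the (unknown) share of genuinely AI-written submissions:
for prevalence in (0.01, 0.05, 0.20):
    ai_written = submissions * prevalence
    honest = submissions - ai_written
    true_flags = ai_written * sensitivity
    false_flags = honest * false_positive_rate
    precision = true_flags / (true_flags + false_flags)
    print(f"prevalence {prevalence:>4.0%}: "
          f"{false_flags:>9,.0f} honest students flagged, "
          f"P(actually AI | flagged) = {precision:.0%}")
```

If only 1% of submissions are actually AI-written, fewer than half of the flagged assignments are genuine positives: a coin toss. And since the real prevalence is unknown, what any individual flag means is unknowable too, which is exactly why the false and true positives cannot be fairly sorted.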
However, as I said, read the paper. If you don't like this argument, there are plenty more there waiting to (metaphorically) punch you in the face.
But moving on, I stand by the position that if you are prepared to put students through a process which is demonstrably unfair (even if you wish to believe otherwise), you do not "care about your students." You may well care about other things more than your students, you may demonstrate care in certain other ways, but it cannot be said that you care about your students, because you are clearly willing to sacrifice them upon this altar.
Of course, people can't be blamed for thinking in January of 2023, when the world had just turned upside down (thanks, Billy Bragg), that AI detectors could have been part of the answer. But two and a half years have gone by since then, and although I was never a proponent of AI detectors, like most sensible people I've had cause to further develop my views. Some people in education should probably start that process now. Is that broad and sweeping enough? Get busy learning how to do this better, or die trying.
Until next time,
KM