Stop buying magic beans, Jack.
- Kane Murdoch
Evening all,
I've subconsciously resisted letting AI consume my every waking thought over the last few years. I'm not particularly impressed by most of the actual products, and I don't believe the big LLMs have a long-term future as things stand, bleeding billions in losses every year with no obvious pathway to profitability that doesn't involve idiots handing them money for nothing (cough unis cough). So, needless to say, they are desperately attempting to weasel their way into education, an industry uniquely prone to the challenges posed by generative AI, but one with what seems a unique tendency to purchase magic beans: "this time the beans will definitely work!"
This may be a long one, so bear with me as I meander my way to the point. Firstly, I saw this today from friend-of-the-blog, Dr Sarah Eaton, and it certainly angried up my blood:

Grammarly says its "AI grader agent" can provide feedback based on uploaded course details and "publicly available" information about the instructor. The bot then gives tailored writing recommendations and estimates what grade the paper will receive in its current state, helping students make improvements prior to submission. The story goes on with more guff, but you get the gist. This is both shameful and shameless. There is not even a pretense that students will be learning through this process; rather, it makes explicit that only grades matter. What's more, those grades can only exist in a world where the written submission is mostly what counts toward a degree. Take a look at these services they offer:


The poacher has not only turned gamekeeper but wants us to believe they should be both. This is clearly a morally bankrupt company, and I certainly hope they go financially bankrupt in the very near future.
However, it sometimes feels like higher education is populated exclusively by people with the memory of a guppy (goldfish, contrary to popular belief, have surprisingly long memories). Prior to ChatGPT, students not engaging in the work of learning was already a big problem. I would estimate a minimum of 15-20% of all students were engaging in contract cheating before ChatGPT, but you don't have to take my word for it. We should have learned the lesson that essays and other uninvigilated assessments were neither reliable nor valid a decade ago, or more. But institutions persisted, insisting that only a small percentage of students were doing the wrong thing. I'm no mathematician, but if 20% of cars on the road were falsely advertised and sometimes dangerous to the public, we would surely consider that a dangerously high number, right? So why is it business as usual, nothing to see here, when that same percentage of graduates haven't met the learning outcomes of their degree?
So, it's reasonably clear that companies like Grammarly couldn't survive without written assessments, but the real question is: should we? I'm not going to discuss unis that are genuinely adopting so-called "2-lane" strategies. If the safeguards in those models are genuinely in place, it will not be possible to complete a degree with the assistance of Grammarly or ChatGPT, and we can probably focus on other things.
But in the rest of the higher education world, 2-lanes aren't the norm. Continuing on as if ChatGPT changed nothing is the norm. And the reason people are panicking about AI, desperate for solutions, is that their subjects and their degrees are Swiss cheese: mostly holes. And so, in the absence of rethinking and reimagining what learning looks like once the essay is out the window, they go looking for solutions. Stupid and grossly unfair solutions. The blurst kind.
I've previously discussed AI detectors and added my views in writing (tl;dr: they're worthless and terrible products, but here's a killer slide deck by Dr Mark Bassett from CSU which also defenestrates AI detection; take a look). But what I'm increasingly seeing is real evidence being used in procedurally unfair ways. The most common example is entirely predictable: non-existent references being used as evidence of AI usage. Newsflash: they're not the same thing. When you (yes, you!) find false references, I know AI usage instantly leaps to mind. But what you have evidence of is false references. Nothing more. Of course, your university's academic integrity policy may have a clause about falsification, but again, that's not the same thing! Falsification was intended for situations like faked experimental data, not dodgy references.

Nonetheless, convenors are reporting these false references as AI usage, and academic integrity officers are agreeing. This isn't fairness, it's a stitch-up. What's more, students often fail entire subjects on the basis of this flimsy and unfair process. It's clear that, for many academics, this approach is seen as the way to put a stop to unauthorised AI use. Let me tell ya, it won't. You'll drown yourself and your colleagues in work, only to discover that the hard work hasn't even begun, because you wasted your time hoping on magic beans. I could perhaps describe your efforts more charitably as akin to the little Dutch boy plugging the dike with his finger. Noting that I'm also not a civil engineer, I doubt that will be very effective against the flood.
So, any students reading: I encourage you to go to the National Student Ombudsman and complain that the evidence used to prove you used AI was manifestly inadequate, and that you were therefore denied procedural fairness. In fact, students who contact me for advice might find I'm a very willing assistant. Fairness is an absolute non-negotiable for me; I don't give two shits about how worried people are about their essay assessments or faked references. That's a 'you' problem. It's been your problem for more than a decade, and here you are, still setting the same assessment tasks. Start imagining different assessments within a different framework, rather than asserting your academic freedom to be as backwards and shit as you want to be. Start from course-level learning outcomes and work your way down to the learning outcomes for your subject, rather than trying to squeeze your personal hobby horses into the course like squeezing a rottweiler into a bassinet.
I know I sometimes come off as harsh, and that's mainly when students get treated like shit. Stop doing that, release your desperate grasp on the way things used to be, and we'll get along great.
Until next time, KM