Shaun Lehmann

Cost effective? I do not think it means what you think it means...

Updated: Nov 15

It's been a minute, but it's YetAnotherShaun again (per my Twitter/Bluesky handle - I still refuse to call Twitter anything other than Twitter). Anyone who knows me knows that about three-quarters of my brain operates in meme format. I couldn't pass by an opportunity to draw upon the meme-tastic goldmine that is 'The Princess Bride' in raising my latest bugbear:



Teaching and assessment conversations in the Higher Education sector are often interesting and exciting. At least to me. Over the years, I have participated in quite a few of these. In the past, this was primarily as someone who spent most of their time teaching, designing, and marking assessments, but in recent years my contributions have centred on academic integrity, with a focus on assessment security (or the lack thereof). Or, I should more properly say 'assessment validity', as I see security as a subset of the broader concept of assessment validity. On this, if you want a great read on the interplay between security and validity in assessment, I strongly recommend this fantastic paper by Dawson et al. The thrust of my take-away is this: if a lack of security means that you can't be sure who actually produced the artefact that you are treating as evidence of learning, then the validity of the assessment is questionable.


In an environment where GenAI-anxiety has heightened concerns about assessment validity (in a better-late-than-never fashion, given that large-scale collusion and contract cheating were already common), we have seen a proliferation of working groups, conferences, webinars, seminars, and so on, all set on exploring 'solutions' to assessment security (validity) problems. In these forums, we see experts (like Kane, Prof. Cath Ellis, and the excellent people at Griffith) suggesting that the future of assessment should involve a lot more face-to-face conversation about learning with students. This could take the form of presentations with a decent Q&A afterwards (though not without the Q&A - I've investigated plenty of contract-cheated presentations), interactive oral forms like those advocated for by Griffith, viva voce, or hybrids of these things. And, without fail, in response there are harrumphs about the cost of these forms of assessment. This is the source of today's gripe.


At various points, Kane, Cath, and I have offered arguments that these forms of assessment cost no more to run than many of the most commonly used forms of assessment, and indeed can cost less. Cath has spoken about her experience of re-allocating marking time from a written task to an interactive form of oral assessment (many of you who were at the TEQSA conference this week will have seen her speaking about this). Contrary to what many suppose, she found that it took no more time (perhaps even less time) to complete the marking of this task than the older written one. This is significant and worthy of a post of its own, but re-allocation of existing paid-for marking time to more valid forms of assessment isn't my primary focus today. I'm here to talk to you about a different kind of cost that is often quite invisible to assessment decision-makers in Higher Education: the cost of assuring validity, and of carrying through any necessary responses to breakdowns of that validity (and again, to call out Cath's excellence, this is also something she touched on at TEQSA). As I'll show you, these invisible costs can cause many assessment types that are seen as 'cost effective' to become not only more expensive than most anticipate, but arguably some of the most cost-intensive assessment choices possible.


While movements toward fully online and largely self-marking forms of assessment were already well underway prior to Covid-19 (I worked on such a project), I think we can all agree that Covid-19 turbocharged the process. It was the perfect storm: the need to shift to a fully online mode very quickly, and to do so through a hyper-cost-focused lens as universities collectively shat themselves about enrolments and enrolment-related revenue (to the career detriment of many wonderful people in the sector). During lockdown periods where students had no choice but to study from home, the use of online quizzes, weekly forum post tasks, and take-home online tests was a pragmatic choice for educators who had relatively few options available to them. However, these also came with the added boon of being largely self-marking, or at least not overly time-intensive to mark in many instances (compared to pre-Covid options). This led to gains in efficiency in assessing students, and as the lockdown periods came to an end and universities tried to figure out what the new normal would look like, many management teams were loath to let go of the efficiency gains that had been made. Some (not all, I said some) teaching staff also enjoyed the reduction in hands-on marking time, and did not want to go back to more labour-intensive assessment forms. Students too wanted to retain many of the online assessment options that had been afforded them - some for legitimate reasons of convenience or accessibility, and others for less honest reasons. And, in an environment where the unhelpful myths that cheating is rare and/or purely due to desperation were dominant, there were few problems to be seen with this new normal. Universities could assess more students for significantly less money per student - what was not to like?


And here we arrive at the current paradigm of these fully online forms of assessment, especially the largely self-marking ones, being viewed positively as 'cost effective' ways of going about the business-as-usual of assessment. When that voice in the room harrumphs about the cost of an interactive oral assessment, in many instances it is this paradigm that is the object of comparison.


Here's where it becomes important to introduce some very significant 'ifs'. If the myth that cheating is rare and/or purely due to desperation were not a myth but a truth, then the harrumphing individual would be making a somewhat sound argument. If you could count on the vast majority of students doing the assessment themselves, in the way that the assessment designer imagined, then it would be reasonable to assume that the primary cost of carrying out the assessment task lies in the marking. Integrity cases, and the costs of carrying them through, become an afterthought - if they are thought about at all - because the assumption is that there will be few of them, and that they will simply be handled where they arise (often due to a misunderstanding of what breaches look like - see my earlier post on Platogiarism's Cave).


But, as the research has shown us (see here, here, here, and here, among many other great papers), and as those of us who work in the integrity space can attest, the assumption that cheating is some kind of desperation-driven corner case is simply wrong. I have personally seen cases where dozens of students in a single subject were outsourcing assessment tasks to cheating providers, and where dozens of students were colluding over chat servers or shared online documents on online assessments. I'm being vague here about numbers on purpose, but when I say dozens you should take it that I mean many multiple dozens. This is the reality for many, possibly most, of these fully-online forms of assessment. I have not seen an institution yet that will say that it does not care about this, and that nothing should happen in response, so the harrumpher has no out there. Thus we arrive at the invisible and seldom spoken of, but very real, financial cost of assuring that learning has taken place in these assessments, and responding where it has not.


So what does this cost look like? If your institution is serious about detecting and responding to this kind of thing, it ought to be proactively looking for it. If the assessment task involves little-to-no marking, the odds of a marker picking up that something funny has happened are not especially good (it is picked up sometimes, but it is missed much more often than not). Proactive monitoring involves employing at least one integrity data scientist or skilled integrity investigator of some kind. You won't find one of those in Australia for less than about 120k a year (plus super); if you want a really good one, you'll be looking at more like 150k a year. You will then need to investigate once cases have been identified. Our analysis tools help us do this work much more efficiently than just about anyone in the sector, but you are still looking at hours of reviewing investigation reports, reviewing responses from students, and preparing documents for decision-makers. Oh, and if you are doing this sort of work properly, you will also be looking at more than just the one subject for each student. One of the papers I linked above found that where a student has done this once, there are better than even odds that they have done it multiple times - so you had better check the other subjects they have taken too, which means most cases end up spanning multiple subjects. If your institution makes use of panels to decide matters, then you also need to factor in the cost of having senior academics or administrators sitting and reading investigation reports, and talking about them to reach decisions. You get the picture.


If you've got your mental calculator going, you should hopefully now be realising that it isn't unrealistic for one fully online assessment (especially a low-touch-marking one) in an average-sized university subject to spawn a couple of dozen integrity investigations, with most of those turning into multi-subject affairs and requiring hours and hours of time from highly paid university staff before they are resolved. Something like 30k or more in staff hours (subject academics, investigators, panels, etc.) is probably conservative... from one low-security assessment that has been properly analysed and investigated. Doesn't look so cost effective now, does it? But, alas, this cost is almost never a factor in deciding whether a given set of assessments is cost effective, despite this work being necessary to assure that learning happened, and to respond where it didn't (the work involved in patching validity leaks).
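If you'd like to sanity-check that 30k ballpark rather than take my word for it, here's a rough sketch of the back-of-envelope arithmetic I have in mind. To be clear: every figure in it (hours per case, hourly rates, the share of cases that spread to other subjects) is an illustrative assumption, not data from any particular institution - swap in your own numbers. The point is the order of magnitude, not the decimals.

```python
# Back-of-envelope cost of integrity follow-up for ONE low-security assessment.
# All figures below are illustrative assumptions - replace with your own.

CASES = 24                 # a couple of dozen investigations from one assessment
MULTI_SUBJECT_SHARE = 0.6  # assumed share of cases that spread to other subjects
EXTRA_SUBJECTS = 2         # assumed extra subjects checked per multi-subject case

HOURS_PER_CASE = {         # assumed staff hours per single-subject case
    "investigator": 6,     # analysis, report writing, reviewing student responses
    "subject_academic": 3, # locating evidence, liaising, re-marking
    "panel": 2,            # senior staff reading reports and reaching a decision
}
HOURLY_RATE_AUD = {        # assumed loaded hourly rates (salary plus on-costs)
    "investigator": 75,    # roughly in line with a 120-150k salary plus super
    "subject_academic": 90,
    "panel": 120,
}

# Cost of a straightforward single-subject case.
base_cost = sum(HOURS_PER_CASE[r] * HOURLY_RATE_AUD[r] for r in HOURS_PER_CASE)

# Multi-subject cases repeat a chunk of the work for each extra subject;
# assume roughly half the base effort per additional subject.
multi_cases = CASES * MULTI_SUBJECT_SHARE
extra_cost = multi_cases * EXTRA_SUBJECTS * base_cost * 0.5

total = CASES * base_cost + extra_cost
print(f"Cost per single-subject case: ${base_cost:,.0f}")
print(f"Estimated total staff cost:   ${total:,.0f} for one assessment")
```

With these (deliberately modest) assumptions, the single-subject case comes out around $960 and the whole exercise lands in the mid-30k range - which is why I'd call 30k conservative rather than alarmist.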


So next time someone harrumphs about the cost of a viva voce assessment or similar, have them work out whether it would cost 30k plus in staff hours to carry out. I can almost guarantee that it wouldn't, and it would have the benefit of being a form of assessment that is much more secure (and thus valid).


'Cost effective'... I do not think it means what you think it means...


YetAnotherShaun






