
Jill Barber

Abstract

Adaptive Comparative Judgement (ACJ) is an alternative to conventional marking in which the assessor (judge) simply compares two answers and chooses a winner. (Scripts are typically uploaded to the CompareAssess interface as PDF files and presented side by side.) Repeated comparisons, combined with a sorting algorithm, produce a ranking of scripts in order of merit. Grade boundaries are determined by a separate review of scripts.
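The abstract does not specify the ranking model, but ACJ implementations typically fit a Rasch/Bradley-Terry-type model to the comparison outcomes. The sketch below is an illustration of that idea only, not the CompareAssess method; the function name and parameters (rank_scripts, n_iters, lr) are hypothetical.

```python
from collections import defaultdict
import math

def rank_scripts(comparisons, n_iters=200, lr=0.1):
    """Rank scripts from pairwise judgements given as (winner, loser) pairs.

    Illustrative Bradley-Terry-style fit: the probability that script a
    beats script b is modelled as sigmoid(theta_a - theta_b).
    """
    theta = defaultdict(float)  # merit estimate per script, all start at 0
    for _ in range(n_iters):
        for winner, loser in comparisons:
            # Modelled probability that 'winner' beats 'loser'
            p = 1.0 / (1.0 + math.exp(theta[loser] - theta[winner]))
            # Gradient step on the log-likelihood of the observed outcome
            theta[winner] += lr * (1.0 - p)
            theta[loser] -= lr * (1.0 - p)
    # Highest estimated merit first
    return sorted(theta, key=theta.get, reverse=True)

# Example: four scripts, five judgements
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("B", "D")]
print(rank_scripts(judgements))  # expected order: ['A', 'B', 'C', 'D']
```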


A small pilot of ACJ in the fourth year of the Manchester Pharmacy programme is described. Twelve judges used ACJ to mark 64 scripts that had previously been marked conventionally, and fifty students used ACJ to peer-mark their own mock examination question.


Peer-marking was successful: students learned from the process, and both marks and feedback were delivered within two weeks. Consistency among the students acting as judges was very good, with an accuracy (as defined by Pollitt, 2012) of 0.94.


Staff were similarly consistent, but agreement with the marks obtained by conventional marking was disappointing. While some discrepancies could be attributed to conventional marking failing under the pressure of marking during the teaching term, the worst discrepancies appeared to originate from inadequate judging criteria.


We conclude that ACJ is a very promising method, especially for peer assessment, but that judging criteria require very careful consideration.
