Bin that rubric: A better way to assess conceptual understanding
Traditional examination practices are well-suited to assessing students’ knowledge of facts and application of procedures. Typical questions and marking rubrics focus on the precision and accuracy that are key to procedural knowledge. However, exams and marking tend to be inappropriate for assessing deep understanding of mathematical concepts. This is because conceptual understanding is hard to define comprehensively, and students’ wide-ranging interpretations of mathematical concepts are hard to anticipate in advance. I will present a novel approach to assessing conceptual understanding, based on comparative judgement, which involves no marking or rubrics. Instead, assessors, who can be lecturers or the students themselves, are presented with pairs of students’ work and asked simply to decide who has “the better understanding”. The binary data from many such decisions are then statistically modelled to generate a score for each student.
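The abstract does not specify the statistical model, but pairwise comparative judgement data are commonly fitted with a Bradley–Terry model. As an illustrative sketch only (not the authors' implementation), the following standard-library Python fits such a model with the iterative minorisation–maximisation scheme, turning a list of (winner, loser) decisions into one score per piece of student work:

```python
import math

def bradley_terry(n_items, comparisons, iters=500):
    """Estimate Bradley-Terry scores from pairwise decisions.

    n_items: number of pieces of student work being compared.
    comparisons: list of (winner, loser) index pairs, one per judgement.
    Returns a list of mean-centred log-strength scores, one per item.
    Assumes every item wins and loses at least once and the
    comparison graph is connected, so the estimates are finite.
    """
    wins = [0] * n_items
    # pair_counts[i][j]: how many times items i and j were compared
    pair_counts = [[0] * n_items for _ in range(n_items)]
    for winner, loser in comparisons:
        wins[winner] += 1
        pair_counts[winner][loser] += 1
        pair_counts[loser][winner] += 1

    strengths = [1.0] * n_items
    for _ in range(iters):
        updated = []
        for i in range(n_items):
            # MM update: new strength = wins_i / sum_j n_ij / (p_i + p_j)
            denom = sum(
                pair_counts[i][j] / (strengths[i] + strengths[j])
                for j in range(n_items)
                if j != i and pair_counts[i][j]
            )
            updated.append(wins[i] / denom if denom > 0 else strengths[i])
        # Rescale so strengths stay bounded (the model is scale-invariant)
        total = sum(updated)
        strengths = [s * n_items / total for s in updated]

    mean_log = sum(math.log(s) for s in strengths) / n_items
    return [math.log(s) - mean_log for s in strengths]
```

For example, if script 0 usually beats scripts 1 and 2, and script 1 usually beats script 2, the fitted scores recover that ordering; in practice the online engine handles this modelling step for the assessor.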
A programme of research at Loughborough University has shown that the comparative judgement approach is valid and reliable for assessing understanding of mathematical concepts across a variety of contexts. Moreover, by involving students in judging one another’s work, it has the potential to support conceptual learning. I will describe how comparative judgement is being used in formative and summative peer assessment activities on mathematics modules at Loughborough. This will be an interactive session and participants will have a chance to try their hand at judging student work. The approach is freely available for adoption by all involved in mathematics education through an online and open-access comparative judgement engine (nomoremarking.com).