Feedback and English mocks

Posted on 16-08-2017

You can also read this post on the No More Marking blog.

In the previous few posts, I’ve looked at the workload generated by traditional English mock marking, and at its low reliability, and I’ve suggested that comparative judgement can produce more reliable results in less time. However, one question I frequently get about comparative judgement is: what about the feedback? Traditional marking may be time-consuming, but it often results in pupils getting personalised comments on their work. Surely this makes it all worthwhile? And beyond a grade, what kind of feedback can comparative judgement give you? This post is a response to those questions.

First, there’s a limit to the amount of formative feedback you can get from any summative assessment. That’s because summative assessments are not designed with formative feedback in mind: they are instead designed to give an accurate grade. So for the most useful kind of formative feedback, I think you need to set non-exam tasks. I write about this more in Making Good Progress.

Still, whilst formative feedback from summative assessments is limited, it does exist. When you read a set of exam scripts, there are obviously insights you’ll want to share with your pupils, and similarly it’s always helpful to read examiners’ reports to get an idea of the misconceptions that are common across pupils. I think we need to do fewer mock exams, because their usefulness is limited, but clearly when we do run them, we want to get whatever use we can from them.

So what is the best way for a teacher to give feedback on mock performance? The dominant method at the minute seems to be written comments at the bottom of an exam script. This is extraordinarily time-consuming, as we’ve documented here, and as other bloggers have noted here, here and here. What I want to suggest in this post is that these kinds of comments are also very unhelpful. Dylan Wiliam sums up why perfectly:

‘I remember talking to a middle school student who was looking at the feedback his teacher had given him on a science assignment. The teacher had written, “You need to be more systematic in planning your scientific inquiries.” I asked the student what that meant to him, and he said, “I don’t know. If I knew how to be more systematic, I would have been more systematic the first time.” This kind of feedback is accurate — it is describing what needs to happen — but it is not helpful because the learner does not know how to use the feedback to improve. It is rather like telling an unsuccessful comedian to be funnier — accurate, but not particularly helpful, advice.’

Wiliam, Dylan. Embedded Formative Assessment. Bloomington, Indiana: Solution Tree Press, 2011, p. 120.

This might seem like a funny and slightly flippant comment, but actually it expresses a profound philosophical point put forward in the work of philosophers such as Michael Polanyi and Thomas Kuhn, which is that words are not always that good at explaining new concepts to novices. Often, part of what a novice needs to learn is what some of these words like ‘systematic’, or, to use an example from Kuhn, ‘energy’, really mean. If pupils don’t know what these words really mean, they can get stuck in a circular loop, similar to the one you might have experienced as a child when you didn’t know the meaning of a word, so you looked it up in a dictionary, only to find you didn’t know any of the words in that definition, so you looked those up, only to find that you didn’t understand the words in those definitions, and so forth…

Much more helpful than written comments are actions: things that a pupil has to do next in order to improve their performance. These do not have to be individual to every pupil, and they do not have to be laboriously written at the bottom of every script. They can be communicated verbally in the next lesson, and they can be acted on in that lesson too.

How does all this fit in with comparative judgement? One objection people have to comparative judgement is that whilst it may give an accurate grade, it doesn’t give pupils a comment at the bottom of their script. We’ve heard of a couple of schools that, after judging a set of scripts, have then required staff to go back and write comments on the scripts too. This is totally unnecessary and unhelpful! Instead, we’d recommend combining comparative judgement with whole-class marking. Whole-class marking is a concept I first came across on blogs by Joe Kirby and Jo Facer at Michaela Community School. Instead of writing comments on a set of books, you can jot down the feedback you want to give on a single piece of paper. You can formalise this a bit more by developing a one-page marking proforma, which gives you a structure to record your insights as you mark or judge a set of scripts, and to help you plan a lesson in response. Here’s an example we’ve put together based on some year 7 narrative writing. The parts in red are those that involve teacher and/or pupil actions.

Caveat: this is written out far more neatly and coherently than is necessary; we’ve only done this to illustrate how it works. These proformas can be much messier, as in Toby French’s example here. What’s important is the thought process they support, and the record they provide over time of actions and improvements. In short, combining comparative judgement with one-page marking proformas will drastically reduce the time it takes to mark a set of scripts, and will give your pupils far more useful feedback than a series of written comments.

Our aim with our Progress to GCSE English project is to use tools like the one above to allow schools to replace traditional mock marking with comparative judgement. We ran our first training days in July, and will be running more in the autumn term. To find out more, sign up to our mailing list here. Our primary project, Sharing Standards, takes a similar approach, and you can read more about it here.