Research Ed 2022: Linking the curriculum & assessment

Posted on 04-09-2022

Here are my slides from September 2022’s Research Ed conference, and below is a short summary.

DC Research ed 2022

If you’d like to read more, take a look at my books! If you’d like to take part in any No More Marking assessments, take a look at our website and sign up for one of our training webinars.

One: Why knowledge matters

People who talk about a skills curriculum are right to say that skills are the important thing we take from education. Where they go wrong is assuming we can teach skills directly. Skills are made up of knowledge & sub-skills. If you want to develop your skill of reading, you need to know what words mean. If you want to develop your skill of solving complex maths problems, you need to know basic maths facts.

Therefore it is correct to say that knowledge & skills are a false dichotomy. Ingredients and cakes are also a false dichotomy: you can’t be pro-cake and anti-ingredient! If we accept that knowledge and skills are a false dichotomy, then we should also accept that knowledge and skills are NOT on a pendulum. The pendulum is the wrong metaphor. The right metaphor is a pathway, a ladder or a journey.

If we accept all this, it’s also the case that the steps on the way don’t look like the end goal. Knowledge doesn’t look like skill but it leads to skill. Learning new words might not involve doing any reading, but it can make you a better reader. Learning basic maths facts doesn’t involve complex mathematical thinking, but it can make you better at solving complex maths problems.

For more on this, read my book Seven Myths About Education.

Two: Implications for assessment

Formative assessment is about the steps on the way – the knowledge. Summative assessment is about the end goal – the complex skill. You need to assess both and you will need different types of assessment depending on the purpose.

One popular way of trying to assess formatively and summatively at the same time is to assess pupils against prose descriptors like ‘can compare two fractions to see which is bigger’. The problem with this method is that it’s inaccurate and unhelpful. One teacher could interpret that statement as meaning ‘which is bigger: 3/7 or 5/7?’, while another could interpret it as ‘which is bigger: 5/7 or 5/9?’. Most students will get the first question right; most will get the second one wrong! Similarly, telling a student that ‘you need to work on comparing fractions’ is just too vague to be really useful.
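To see why the two readings of that descriptor are so different, the comparisons can be checked directly. This is just an illustrative sketch using Python’s standard fractions module; the comments spell out the two distinct pieces of knowledge each question actually tests:

```python
from fractions import Fraction

# Two questions that arguably both match the descriptor
# 'can compare two fractions to see which is bigger':
easy = Fraction(3, 7) < Fraction(5, 7)  # same denominator: just compare numerators
hard = Fraction(5, 7) > Fraction(5, 9)  # same numerator: bigger denominator means smaller fraction

print(easy, hard)
```

Both statements are true, but they rest on different underlying knowledge, which is exactly why a single prose descriptor can’t tell you what a pupil can or can’t do.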

A better approach is to use multiple choice questions and whole class feedback for formative assessment, and scaled scores and Comparative Judgement for summative assessment.
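The post doesn’t describe the statistics behind any particular Comparative Judgement tool, but CJ results are commonly scaled with the Bradley-Terry model, which turns pairwise ‘which piece of writing is better?’ decisions into a score per script. Here is a minimal, self-contained sketch of that model (the function name and data format are my own, purely for illustration):

```python
from collections import defaultdict

def bradley_terry(judgements, iters=200):
    """Estimate a strength per script from pairwise judgements.

    judgements: list of (winner, loser) pairs from judges.
    Returns a dict mapping script id -> strength (higher = judged better),
    fitted with the classic iterative MM update for the Bradley-Terry model.
    """
    wins = defaultdict(int)
    meets = defaultdict(int)  # how often each unordered pair was compared
    scripts = set()
    for winner, loser in judgements:
        wins[winner] += 1
        meets[frozenset((winner, loser))] += 1
        scripts.update((winner, loser))

    p = {s: 1.0 for s in scripts}  # initial strengths
    for _ in range(iters):
        new = {}
        for s in scripts:
            denom = 0.0
            for t in scripts:
                if t != s and meets[frozenset((s, t))]:
                    denom += meets[frozenset((s, t))] / (p[s] + p[t])
            new[s] = wins[s] / denom if denom else p[s]
        total = sum(new.values())
        p = {s: v * len(scripts) / total for s, v in new.items()}  # normalise
    return p

# Hypothetical judging session: A beats B 2-1, B beats C 2-1, A beats C 2-1.
scores = bradley_terry([
    ("A", "B"), ("A", "B"), ("B", "A"),
    ("B", "C"), ("B", "C"), ("C", "B"),
    ("A", "C"), ("A", "C"), ("C", "A"),
])
```

With that data the fitted strengths order the scripts A > B > C, matching the judges’ collective preference even though no single judge ranked all three.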

For more on this, read my book Making Good Progress.

Three: Putting it into practice

You can put this into practice in a school by building formative question banks designed for frequent use, and complex summative tasks designed for less frequent use (maybe twice a year).


At No More Marking, we have been doing exactly this! We have Comparative Judgement software that we’ve used to assess over half a million pieces of student writing. We’ve also just launched a new website, Automark, which will automatically mark paper-based multiple-choice questions.
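Automark’s internals aren’t described here, but the core idea of marking a multiple-choice bank and turning it into whole-class feedback is simple enough to sketch. The function name and the pupil data below are hypothetical, purely for illustration:

```python
def item_facility(answer_key, responses):
    """Per-question proportion correct, for whole-class feedback.

    answer_key: dict of question id -> correct option, e.g. {"Q1": "B"}.
    responses:  one dict per pupil mapping question id -> chosen option.
    """
    return {
        q: sum(1 for r in responses if r.get(q) == correct) / len(responses)
        for q, correct in answer_key.items()
    }

# Hypothetical class of two pupils answering a two-question bank:
facility = item_facility(
    {"Q1": "B", "Q2": "C"},
    [{"Q1": "B", "Q2": "A"}, {"Q1": "B", "Q2": "C"}],
)
```

A low facility value flags a question the whole class struggled with, which is the kind of signal that makes whole-class feedback possible without marking every script by hand.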

Technology is incredibly valuable here because it makes analysing large bodies of data so much easier. In turn, this can give you insights that would be impossible to get otherwise: it can tell you whether the strengths and weaknesses you see in your students are shared by others or are less common.

However, the problem with technology in the classroom is that screens can be incredibly distracting. One way of resolving this trade-off is to use paper-based assessments that can be scanned in and analysed digitally. That’s what we do at No More Marking.

For more on this, read my book Teachers vs Tech.