Why didn’t Assessment for Learning transform our schools?
Posted on 08-01-2017
This is part 1 of a series of blogs on my new book, Making Good Progress?: The future of Assessment for Learning. Click here to read the introduction to the series.
Giving feedback works. There is an enormous amount of evidence that shows this, much of it summarised in Black and Wiliam’s Inside the Black Box. The importance of giving feedback was the rationale behind the government-sponsored initiative of Assessment for Learning, or AfL. Yet, nearly twenty years after the publication of Inside the Black Box, and despite English teachers reporting that they give more feedback to pupils than teachers in nearly every comparable country, most metrics show that English education has not improved much over the same period. Dylan Wiliam himself has said that ‘there are very few schools where all the principles of AfL, as I understand them, are being implemented effectively’.
How has this happened?
My argument is that what matters is not just the act of giving feedback, but the type and quality of the feedback. You can give all the feedback you like, but if it doesn’t help pupils to improve, it doesn’t matter. And over the past twenty years or so, the feedback teachers were encouraged to give was based on a faulty idea of how pupils learn: the idea that pupils can learn generic skills.
National curriculum levels, the assessing pupil progress grids, the interim frameworks and various ‘level ladders’ were all based on the assumption that there were generic skills of analysis, problem-solving, inference, mathematical awareness, scientific thinking, etc., that could be taught and improved on. In these systems, all the feedback pupils got was generic. Teachers were encouraged to use the language of the level descriptors to give feedback, meaning that pupils got abstract and generic comments like: ‘you need to develop explanation of inferred meanings drawing on evidence across the text’ or ‘you need to identify more features of the writer’s use of language’.
Unfortunately, we know that skill is not something that can be taught in the abstract. We all know people who are good readers, but their ability to read and infer is not an abstract skill: it depends on knowledge of vocabulary and background information about the text.
What this means is that whilst statements like ‘you need to identify more features of the writer’s use of language’ might be an accurate description of a pupil’s performance, these statements are not actually going to help them improve. What if the pupil didn’t know any features to begin with? What if the features they knew weren’t present in this text?
Generic feedback is descriptive, not analytic. It’s accurate, but it isn’t helpful. It tells pupils how they are doing, but it does not tell them how to get better. For that, they need something much more specific and curriculum-linked. In fact, in order for teachers to give pupils more helpful feedback, pupils need to do more helpful, specific and diagnostic tasks. If you try to teach generic skills, and only give generic feedback, you will end up always having to use assessments that have been designed for summative purposes. That is, you will end up over-testing and teaching to the test.
Teaching to the test, and the vexed question of whether it is a good or a bad thing, will be the subject of the next post.