Every Research Ed I’ve been to has been brilliant, and every single one has been better than the one before. Great conversations, great people, fascinating ideas – I loved it all. Here is my summary of the day. Session One: I spoke about replacing national curriculum levels. You can see my slides here: REd 2015…
Twitter – pros and cons
A recent essay in Changing Schools discusses the impact of social media on education policy. It got me thinking – what is Twitter good for? What is it bad for? How can it help us – not just in education and policymaking, but in our lives in general? Here are my pros and cons. Pro…
Principled Assessment Design by Dylan Wiliam
Back in 2013 I wrote a lengthy review of Measuring Up by Daniel Koretz. This book has had a huge influence on how I think about assessment. Last year I read Principled Assessment Design by Dylan Wiliam, which is equally good and very helpful for anyone looking to design a replacement for national curriculum levels. As…
Tacit knowledge
In my most recent blogs about assessment, I’ve looked at some of the practical problems with assessment criteria. I think these practical problems are related to two theoretical issues: the nature of human judgment, which I’ve written about here, and tacit knowledge, which is what this post is about. In Michael Polanyi’s phrase, ‘we know…
Marking essays and poisoning dogs
This psychological experiment asked participants to judge the following actions: (1) stealing a towel from a hotel; (2) keeping a dime you find on the ground; (3) poisoning a barking dog. They had to give each action a mark out of 10 depending on how immoral the action was, on a scale where 1 is not…
Assessment alternatives 2: using pupil work instead of criteria
In my last few blog posts, I’ve looked at the problems with performance descriptors such as national curriculum levels. I’ve suggested two alternatives: defining these performance descriptors in terms of 1) questions and 2) example work. I discussed the use of questions here, and in this post I’ll discuss the use of pupil work. Take…
Assessment alternatives 1: using questions instead of criteria
In many blog posts over the last couple of years, I’ve talked about the problems with prose descriptors such as national curriculum levels and grade descriptors. It’s often said that national curriculum levels and the like give us a shared language: actually, as I argue here, they create the illusion of a shared language. I’ve…
Assessment is difficult, but it is not mysterious
This is a follow-up to my blog from last week about performance descriptors. In that blog, I made three basic points: 1) that we have conflated assessment and prose performance descriptors, with the result that people assume the latter basically is the former; 2) that prose performance descriptors are very unhelpful because they can be…
Problems with performance descriptors
A primary teacher friend recently told me of some games she and her colleagues used to play with national curriculum levels. They would take a Michael Morpurgo novel and mark it using an APP grid, or they would take a pupil’s work and see how many different levels they could justify it receiving. These are…
What do exams and opinion polls have in common?
A lot. Daniel Koretz, Professor of Education at Harvard University, uses polls as an analogy to explain to people how exams actually work. Opinion polls sample the views of a small number of people in order to try and work out the views of a much larger population. Exams are analogous, in that they feature…
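The sampling analogy here can be made concrete with a small simulation. The sketch below is purely illustrative – the population sizes, support levels and mastery figures are invented for the example and are not drawn from Koretz’s work. It treats a poll as a small random sample from a large population of opinions, and an exam as a small random sample of questions from the much larger domain a pupil could in principle be asked about; in both cases the sample statistic is only an estimate of the underlying quantity.

```python
import random

# Illustrative sketch of the poll/exam sampling analogy.
# All numbers here are invented for the example.
random.seed(0)

# A poll: 1,000,000 voters, of whom 52% support some proposal.
# The pollster only asks 1,000 of them.
population = [random.random() < 0.52 for _ in range(1_000_000)]
poll_sample = random.sample(population, 1_000)
poll_estimate = sum(poll_sample) / len(poll_sample)
print(f"Poll estimate of support: {poll_estimate:.1%} (true value 52.0%)")

# An exam, by analogy: the 'domain' is everything the pupil could be asked,
# and the exam samples just 50 questions from it to estimate overall mastery.
true_mastery = 0.70  # the pupil has actually mastered 70% of the domain
domain = [random.random() < true_mastery for _ in range(10_000)]
exam_questions = random.sample(domain, 50)
exam_estimate = sum(exam_questions) / len(exam_questions)
print(f"Exam estimate of mastery: {exam_estimate:.1%} (true value 70.0%)")
```

Running this repeatedly with different seeds gives slightly different estimates each time; that sampling error is exactly the kind of uncertainty both pollsters and exam designers have to account for, and a 50-question paper is a far smaller sample than a 1,000-person poll, so its estimate is correspondingly noisier.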