Different types of evidence

Posted on 26-02-2012

Stephen Twigg’s announcement of his plans for an Office of Educational Improvement has got a lot of people talking, including me, here. On Lib Dem Voice, Stephen Tall welcomed the proposal, arguing that it would reduce the influence of unevidenced dogma and politicians’ pet whims. But I think that, on the contrary, the body as Twigg imagines it would do very little to reduce the amount of dogma in the education system.

This is because there are two types of evidence that are important in education. The first is econometric analysis – the sort of randomised controlled trial where you attempt to isolate a certain factor and measure its impact. This is the sort of evidence the UK is fairly teeming with. I would include in this category research that tries to measure the impact of structural or systemic changes such as uniform, class sizes, teaching assistants and school structures. I would also include research that tries to measure the impact of teaching strategies – direct instruction, synthetic phonics, discovery learning, etc. These types of analyses can be very valuable, but they do have their flaws. One flaw is the Hawthorne effect – almost any change will have some sort of impact. Another is that in the social sciences, as opposed to the hard sciences, it is much harder to isolate a particular factor. For example, if we run a trial showing that pupils who receive private tuition tend to do better in school, can we be sure that it is the private tuition that is leading them to do better? Or is it that these pupils are part of families who care very much about education and create a home environment conducive to study? Another flaw is that many of these studies measure the success or otherwise of a factor by whether it has improved GCSE grades. Given that GCSE grades have been rising nationally at 1-2% a year for the past two decades, there are obvious problems with using them as a measurement.

I would also include in this category statistical analyses of different countries’ education systems, such as PISA. The frequently noted flaw with such studies is that people tend to look at a high-performing system and cherry-pick the aspects they were already keen on.

None of this is to say that this evidence isn’t valuable. For example, the bible of teaching strategy research is John Hattie’s Visible Learning, a synthesis of hundreds of meta-analyses of educational research, and the enormous sample size obviously reduces some of the problems I noted above. Some of the techniques it recommends I was already using; some I wasn’t. The advice it gives on feedback, and how feedback can improve student performance, is something that has really changed my practice for the better. Likewise, it’s extremely interesting and insightful to read about how things are done in other countries.

However, useful as the above evidence is – and I do think it is useful – I find it much less valuable than the other type of evidence I wish to consider: scientific evidence about how the brain works. Over the last thirty years, scientists have discovered more about how the brain works than ever before, and we now have a fairly reliable working model of how the brain learns. This research has immense implications for classroom practice. Unfortunately, very little of it is known in this country. Indeed, so little of it is known that many schools up and down the country persist with teaching techniques, such as VAK (visual, auditory and kinaesthetic learning styles), that are based on completely faulty scientific premises. More subtly, the entire design and structure of the curriculum rests on implicit assumptions that are scientifically dubious. One of these dubious assumptions is the idea that we can teach transferable skills. This assumption has had a very real impact in the classroom – it has led to programmes such as ‘learning to learn’, entire curriculums such as the Royal Society of Arts’ Opening Minds, and wide-ranging cross-curricular project-based learning activities.

I would argue that this latter category of evidence is of a higher order than the former – or at least, than individual examples of the former. What I mean is that the latter category is itself based on huge amounts of observation and research: it is a scientific theory. If we find a piece of educational research that runs a randomised trial of a ‘learning to learn’ programme, for example, and it concludes that such a technique is extremely useful, then I think we are entitled to question that piece of research. Steven Weinberg puts it more eloquently:

“Medical research deals with problems that are so urgent and difficult that proposals of new cures often must be based on medical statistics without understanding how the cure works, but even if a new cure were suggested by experience with many patients, it would probably be met with scepticism if one could not see how it could possibly be explained reductively, in terms of sciences like biochemistry and cell biology. Suppose that a medical journal carried two articles reporting two different cures for scrofula: one by ingestion of chicken soup and the other by a king’s touch. Even if the statistical evidence presented for these two cures had equal weight, I think the medical community (and everyone else) would have very different reactions to the two articles. Regarding chicken soup, I think that most people would keep an open mind, reserving judgment until the cure could be confirmed by independent tests. Chicken soup is a complicated mixture of good things, and who knows what effect its contents might have on the mycobacteria that cause scrofula? On the other hand, whatever statistical evidence were offered to show that a king’s touch helps to cure scrofula, readers would tend to be very sceptical because they would see no way that such a cure could ever be explained reductively… How could it matter to a mycobacterium whether the person touching its host was properly crowned and anointed or the eldest son of the previous monarch?”

The other advantage of scientific evidence is that it allows us to see why something is successful, not just that it is successful. For example, Hattie’s meta-analyses show us that synthetic phonics is a very powerful method of teaching reading. But if that were the only evidence we had, then we would be entitled to wonder whether that was just because the people using other methods hadn’t been trained very well in them, or because those other methods for some reason tended to be used disproportionately by less innately able pupils, or because of some other confounding factor. The scientific evidence about how the brain works, however, shows us that the success of synthetic phonics is not a fluke. Synthetic phonics is a successful method of teaching reading because it recognises the limitations of working memory and the importance of committing facts to long-term memory. It isn’t just a statistically plausible method of teaching reading – it is scientifically plausible too.

As well as being of a higher order, I would also argue that this latter type of evidence is more important for UK schools because it is the least-known category of evidence. There is a great deal of statistical and econometric evidence that is known and cited by policymakers and even teachers. Very little of the scientific evidence is known by either. Twigg revealed this himself: in the very same speech in which he was promoting evidence in education, he cited transferable skills as a policy with a strong evidence base. In actual fact, the scientific evidence gives us strong grounds for scepticism about the possibility of teaching transferable skills. My fear – and I think it is a well-founded fear – is that any Office of Educational Improvement would make Twigg’s mistake, except on a much grander scale. It would inevitably be staffed by the sort of people who produce many of the current econometric and statistical analyses – the sort of people who have historically had quite a lot of influence within the education establishment. It would be far more difficult, if not impossible, to staff a UK/English Office of this sort with experts in the latter category of scientific evidence because, as I have said, there are very few such experts in this country. Therefore, rather than institute an expensive central agency staffed with people who are already influential, I would suggest that advocates of evidence-based education policy rally round this policy instead: the government should buy a copy of Dan Willingham’s ‘Why Don’t Students Like School?’ for every teacher in the country.