Nate Silver and E.D. Hirsch

Posted on 02-02-2014

Over Christmas I read Nate Silver’s excellent book The Signal and the Noise. Silver runs the FiveThirtyEight politics blog and became famous for his uncannily accurate predictions of US elections. Before predicting elections, he predicted the success of baseball players and teams. Before that, he made money playing online poker. His book is a distillation of what he has learned in these various pursuits about the art and science of prediction. As well as discussing politics, baseball and poker, he also talks about chess, weather, climate change, earthquakes and terrorist attacks.

He doesn’t discuss education, but there is one aspect of his book which I think has big implications for education research: the role of statistical analysis. The basic thesis of Silver’s book is that we cannot predict using statistics alone. We need a theory.

This was initially quite surprising to me, as I came to the book knowing that Silver was famous for his sophisticated statistical analyses and wonkish approach to problems. His successful political predictions became all the more famous because he was an outsider, coming not from a political background but a statistical one. I believe that in the early days of his blog, seasoned Beltway observers dismissed his predictions on these grounds. I was therefore expecting the book to be an endorsement of detached statistical methods and an attack on overly emotive predictions by biased ‘experts’ too immersed in their field to analyse it objectively.

And in some ways, it is that book. Silver does criticise political pundits for their laughably inaccurate predictions, and notes rather drily that one of the reasons he was able to make such a splash with his political predictions is that the bar was set so low. In other fields, however, he is much less critical of established experts. Notably, in the chapter on baseball he goes out of his way to show that there was much less of a difference than you might think between the new stats-based analysts and the old-style player scouts. Silver is one of the new-style baseball analysts of the kind profiled in the book and film Moneyball. The stereotype is that the grizzled old player scouts were caught unawares by a bunch of laptop-wielding geeks with an Excel model. Silver has a bit of fun with this stereotype, but ultimately declines to endorse it. He argues that the traditional scouts were always aware of stats, and that the modern ‘sabermetricians’ understood the nuances of the game. So whilst he is willing to criticise traditional political experts, he is less willing to criticise traditional baseball experts.

And on the question of whether detached statistical analyses will suffice for prediction on their own, Silver is clear that they cannot. Indeed, he argues that, paradoxically, the more data we uncover, the harder it may become to make accurate predictions.

This is why our predictions may be more prone to failure in the era of Big Data. As there is an exponential increase in the amount of available information, there is likewise an exponential increase in the number of hypotheses to investigate. For instance, the US government now publishes data on about 45,000 economic statistics. If you want to test for relationships between all combinations of two pairs of these statistics – is there a causal relationship between the bank prime loan rate and the unemployment rate in Alabama? – that gives you literally one billion hypotheses to test. But the number of meaningful relationships in the data – those that speak to causality rather than correlation and testify to how the world really works – is orders of magnitude smaller. Nor is it likely to be increasing at nearly so fast a rate as the information itself; there isn’t any more truth in the world than there was before the Internet or printing press. [Amen to that – click here to read chapter 3 of my book, which is about just this point].
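
Silver’s ‘one billion’ is not rhetorical: testing every pair of 45,000 series is a simple n-choose-2 calculation, which a couple of lines of Python (my sketch, not Silver’s) confirms:

```python
from math import comb

# Silver's figure: the US government publishes ~45,000 economic statistics
n_series = 45_000

# Testing every pairwise relationship is an "n choose 2" problem
n_pairs = comb(n_series, 2)
print(f"{n_pairs:,}")  # 1,012,477,500 -- literally about one billion hypotheses
```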

In fact, Silver directly takes on one of the more extreme predictions of futurologists – Chris Anderson’s prediction that Big Data would make the scientific method obsolete. Silver calls Anderson’s view ‘badly mistaken’, because ‘the numbers have no way of speaking for themselves. We speak for them. We imbue them with meaning.’

He titles another section of his book ‘data is useless without context’. If you only ever seek statistical significance, and never stop to ask whether your finding is plausible, then in a world of Big Data you will end up with a lot of odd predictions.

Thus, you will see apparently serious papers published on how toads can predict earthquakes, or how big-box stores like Target beget racial hate groups, which apply frequentist tests to produce ‘statistically significant’ (but manifestly ridiculous) findings.
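
To see how easily this happens, here is a minimal simulation of my own (toy data, nothing to do with toads or Target): generate fifty columns of pure noise, run a frequentist correlation test on every pair, and roughly five per cent of the pairs will come out ‘statistically significant’ at the conventional p < 0.05 threshold, despite there being no real relationships at all.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# 50 series of pure noise -- by construction, no real relationships exist
n_series, n_obs = 50, 100
data = rng.normal(size=(n_series, n_obs))

# Apply a frequentist significance test to every pair of series
false_positives, n_pairs = 0, 0
for i in range(n_series):
    for j in range(i + 1, n_series):
        _, p = pearsonr(data[i], data[j])
        n_pairs += 1
        if p < 0.05:
            false_positives += 1

# Expect ~5% of the 1,225 pairs (about 61) to look 'significant' by chance
print(f"{false_positives} of {n_pairs} pairs look significant")
```

Scale that five per cent up to Silver’s one billion hypotheses and you have tens of millions of spurious ‘findings’ waiting to be published.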

To return to education, I think the worry is that we introduce tests of statistical significance without a good working theory of how learning happens. Without such a theory, we are more likely to conduct meaningless tests, mistake correlation for causation, and confuse statistical significance with causal significance. E.D. Hirsch has written an absolutely brilliant article about exactly this problem, which I have blogged about before here.

To recap: in his article, Hirsch takes class sizes as his test case, perhaps one of the most popular issues in education, and one of the best researched. He notes that a methodologically sound study of class sizes in Tennessee showed that reducing class sizes had a positive impact on achievement. And yet, when California rolled out a hugely expensive programme to reduce class sizes, it had little impact. Hirsch’s point is that the Tennessee study, whilst methodologically robust, did not probe the root causes of its statistically significant finding. This is exactly the kind of thing Silver warns against. In his words, ‘statistical inferences are much stronger when backed up by theory or at least some deeper thinking about their root causes.’ That deeper thinking about root causes was absent in the case of the class size research.
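
The gap between statistical and causal significance is easy to simulate. In this toy sketch (invented numbers, emphatically not a model of the actual Tennessee or California data), a hypothetical confounder – call it funding – drives both smaller classes and higher achievement. Observationally, class size and achievement are strongly and ‘significantly’ correlated; but when we simulate a policy that sets class sizes directly, independently of funding, the relationship vanishes:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 1_000

# Hypothetical confounder: 'funding' drives both variables
funding = rng.normal(size=n)
class_size = -funding + rng.normal(scale=0.5, size=n)   # smaller where funded
achievement = funding + rng.normal(scale=0.5, size=n)   # higher where funded

r, p = pearsonr(class_size, achievement)
print(f"observational: r = {r:.2f}, p = {p:.1e}")  # strong 'significant' link

# A simulated policy sets class sizes independently of funding:
# the correlation disappears, because it was never causal
class_size_policy = rng.normal(size=n)
r2, p2 = pearsonr(class_size_policy, achievement)
print(f"intervention:  r = {r2:.2f}, p = {p2:.2f}")
```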

Hirsch notes that we do have a strong theory about how pupils learn: the one supplied by cognitive science. He argues that from its first principles we can derive certain ‘middle axioms’ or ‘reliable general principles’ to guide our day-to-day teaching. Here is his list of reliable general principles (in the article he discusses each at length).

• Prior knowledge as a prerequisite to effective learning.
• Meaningfulness.
• The right mix of generalization and example.
• Attention determines learning.
• Rehearsal (repetition) is usually necessary for retention.
• Automaticity (through rehearsal) is essential to higher skills.
• Implicit instruction of beginners is usually less effective.

It seems to me that this is an excellent and easily accessible summary of what we know from cognitive science. If we used these principles as a basis for devising RCTs, and as a starting point for discussing the findings we get from them, I think we would be doing well.

These middle axioms would also make an excellent basis for teacher training or a course of CPD, but as far as I know they have not been used in this way. This brings me to one final point about Hirsch. The article that these ‘middle axioms’ are taken from is a small masterpiece. In just a few thousand words, Hirsch combines his mastery of a range of different intellectual fields – educational history & theory, cognitive psychology, the scientific method – and produces some genuine theoretical and practical insights. There are many good people working in education, but there are none I have read who are capable of this kind of scholarship. All of us in education are lucky to be able to profit from his insights.