The NFER blog

Evidence for excellence in education

The importance of knowing what doesn’t work

By Ben Styles

This blog post leads on from a previous post, ‘The importance of not making a difference’, and is taken from a more detailed article.

I have recently been reminded of the difficulty we face when trying to communicate null or negative findings from research. In Spring 2013, a team from Coventry University delivered the Chatterbooks programme as part of a randomised controlled trial (RCT) funded by the Education Endowment Foundation. Chatterbooks is an extracurricular reading initiative that aims to increase a child’s motivation to read by providing schools with tools and resources to encourage reading for pleasure. In this trial, Chatterbooks was delivered instead of normal lessons.

In May 2014, the NFER Education Trials Unit published the results of the trial. Chatterbooks had an estimated average effect of slowing progress in reading by 2 months, although we could not be confident this negative effect hadn’t arisen by chance. If it was a genuine effect, it could have been because control pupils were learning faster in their existing lessons, so any improvements as a result of Chatterbooks would have been offset.
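This is not the trial’s actual analysis, and all numbers below are invented for illustration. But the logic of “an estimated negative effect we cannot be confident hasn’t arisen by chance” can be sketched with a simple simulated comparison: if the 95% confidence interval around the estimated effect spans zero, chance alone could explain the negative point estimate.

```python
import math
import random
import statistics

random.seed(1)

# Invented reading scores for two simulated groups of pupils. The "true"
# effect built into the simulation is slightly negative, but the pupil-level
# noise is large relative to it.
control = [random.gauss(100, 15) for _ in range(120)]
treated = [random.gauss(97, 15) for _ in range(120)]

# Estimated average effect: difference in group means.
effect = statistics.mean(treated) - statistics.mean(control)

# Standard error of the difference in means, then a normal-approximation
# 95% confidence interval around the estimate.
se = math.sqrt(statistics.variance(treated) / len(treated)
               + statistics.variance(control) / len(control))
lo, hi = effect - 1.96 * se, effect + 1.96 * se

print(f"estimated effect: {effect:.2f}")
print(f"95% CI: ({lo:.2f}, {hi:.2f})")
# If the interval includes zero, the negative point estimate could
# plausibly have arisen by chance -- the situation described above.
```

Real trial analyses (including this one) are more involved than a two-sample comparison, but the interpretation of an interval that crosses zero is the same.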

Either way, we can be pretty certain that across the 12 secondary schools involved in this trial, Chatterbooks was of no help in improving average attainment in reading for the children involved when compared to “business as usual”. Fast forward just over a year and the Department for Education (DfE) in England is funding The Reading Agency to extend Chatterbooks to 200 more primary schools all over the country.


The trial received limited press coverage at the time of publication, as it returned a null result; the trial was, however, part-funded by the DfE. The evaluation was carried out with 11- and 12-year-olds who were struggling with reading at the start of secondary school, whereas the new roll-out funding is for children in primary school between the ages of 7 and 11. It is possible that Chatterbooks is effective when run in primary schools and of no effect in secondary schools. But how different are 11-year-olds struggling with reading when they start secondary school to children with similar difficulties at the end of primary school?

One of the most consistent things we see from data is that the spread of ability within a year group far outstrips the average progress children make between years. Therefore, there will be children at the end of primary school who have similar reading skills to those who took part in the trial. In fact, there will also be children with similar reading skills in the year below, and the year below that, although they would not be classed as ‘struggling’. Exact details of what the new funding is for are not yet available. It may be for attending after-school book clubs, rather than a replacement for existing lessons, in which case the results of the trial are less relevant. However, the question remains: how can we be sure that the pupils who need the help will attend the Chatterbooks book clubs?
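The point about spread versus progress can be made concrete with a purely hypothetical simulation (none of these figures are NFER data): if reading ages within a cohort have a standard deviation of around two years while the average gain per school year is about one year, a large share of the younger cohort sits below the “struggling” threshold of the cohort above.

```python
import random
import statistics

random.seed(0)

# Hypothetical reading-age distributions (in years) for two adjacent
# cohorts: the mean rises by roughly one year per school year, but the
# within-cohort spread is assumed larger than that gain.
year6 = [random.gauss(11.0, 2.0) for _ in range(500)]  # end of primary
year7 = [random.gauss(12.0, 2.0) for _ in range(500)]  # start of secondary

gain = statistics.mean(year7) - statistics.mean(year6)
spread = statistics.stdev(year6)

# Treat the bottom quartile of the older cohort as "struggling" readers,
# then ask what share of the younger cohort reads at or below that level.
threshold = sorted(year7)[len(year7) // 4]
overlap = sum(score <= threshold for score in year6) / len(year6)

print(f"average yearly gain: {gain:.2f} years")
print(f"within-cohort spread (SD): {spread:.2f} years")
print(f"share of the younger cohort below the older cohort's "
      f"struggling threshold: {overlap:.0%}")
```

Under these assumed numbers the within-cohort spread exceeds the yearly gain, so the overlap between struggling readers in adjacent year groups is substantial, which is the blog’s point about trial pupils resembling end-of-primary pupils.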

The trial targeted children who were struggling readers, and it was not successful. If the book clubs do not provide support for these children, success seems even less likely. I would maintain that the results of the trial should prompt serious consideration – alongside the wider set of factors the government needs to take into account when making decisions – when deciding whether or not to fund the programme for any phase of schooling.

At the very least, a robust evaluation of the use of these funds is warranted. If the programme is shown to be more successful when implemented in a different way, this would be valuable in informing future Chatterbooks activities.

We are a long way from academic journals, let alone the press, giving equal weight to null or negative findings as compared to those that demonstrate a positive effect. Null or negative findings are, of course, just as important as positive ones. A school spending its valuable Pupil Premium resources on an intervention that is demonstrated to be ineffective can quickly change tack to something that has greater weight of evidence behind it. Was there robust evidence for or against the other book club programmes the DfE was considering when awarding its funding? Probably not.

It will take time for more rigorous research methods to be widely used by the English education research community. And the wider question is not so much whether this was the right book club for DfE to fund, but whether this is the best way of improving reading. Thankfully, high quality evidence is now accumulating at breathtaking speed in England. Perhaps, next time the Government is deciding how to spend its money on improving reading it will have more to go on.


National Foundation for Educational Research

One thought on “The importance of knowing what doesn’t work”

  1. I found this very interesting. In Hattie’s Visible Learning he states that almost anything works. I worry about that because I wonder if what it shows is that there’s a large bias introduced by reporting what works and then not reporting what doesn’t. This is well documented in medicine, I believe. I can think of three projects that just petered out because they didn’t work and never got written up. It then tends to suggest that if something doesn’t work – then it really doesn’t work. If no-one could show a positive impact and the worst results weren’t even published then why on earth pursue it? I think these sorts of results are far more likely to be reliable than positive results. By culling approaches that definitely don’t work we might make progress in teaching and learning.
