The NFER blog

Evidence for excellence in education

The importance of not making a difference


By guest blogger Anneka Dawson, Research Manager

Schools beware – trials showing no statistically significant impact on pupils’ achievement get little media coverage.

In May, the Education Endowment Foundation (EEF) published the latest results from four of its 20 randomised controlled trials (RCTs) of interventions believed to improve literacy levels in the transition from primary to secondary school. The headline findings reported across most mainstream news coverage focused on the ‘improving writing quality’ trial, which boosted children’s reading by nine months, but made no mention of the other three trials – which found no significant differences.

These reports highlight two interesting misconceptions about the use of randomised controlled trials in educational research: firstly, that an intervention should be abandoned immediately if the results of one trial show it is not having a positive impact on learning; and secondly, that results which do not show a statistically significant improvement are unimportant. There is also a related concern that randomised controlled trials are unethical, because one group (the control group) is ‘denied’ the intervention. This blog post aims to address all three.

To address the first misconception, the EEF blogged on the ‘significance of no significance’, challenging the view that we should give up immediately on interventions that have not shown a statistically significant effect in a trial, without setting the findings in context. They point out that such a finding may be only part of a fuller story: perhaps the intervention would work with a different group of students (for example, younger students), or needs fine-tuning to take account of the practicalities of running a trial in schools (including leaving schools plenty of time to plan for an intervention so that they can meet its requirements in full).
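
A small simulation helps to show why a single non-significant result is weak evidence that an intervention does nothing. The sketch below is purely illustrative: the true effect of 0.2 standard deviations and the 30 pupils per arm are hypothetical assumptions, not figures from any EEF trial. Under those assumptions, most simulated trials fail to reach statistical significance even though the intervention genuinely works.

```python
# Minimal illustrative simulation: a real but modest effect often fails to
# reach statistical significance in a small two-arm trial.
# The effect size (0.2 SD) and sample size (30 pupils per arm) are
# hypothetical assumptions, not figures from any EEF study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.2   # assumed true gain, in standard deviations
n_per_arm = 30      # assumed pupils per arm
n_trials = 5000     # number of simulated trials

significant = 0
for _ in range(n_trials):
    control = rng.normal(0.0, 1.0, n_per_arm)
    intervention = rng.normal(true_effect, 1.0, n_per_arm)
    _, p_value = stats.ttest_ind(intervention, control)
    if p_value < 0.05:
        significant += 1

# With these assumptions the trial detects the real effect only ~10-15% of the time.
print(f"Estimated power: {significant / n_trials:.2f}")
```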

Further research into many educational interventions is needed to build a full body of evidence that clearly demonstrates which interventions work, which do not, and in what contexts. This is what practitioners need in order to make informed choices about which interventions to use in their schools, and it is what the EEF and the Sutton Trust aim to offer through their Sutton Trust-EEF Teaching and Learning Toolkit. The toolkit brings together the findings from many different research projects in a given area (such as early intervention) and translates the results into the estimated months of additional progress that pupils receiving the intervention would make over a year.
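
As a rough illustration of the kind of translation the toolkit performs, the sketch below pools effect sizes from some hypothetical studies of a single intervention and converts the result into estimated months of additional progress. The studies, the sample-size weighting and the months-per-effect-size rule of thumb are all illustrative assumptions, not the EEF’s published data or conversion table.

```python
# Illustrative sketch only: pool effect sizes from several hypothetical studies
# and express the pooled result as estimated months of additional progress.
# The study figures and the conversion rule are assumptions for illustration,
# not the EEF's actual data or published conversion.

# (effect size, sample size) for some hypothetical studies of one intervention
studies = [(0.25, 120), (0.10, 300), (0.35, 80)]

# Simple sample-size-weighted average effect size
total_n = sum(n for _, n in studies)
pooled_effect = sum(es * n for es, n in studies) / total_n

# Assumed rule of thumb: roughly 0.09 of an effect size per month of progress
months = round(pooled_effect / 0.09)

print(f"Pooled effect size: {pooled_effect:.2f}")
print(f"Estimated additional progress: about {months} months")
```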

To address the second misconception, it is important to challenge the persistent view that studies showing no significant differences are showing ‘no results’. These projects are of equal value to the schools and pupils the programmes are trying to help as those that show a positive impact: it is just as important for schools to know what doesn’t work, so they don’t waste their time implementing an intervention that is unlikely to make a difference, as it is to know what does. In the past, studies showing that an intervention made no statistically significant difference may simply not have been published, so knowledge about what doesn’t work remained hidden. Thanks to the increasing use of trial registries such as http://controlled-trials.com/, we are becoming better able to find the results of trials, whether or not the findings were positive. This is essential knowledge for senior leaders in schools deciding how precious school budgets should be spent, as outlined in a previous blog post by my colleague Ben Styles.

Finally, I would like to debunk the common myth that participation in an RCT is unethical because some of the schools (or pupils) are denied the intervention. As the current EEF example shows, only one of the four literacy interventions made a statistically significant difference to pupils’ achievement, so pupils who were not allocated to the other three interventions were not disadvantaged compared with those who were. More importantly, we only discovered this by running randomised controlled trials in which a group of schools or pupils acted as a control (i.e. did not have access to the intervention). Given the value of these findings to learners, it could instead be argued that RCTs are the most ethical method for evaluating interventions! All groups in a trial are vital to its ability to determine whether an intervention is worth a school’s precious time, effort and budget.

The increasing use of RCTs in educational research, demonstrated once again by the publication of these EEF results, is raising a number of questions about the best approach to collecting evidence on what works in schools. As more trials are undertaken, the benefits of RCTs are becoming increasingly apparent, as are the issues we are working through to make them as valuable as possible.


4 thoughts on “The importance of not making a difference”

  1. I have several additional issues with RCTs which I’ve described in technical notes on my blog. One is that there is a loss of match through randomisation; a second is that an RCT can only assess one aspect against another – or at most two others; and a third is that it is not suited to investigation, being more a reconnaissance in force. I’ve cited developments in surgery as having taken another route, and one that is equally valid.

  2. Disappointed there is no mention of statistical power in the discussion of trials which do not provide evidence to reject the null. If a high-powered study does not reject the null, then this is a big deal. If a low-powered study fails to reject the null, this tells us much less.

  3. Thank you John for some very useful analysis of this subject. For me the point that stood out most as a teacher and a parent is the “key issue of long-term follow up to check on whether improvements are lasting”. That is absolutely critical to education and we will seriously miss a trick if we do not understand that education is about human development and mastery of cumulative skills. For example, I often come across youngsters who can decode text but cannot read for meaning, so they appear to be able to read but actually have no idea what they are reading. Phonics might help in teaching decoding, but a child needs far more to be able to read for meaning and we are not capturing that complexity.
    RCTs in my area of educational research risk over-simplifying matters to the point where they are meaningless. There are also ethical issues to consider: for instance, if you know that a child has a physical impairment which blocks cognitive processing, is it ethical not to address that issue? Staff are also not keen on failing to intervene when they know what the problem is and how to deal with it; we cannot run trials without training the staff.
    There are also practical issues to consider in selecting schools for RCTs: if a school is not conscientious in supporting the intervention, it will produce far less impressive results than it would in a conscientious school. To make the trial truly random, the children would have to be treated consistently by staff in each sample, which is quite difficult to achieve. The more staff involved, the more human variation will arise; but if you use a small number of staff, the intervention will be delivered at different times of the day, and a regular slot just before lunch or home time is rather different from first thing in the morning.
    I really want educational research to be robust, but I do think that John has come up with some very important issues to be considered.

  4. Comment from Anneka Dawson…

    Thank you Nat, Charlotte and John for your comments. In this instance the blog post was focusing on the media’s portrayal of research results; however, we acknowledge that there are many important aspects of RCTs to consider.

    NFER does not only support RCTs: we evaluate which method is best for measuring each intervention. It is important to consider what is right for the particular study, and much of our work is heavily qualitative, exploring teachers’ thoughts and experiences, which allows us to examine in depth the ‘why’ alongside the ‘what works’.
