The NFER blog

Evidence for excellence in education

If summer-borns do less well at school, who or what is to blame?


By guest blogger Caroline Sharp, Research Director (October-born)

I recently attended the launch of the Nuffield report on month of birth. I have been researching this issue for some years now, so I was pleased to be invited onto the project advisory group. Actually, when I was first invited I thought ‘what will this new study add to our knowledge of the subject?’ followed swiftly by ‘why didn’t I think of that?’.

The research was led by a team of economists at the Institute for Fiscal Studies. They used large national datasets and performed some clever analysis to explore the effects of being born in different months on school attainment, wellbeing, university attendance and employment. Like other studies, they found that there is an advantage to being older, and a disadvantage to being younger, in your year group. They confirmed that the difference is largest when children are youngest and decreases as they get older. While there is still a small difference in the proportion of August-borns who progress to university, compared with those born at other times of the year, there is no evidence of differences in adult employment, earnings or happiness.

What is really great about this research is that it has not just replicated and reinforced the results of previous studies but has also advanced our understanding of the causes of birth date differences and therefore helped to identify the best solutions.

Once you have removed the possibility that these differences are caused by an accident of birth – either the more fanciful astrological explanation, or the more plausible explanation of seasonal pre-natal infection or exposure to sunlight – you are faced with the conclusion that any differences stem from the way the education system groups children and deals with their maturation, rather than from some innate differences in children born at different times of the year.

This leads to the further question of whether the differences are due to age of entry to school, relative age within the class, or age on testing. The team found that the best explanation is age on testing: the differences can be explained by the fact that August-born children are almost 12 months younger than September-borns when they are assessed. I welcome this finding because it always seemed to me to be the most logical explanation.

An analogy that comes to mind is that expecting all children in a year group to perform at the same level on a test is like expecting all runners to reach the finish line together even though some started closer and had less far to run.

Once the research team adjusted test scores by month of birth, the differences in performance disappeared. Differences in age of starting school and length of schooling did not appear to contribute once differences in age on testing were taken into account.

So now the tricky bit – what should policy makers, teachers and parents do? Well, an obvious recommendation is for age-adjusted scores to be provided alongside ‘raw’ scores on all national assessments and similar tests (such as school entrance exams). It is important to have both raw and age-adjusted scores, so that you can judge whether children have achieved a particular standard and also make fair comparisons that avoid younger children being penalised simply for being younger.
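To make the idea of age adjustment concrete, here is a minimal sketch in Python using invented pupils and scores. It is not how national assessments are actually standardised – it simply shifts each pupil's raw score by the gap between their birth-month cohort's average and the overall average, so that the month-of-birth gap disappears from the adjusted scores while the raw scores remain available.

```python
# Illustrative sketch only: hypothetical pupils and scores.
# Real age-standardisation of national assessments is more sophisticated;
# this just demonstrates the principle of adjusting for age on testing.
from statistics import mean

# (name, birth month counted from the start of the school year, raw score)
# 1 = September (oldest in the year group) ... 12 = August (youngest).
pupils = [
    ("Amy", 1, 78), ("Ben", 1, 74),
    ("Cal", 6, 71), ("Dee", 6, 69),
    ("Eli", 12, 66), ("Fay", 12, 64),
]

# Mean raw score for each birth-month cohort.
by_month = {}
for _, month, score in pupils:
    by_month.setdefault(month, []).append(score)
cohort_mean = {m: mean(scores) for m, scores in by_month.items()}

overall = mean(score for _, _, score in pupils)

# Age-adjusted score: remove the cohort's average advantage or
# disadvantage, so younger pupils are not penalised simply for
# being younger when cohorts are compared.
adjusted = {
    name: score - (cohort_mean[month] - overall)
    for name, month, score in pupils
}
```

After adjustment, every birth-month cohort has the same average score, mirroring the finding that the performance gap disappears once age on testing is accounted for – while the raw scores still show what each child actually achieved.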

Second, teachers and schools should be aware of age differences when comparing children within classes – a simple technique is to put a class register in order of birth. A particular concern is that relatively younger children seem to be more likely to be identified as having special educational needs. The explanation seems to be that teachers are comparing children’s progress with others in the class, thereby over-identifying younger children and under-identifying older children who may need the extra support. Clearly, this needs to be addressed, for example by taking the birth dates of children into consideration when identifying those needing additional support.
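The register idea above can be sketched in a few lines of Python – the names and dates here are invented for illustration. Sorting the class list by date of birth makes relative age within the year group visible at a glance.

```python
# Minimal sketch: a class register ordered by date of birth,
# using made-up pupils from a single school-year cohort.
from datetime import date

register = [
    ("Priya", date(2016, 8, 30)),   # August-born: youngest in the year
    ("Tom",   date(2015, 9, 4)),    # September-born: oldest in the year
    ("Leah",  date(2016, 2, 17)),
]

# Oldest first: sort on date of birth.
by_age = sorted(register, key=lambda pupil: pupil[1])

for name, dob in by_age:
    print(name, dob.isoformat())
```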

Third, although I am concerned about introducing formal learning too early, I am not in favour of allowing more flexibility in our definitions of year groups. The reason for this is that it results in a greater range of ages within a class with little evidence of overall benefit for children as a whole. Rather than allowing children to delay starting school and join a different year group, I think we need to make our schools more responsive to the needs of all children and their parents (for example, by staff supporting younger children’s transition to school and making sure teaching and learning is differentiated by age).

Finally, I am concerned about the evidence of psychological effects of being younger in the class, so this is where I would focus research attention in future. We need to understand more about the causes of this and identify some practical solutions for parents, teachers and the children themselves.



National Foundation for Educational Research

3 thoughts on “If summer-borns do less well at school, who or what is to blame?”

  1. As a setting, we currently make a two-year-old progress check under the EYFS when children are around 32 months. Would it not make sense to undertake the EYFS profile in the same way at a given point – say 60 months – rather than in the June of Reception year? I do acknowledge though that this would be unhelpful to DfE statisticians!

  2. This sounds like a good idea in principle, as it would make for a fairer comparison between children. It all goes back to what the assessment is for. If you want to know what each individual is capable of in relation to developmental milestones, then an age-related assessment is the best approach. But if you want to know what children are able to do by the time they complete a stage of education (such as the end of the EYFS) then you need an assessment towards the end of the educational stage. Because certain assessments are used for both purposes, you need both a ‘raw’ and an age-adjusted result. (CS)

  3. Hi, yes I see your point. I guess, though, the idea is that assessing/measuring at the end of the EYFS, as we currently do, is not really measuring children like with like. So, as you say, perhaps our bigger question is why we ‘measure’ children? Is it, as you rightly point out, to look at individual capability? Or to see how the system is shaping up regardless of children’s actual age? The latter supports the factory model of education (Bennett, K. P. and LeCompte, M. D. (1998) How Schools Work), which looks at efficient operation with raw materials and end products… just my ramblings.
