Recently published research (Wiggins et al. 2019) suggests that an evidence-informed approach to school improvement – the RISE Project – may lead to pupils making small amounts of additional progress in mathematics and English compared to children in comparison schools. However, these differences are both small and not statistically significant, so the true impact of the project may have been zero. Now for critics of the use of research evidence in schools, this may indeed be ‘grist to their mill’, with the argument being put forward that schools should not commit resources to an approach to school improvement which does not bring about improvements in outcomes for children. So where does that leave the proponents of research use in schools? Well, I’d like to make the following observations, though I should add that they are made with the benefit of hindsight and may not have been obvious at the time.
First, the evidence-informed model of school improvement was new, so we shouldn’t be surprised if new approaches don’t work perfectly the first time. That doesn’t mean we should be blasé about the results or try to downplay them just because they don’t fit with our view of the potential importance of research evidence in bringing about school improvement. More thinking may need to be done to develop both robust theories of change and theories of action, which would increase the probability of success. Indeed, if we can’t develop these robust theories of change and action, then we may need to think again.
Second, the RISE Model is just one model of using evidence to bring about school improvement, and its Research Lead model was highly reliant on individuals within both Huntington School and the intervention schools. Indeed, the model may have been fatally flawed from the outset: work in other fields (Kislov, Wilson, and Boaden 2017) suggests that it is probably unreasonable to expect any one individual to have all the skills necessary to be a successful school research champion – coping with different types of knowledge, building connections both within and outside of the school, and at the same time maintaining credibility with diverse audiences. As such, we need to look at different ways of increasing the collective capacity and capability of schools to use research and other evidence, which may have greater potential to bring about school improvement.
Third, the EEF’s school improvement cycle may itself be flawed and require further revision. As it stands, the cycle consists of five steps: decide what you want to achieve; identify possible solutions, with a focus on external evidence; give the idea the best chance of success; did it work?; and secure and spread change by mobilising knowledge. However, for me, there are two main problems. First, at the beginning of the cycle there is insufficient emphasis on the mobilisation of existing knowledge within the school, and too much emphasis on external research evidence. The work of Dr Vicky Ward on how to engage in knowledge mobilisation is very useful here. Second, having identified possible solutions, the cycle moves straight to implementation, whereas there needs to be a step where all sources of evidence – research evidence, practitioner expertise, stakeholder views and school data – are aggregated and a professional judgment is made on how to proceed.
Fourth, some of the problems encountered – for example, high levels of staff turnover as those involved in a high-profile national project used it as a springboard for promotion – were pretty predictable and should have been planned for at the start of the project.
Fifth, the project was perhaps over-ambitious in its scale, with over 20 schools actively involved in the intervention; it might have benefitted from a small efficacy trial before moving to a full randomised controlled trial. Indeed, there may need to be a range of efficacy trials looking at a number of different models of evidence-informed school improvement.
Sixth, we need to talk about headteachers and their role in promoting evidence-informed practice in schools. It’s now pretty clear that headteachers have a critical role in supporting the development of evidence-informed practice (Coldwell et al. 2017), and if they are not ‘on board’ then Research Leads are not going to have the support necessary for their work to be a success. Indeed, the EEF may need to give some thought not just to how schools are recruited to participate in trials, but also to the level of commitment of the headteacher, perhaps with a process for gauging headteacher commitment to research use in schools.
And finally
The EEF and the writers of the report should be applauded for using the TIDieR framework, which provides a standardised way of reporting on an intervention – a great example of education learning from other fields and disciplines.
References
Coldwell, Michael, et al. 2017. Evidence-Informed Teaching: An Evaluation of Progress in England. Research Report. London: Department for Education.
Kislov, Roman, Paul Wilson, and Ruth Boaden. 2017. “The ‘Dark Side’ of Knowledge Brokering.” Journal of Health Services Research & Policy 22(2): 107–12.
Wiggins, M., et al. 2019. The RISE Project: Evidence-Informed School Improvement. Evaluation Report. London: Education Endowment Foundation.