Last weekend saw the annual education evidence fest, aka ResearchED 2018, take place in St John’s Wood, London. Unfortunately, one of the inevitable disappointments of attending #rED18 is that you are unable to see all the speakers you would like to see. As such, you often have to make do with other people’s summaries of speakers. So I was particularly pleased to see Schools Week come up with the article ResearchED 2018: Five interesting things we learned, and even more pleased when I saw it contained a short summary of Dr Sam Sims’ presentation on the positive impact of instructional coaching. However, my excitement was short-lived when I came to read the article, and in particular the reference to a statistically significant positive effect of instructional coaching being found in 10 out of 15 studies, which was then used to infer that instructional coaching is “probably the best-evidenced form of CPD currently known to mankind”.
Now, as I wasn’t there, didn’t see the presentation and haven’t subsequently talked to Sam, I am loath to make a specific critique of the presentation. Instead, I am going to look at the general issue of what we can learn when a variety of studies find positive and statistically significant results from the implementation of an intervention in a range of different circumstances. In doing so, I am going to lean heavily on the work of Cartwright and Hardie (2012).
Piling on the evidence
Writing about RCTs that have been conducted in a variety of circumstances, Cartwright and Hardie argue that these results can help in justifying the claim that an intervention is effective, i.e. that it worked somewhere, or in a range of different somewheres.
However, this does not provide direct evidence that the intervention will work here, in your circumstances or setting. Cartwright and Hardie argue that seeing the same intervention work in a variety of different circumstances provides some evidence that the intervention can play a causal role in bringing about the desired change. It may also provide evidence that the support factors needed for the intervention to work occur in a number of different settings. On the other hand, an intervention working in lots of places is only relevant to whether it will work here if certain assumptions are met. First, the intervention can play the same causal role here as it did there. Second, the support factors necessary for the intervention to play a positive causal role here are available for at least some individuals post-implementation.
Cartwright and Hardie go on to explain how ‘it works there, there and there’ is supposed to provide evidence that it will work here, using induction:
X plays a causal role in situation 1
X plays a causal role in situation 2
…
So X plays a causal role everywhere.
For this argument to be robust, it requires research studies/RCTs from a wide range of settings. It also requires knowing not just that the observations are generalisable across settings, but also to what types of settings they are likely to generalise. In addition, it is necessary to take into account the causal connections at work, and how they are influenced by local support factors.
The problem with induction
At this point it is worth remembering that all inductive inferences can be wrong. Kvernbekk (2016) cites Bertrand Russell, who talks about a chicken:
Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.
As such, we ourselves may be in no better position than the chicken which unexpectedly has its neck wrung.
Nevertheless, Cartwright and Hardie go on to argue that lots of positive RCT results are a good indicator that the intervention plays the same causal role widely enough for it to potentially ‘carry’ to your setting. However, this rests on the assumption that the RCTs have been carried out in a sufficiently wide range of settings, ideally with some similar to your own, so that you are able to make some generalisation from there to here. Even so, as Cartwright and Hardie note, ‘that bet is always a little bit dicey.’
Discussion and implications
First, be wary of headlines or soundbites – it’s rarely that straightforward.
Second, if you can, get to the original research which was used for the headline and read it for yourself, as there may be a range of nuances which you need to be aware of.
Third, if you think the intervention might have potential for your school, you need to spend time thinking about the causal roles and support factors necessary to give the intervention a chance of working. Indeed, you may want to have a look at this post on causal cakes.
Fourth, as Cartwright and Hardie argue, thinking about causal roles and support factors is hard work and difficult to get right every time. However, it’s not beyond you and your colleagues to have a serious discussion about what’s necessary to put in place in order to give an intervention every chance of success.
And finally
In next week’s post I will be using the work of Kvernbekk (2016) to explore in more detail the challenge of making what worked there work here.
References
Cartwright, N. and Hardie, J. (2012). Evidence-Based Policy: A Practical Guide to Doing It Better. Oxford: Oxford University Press.
Kvernbekk, T. (2016). Evidence-Based Practice in Education: Functions of Evidence and Causal Presuppositions. London: Routledge.