This week saw the welcome announcement of the appointment of Dr Becky Allen as the director of the UCL IOE's Centre for Education Improvement Science. On her appointment, Dr Allen said she wishes to help develop "a firmer scientific basis for education policy and practice", drawing on methods such as laboratory experiments and classroom observation.
Now, regular readers of this blog will know that I have often expressed concern over how educational researchers misuse terms associated with evidence-based practice. So, given this new initiative in improvement science, it seems sensible to look at a definition of improvement science/research, and to do this I'll use the work of LeMahieu et al. (2017).
Improvement Research: a definition (LeMahieu et al., 2017)
Improvement research is … about making social systems work better.
Improvement research closely inspects what is already in place in social
organizations – how people, roles, materials, norms and processes interact. It
looks for places where performance is less than desired and brings tools of empirical inquiry to bear to produce new knowledge about how to remediate
the undesirable performance. Put simply, improvement research is not
principally about developing more “new parts” such as add-on programs,
innovative instructional artifacts or technology; rather, it is about making the
many different parts that comprise an educational organization mesh better to
produce quality outcomes more reliably, day in and day out, for every child and
across the diverse contexts in which they are educated.
Examples of
Improvement Research/Science
- Networked Improvement Communities;
- Design-Based Implementation Research;
- Deliverology;
- Implementation Science;
- Lean for Education;
- Six Sigma;
- Positive Deviance
As such, LeMahieu et al. (2017) state that all seven of the approaches … share a strong "common core". All are in a fundamental sense "scientific" in their orientation. All involve explicating hypotheses about change and testing these improvement hypotheses against empirical evidence. Each subsumes a specific set of inquiry methods and each aspires to transparency through the application of carefully articulated and commonly understood methods – allowing others to examine, critique and even replicate these inquiry processes and improvement learning. In the best of cases, these improvement approaches are genuinely scientific undertakings.
In other words, improvement research is a form of 'disciplined inquiry' (Cronbach and Suppes, 1969).
What Improvement Science Is Not
However, as LeMahieu et al. (2017) note, a major distinguishing feature of improvement research is what it does not attempt to do. Improvement research is not about creating new theories or about research and development. Nor is it about seeking to evaluate existing teacher strategies, interventions or field-based trials. Rather, improvement science is about doing more of what works, stopping what doesn't, and making sure everything is joined up in ways which bring about improvements in a particular setting.
Given this stance, statements about the Centre for Education Improvement Science (CEIS) being about 'laboratory experiments and classroom observations' seem a little incongruent with existing work in the field.
My confusion about the work of the CEIS is further compounded by a mention in Schools Week, which reports that for Improvement Science London, also based at UCL, improvement science involves recognising "the gap between what we know and what we put into practice" and using the "practical application of scientific knowledge" to identify what needs to be done differently. However, that could probably be more accurately described as 'implementation science' (admittedly a subset of improvement science). So, let's delve into a little more detail about what is meant by 'implementation science'.
What is implementation science?
Barwick (2017) defines implementation science as the scientific study of methods that support the adoption of evidence-based interventions into a particular setting (e.g., health, mental health, community, education, global development). Implementation methods take the form of strategies and processes that are designed to facilitate the uptake, use, and ultimately the sustainability – or what I like to call the 'evolvability' – of empirically supported interventions, services, and policies into a practice setting (Palinkas & Soydan, 2012; Proctor et al., 2009); referred to herein as evidence-based practices (EBPs).
Barwick goes on to state that implementation focuses on taking interventions that have been found to be effective using methodologically rigorous designs (e.g., randomized controlled trials, quasi-experimental designs, hybrid designs) under real-world conditions, and integrating them into practice settings (not only in the health sector) using deliberate strategies and processes (Powell et al., 2012; Proctor et al., 2009; Cabassa, 2016). Hybrid designs have emerged relatively recently to help us explore implementation effectiveness alongside intervention effectiveness to different degrees (Curran et al., 2012).
As a consequence, implementation science sits on the right-hand side of the following figure (taken from Barwick, 2017).
So where does this leave us?
Well, on the one hand, I am really excited that educational researchers are beginning to pay attention to the work being done in fields such as improvement and implementation science. On the other hand, I'm a bit disappointed that we are likely to make the same mistakes as we have with evidence-based practice and not fully understand the terms we have borrowed.
Finally, this post may be completely wrong, as I have relied on press releases and press reports to capture the views of the major protagonists; as such, I may be relying on 'fake news'.
References
BARWICK, M. 2017. Fundamental Considerations for the Implementation of Evidence in Practice. Melanie Barwick: Journeys in Implementation [Online]. Available from: https://melaniebarwick.wordpress.com/ [Accessed 15 November 2017].
LEMAHIEU, P., BRYK, A., GRUNOW, A. & GOMEZ, L.
2017. Working to improve: seven approaches to improvement science in education.
Quality Assurance in Education, 25, 2-4.