Implementation science: what is it?
The social sector needs to take notice of this growing field that’s driving better interventions in health, education, and human services.
- Implementation science focuses on what helps and what hinders the uptake, effective implementation, and sustainability of proven programs, practices, and policies in everyday services.
- The field has developed to drive the better use of effective interventions in health, education and human services.
- It addresses many factors that can undermine the quality of implementation, including competing demands on frontline practitioners; lack of knowledge, skills and resources; and misalignment of the evidence with operational priorities.
- To be well implemented, programs need to be adequately described and funded for more than just training, and the data collected needs to cover three areas: reach, implementation and outcomes.
Nick Perini from SVA Consulting talks to Robyn Mildon, Executive Director of the Centre for Evidence and Implementation (CEI), to find out why.
Nick Perini: What is implementation science and where is it applied?
Robyn Mildon: Implementation science focuses on what helps and what hinders the uptake, effective implementation, and sustainability of proven programs, practices, and policies in everyday services.
Just because a program has been shown to work (in good scientific trials) doesn’t make reaping its benefits (in real world settings) easy. It requires focused efforts.
The field of implementation science has developed to help with this – to drive the better use of effective interventions in health, education and human services.
Essentially it is about identifying the core activities that will help a workforce implement something fully and properly, and help an organisation support that workforce to implement more effectively.
These insights can be helpful for change specialists, project managers and service delivery CEOs in planning for implementation.
Achieving full and effective implementation of evidence-informed programs into everyday services can take on average 2-4 years.
Why is this needed as a separate discipline?
Achieving full and effective implementation of evidence-informed programs into everyday services can take on average 2-4 years. This is known as the research-to-practice gap.
When new programs and practices are applied in less controlled settings – that is, outside of a controlled research study – barriers will appear. And they will lead to problems in the quality of the implementation.
Many factors can get in the way of actual implementation: competing demands on frontline practitioners; lack of knowledge, skills and resources; misalignment of the evidence with operational priorities; and so on.
Implementation science develops strategies to address some of these factors and improve the quality with which programs and practices are implemented.
Although as a field, implementation science grew out of the health area – endeavouring to answer questions such as, ‘How do we get the polio vaccine to the population that could benefit from it?’ or ‘How do we get physicians to fully implement all steps of a set of clinical guidelines?’ – now, it is also being applied in the education and human services settings.
How is it different from organisational change or change management?
All share the same goal of improving programs, services and operations, and the methods used can overlap. However, while there is a vast literature on organisational change, most of it has a very weak evidence base, often relying on case studies or the writer’s experience.
… coaching in-situ for frontline practitioners learning new practices is one of the most effective ways to help them to deliver that approach well.
Implementation science on the other hand, contributes to or uses a body of research evidence. Implementation scientists study the processes that enable better implementation and that get in the way of it, and apply that research-based knowledge in practice.
For example, from implementation science, we know coaching in-situ for frontline practitioners learning new practices is one of the most effective ways to help them to deliver that approach well. Training by itself, no matter how good, has been shown to be ineffective in achieving this.
The implementation science literature also clearly shows the importance of considering the readiness of an organisation or system to implement the program or practices. This has resulted in the development of a number of tried and tested readiness assessments to improve the chances of achieving intended changes.
How has implementation science been received in Australia and what are some good examples?
Everyone agrees that you need to tend to implementation. However, sometimes people don’t welcome the message that it’s going to take more than workforce training to get something effectively implemented.
Implementation of education approaches
The Centre for Evidence and Implementation in partnership with Evidence for Learning, an education enterprise incubated by SVA, has produced a report about high quality implementation of educational approaches. The report identifies crucial elements of implementation in education that ensure approaches are given their greatest chance to improve outcomes for students. It fills a gap as there has been little research in Australia that summarises the important elements in implementation of practices and approaches in real-world classrooms and school settings.
The report reveals four major indicators of implementation quality that influence students’ outcomes and teachers’ attitudes and practices.
It is early days but we are starting to see some really good examples in Australia. The Federal Government and a handful of state governments have invested in implementation science expertise to support organisations and broader government systems to implement a range of different services.
They’ve asked: how do we create the infrastructure that will enable, rather than prevent, a service or program being implemented as planned; monitor it to determine what effects are being achieved; and track whether we’re getting the outcomes promised?
For example, the Australian Federal Government, as part of the Intensive Family Support service to support families and address child neglect in the Northern Territory, funds a specialist implementation capacity support team. The team provides on-the-ground support to the local workforce to build their knowledge and skills to deliver the service. At the same time, it works with agencies to better align their systems and processes and support the effective implementation of the services. For example, by ensuring they are collecting the right data at the right time to determine how implementation is going and how it could be improved.
The NSW Department of Family and Community Services is funding the Centre for Evidence and Implementation to develop an implementation strategy with them for their new state-wide Child Protection Practice Framework, and to support the early use of this strategy in practice.
Non-government agencies are also looking to implementation science to better support their work. This year, the Black Dog Institute engaged us to design a tailored Implementation Guide that informs and guides the roll-out of their new LifeSpan suicide prevention initiative in NSW.
In your experience, what has to be addressed to see programs well implemented?
A number of things but these three come to mind.
One is, we need to ensure that the program or policy you are working to get into a system is actually implementable. This means it has to be adequately described. Unfortunately, particularly in social services, a lot of programs are made up of big-picture, broad statements. These are important but insufficient to enable practitioners and supervisors to understand what they are supposed to do, how they can continually get better at it, and how they can monitor whether it’s happening. We often need to look at the implementability of interventions, and improve the descriptions of them to improve the training and coaching in the field.
This also means that you need innovation-specific capacity – somebody who knows, at a very detailed level, about the program that you’re trying to implement. Take SVA’s project Restacking the Odds, which is creating an evidence-based measurement framework to help improve services for children experiencing intergenerational disadvantage. In this case, you need staff with expertise in and a good understanding of the parenting, early childhood, and health-focused programs that the project intends to implement; staff who know what’s workable and what’s not.
Secondly, we need to fund more than just training. As mentioned earlier, coaching is the most effective way to help people deliver a new approach well. We need to move beyond only giving workers training because it’s been shown to be mostly ineffective by itself, no matter how good it is. Again, it is important but insufficient to get the change.
The third is data. We need to align what we collect with what will help understand three things:
- Reach: Are you reaching the population that you intend to serve? (This means really understanding the characteristics and the needs of the people you’re serving.)
- Implementation: Are you implementing in the way you intended? (This is often referred to as program adherence and fidelity.)
- Outcomes: As a result of your implementation efforts, are you achieving the outcomes that you set out to achieve?
If we are able to collect quality data across these three areas, then we are better able to monitor how we’re going and continuously improve.
The data piece is important because your assessment of how well the implementation has gone is only as good as the data you collect. And focusing on continuous improvement is key to working in an implementation science informed way.
For example, when implementing a mental health program, are we using a quality measure of mental health that’s valid and reliable so you can trust that it’s measuring what it intends to measure? Many organisations use homemade measures in their programs. Implementation science tries to help move away from that and towards the use of measures that are more valid and reliable.
How should program managers and/or policy makers be applying implementation science in their planning and practice?
For program managers or policy makers, using a known implementation framework to guide the planning process can be quite helpful.
… implementation isn’t an event but a process that unfolds in stages…
There’s been an explosion of implementation frameworks – more than 60 are now available in the literature. They describe both the process of implementation and the actions and behaviours that give individuals and organisations a better shot at achieving quality implementation. As such, they can guide the overall planning of an implementation and help problem-solve during an implementation process.
One thing most of these frameworks agree upon is that implementation isn’t an event but a process that unfolds in stages: pre-planning, early implementation and full implementation.
Pre-planning is what you do before putting the practice or policy into action. During early implementation you’re trying a new policy, program, or practice, which is followed by full implementation. It’s helpful to use these phases in your planning process.
The pre-planning phase, where you’re thinking about the essential things that you need in place to enable the implementation, is the most important. For example: what kind of training needs to be accessed? What kind of coaching models need to go into place? What kind of data needs to be collected to answer questions of reach, implementation and outcomes? What kind of organisational systems need to be changed to enable the things to be implemented?
You use implementation science at your organisation – The Centre for Evidence and Implementation (CEI). Tell us what CEI seeks to achieve and some of the projects you’re working on.
CEI is a social purpose organisation. Our focus is to improve outcomes for children and families facing adversity by helping to improve the programs and policies that serve them and their communities.
We work in the following three areas to achieve this:
- Generate evidence and then translate and disseminate this evidence to assist in the design of effective programs and evidence-informed policy.
- Build capacity within organisations to effectively implement proven programs and policies. By applying implementation science, services are strengthened.
- Evaluate programs to assess outcomes and understand how programs achieve results for service users.
CEI is two years old. We have over 30 projects on the go now, working with government and non-government agencies and intermediary agencies across Australia, in Singapore and in the UK.
These projects range from evaluations, to rapid evidence assessments on ‘what works’ in social services, to practical training and consultation in the application of implementation science, and program improvement work.
The work we do with community service agencies tends to focus on ‘service design and improvement’. We often get asked to do an evaluation first but we work with the agency to see if it would be a better use of resources – from both a client-centred perspective and an inclusion of evidence point of view – to assess and improve the service first.
Author: Nick Perini