Using Theory of Change in Prime and my PhD: Key Lessons Learned
Where did the idea for my PhD come from?
In 2011, I was working as the project manager for the multi-country, DFID-funded Programme for Improving Mental Healthcare (PRIME), which aimed to develop, implement, evaluate and scale up district-level mental healthcare plans in Ethiopia, India, Nepal, South Africa and Uganda. DFID required that we develop a theory of change (ToC) to monitor and evaluate the programme. ToC is a programme evaluation approach used by funders such as DFID, Grand Challenges Canada and Comic Relief to determine whether, and how, a programme achieves the impact it set out to achieve. My colleague, Mary De Silva, volunteered to go on a training course and came away not only with the knowledge of how to use ToC for programme evaluation, but also with a vision for how we could use ToC, in combination with the MRC framework, to better guide the development and evaluation of our mental healthcare plans. At the time, we didn't know whether this was a new concept, how it worked in practice, or whether we could really use it to evaluate process and outcomes together as we had anticipated. What we could find was limited to journals focused on evaluation methods, such as Evaluation and Evaluation and Program Planning.
That’s where the idea for my PhD was born: I set out to evaluate how ToC could be used to design and evaluate complex interventions across five low- and middle-income countries, using PRIME as a case example.
What did I do and what did I find?
Firstly, I did a systematic review. It turned out that 62 peer-reviewed papers, theses or grey literature reports had already used ToC in developing or evaluating public health interventions, but most were quite light on the details of how the ToC was developed and used.
Secondly, I explored how PRIME went about developing both a cross-country ToC and country-specific ToCs. The cross-country ToC was initially developed at a PRIME meeting held in Goa in 2011. Each country team then developed its own country ToC and used it to refine the cross-country map and to develop indicators for our evaluation plan. I followed the process we used in PRIME to develop and use ToC: I interviewed the country teams who ran the ToC workshops and reviewed meeting minutes to identify patterns across countries. I found that ToC workshops helped to develop logical ToC maps and contextualised mental healthcare plans, and to get buy-in from stakeholders. I also published the overall process and the resulting ToC map.
Thirdly, I showed how a ToC can be evaluated with both process and outcome indicators, using data from PRIME Nepal as an example. Initially, I struggled to find an analysis approach that combined quantitative and qualitative data, explored causal pathways and could deal with relatively small sample sizes. In the end I used Qualitative Comparative Analysis (QCA) and showed how it can be used with ToC to combine outcome and process indicators.
In the Qualitative Comparative Analysis, I compared service utilisation data from all 10 implementation facilities in Chitwan, Nepal, together with other factors outlined in the ToC, such as trained staff, availability of medication and community detection. The data were collected as part of PRIME. I found that both supply- and demand-side interventions are important to increase mental health service utilisation in Nepal.
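To give a flavour of how crisp-set QCA works, here is a minimal sketch in Python. The facility data are entirely made up for illustration (the condition names echo the ToC factors above, but the values are not PRIME's); the idea is simply to group cases into a truth table and flag which configurations of conditions are consistently associated with the outcome.

```python
# Hypothetical crisp-set QCA data: each tuple is one facility, scored 1/0 on
# three ToC conditions (trained staff, medication available, community
# detection) and on the outcome (high service utilisation).
# Illustrative values only, not PRIME's actual data.
cases = [
    # (staff, meds, detection, outcome)
    (1, 1, 1, 1),
    (1, 1, 1, 1),
    (1, 1, 1, 1),
    (1, 1, 0, 0),  # supply-side conditions alone: low utilisation
    (1, 0, 1, 0),
    (0, 1, 1, 0),
    (1, 0, 0, 0),
    (0, 1, 0, 0),
    (0, 0, 1, 0),
    (0, 0, 0, 0),
]

def truth_table(cases):
    """Group cases by their configuration of conditions and report the
    consistency: the share of cases with that configuration that show
    the outcome."""
    counts = {}
    for *conds, outcome in cases:
        key = tuple(conds)
        n, pos = counts.get(key, (0, 0))
        counts[key] = (n + 1, pos + outcome)
    return {k: pos / n for k, (n, pos) in counts.items()}

tt = truth_table(cases)

# Configurations fully consistent with the outcome (consistency == 1.0)
# are candidate sufficient combinations of conditions.
sufficient = [k for k, consistency in tt.items() if consistency == 1.0]
print(sufficient)  # [(1, 1, 1)]
```

In this toy dataset only the configuration combining the supply-side conditions (trained staff, medication) with community detection (a demand-side factor) is consistently associated with high utilisation, which is the kind of pattern the Nepal analysis pointed to. Real QCA software additionally minimises these configurations with Boolean algebra, but the truth-table step above is the core of the method.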
What are the lessons learned and key messages?
- ToC can be useful when interventions are complex, but running workshops is resource-intensive and the process needs a ToC champion to drive it
- A ToC can provide a framework for evaluation and assist with indicators for measurement, but it may not be feasible to measure all of them
- ToC does not provide a data collection and analysis approach, which is both a strength and a weakness: you need to work out what best fits the indicators and the kind of evidence you need to produce
- You can adapt your ToC to other similar contexts and improve on it in future versions
Since 2011, ToC has increasingly been used in global mental health research, for example by Laura Asher, Dixon Chibanda and colleagues, to plan and evaluate programmes. This allows us not only to compare whether interventions worked in different contexts, but also to start to understand how the stakeholders and researchers hypothesised they would work. We can then use the evaluation of these programmes to assess whether they worked as intended in each context, and why or why not. As we gather more evidence for the ToCs, this will help us make more informed decisions about which elements of the interventions could be transferable to similar contexts.