Show me the impact!
Last week we ran a session in our office on “Building the evidence base, and evidence-based reporting”, which was identified as one of our priority areas of work for this year. The purpose was to unpack a little what we mean by “evidence” in UN coordination work, what we are lacking, and what we can do about it.
Perhaps unsurprisingly, the biggest gap we identified was evidence of the impact of what we do. Donors have been willing to invest in UN coordination on the assumption that it will lead to better results, but now, under pressure from their own constituencies, they are starting to ask for proof.
But what do we mean by results? Some have specifically asked us to show evidence of the impact of UN coordination on development results. Ideally we would love to show this – it is, after all, what motivates us to do what we do – but trying to prove (with hard evidence) how many children’s lives were saved, how many jobs were created, or how much economic growth occurred as a result of joint workplans, pooled funding mechanisms or regular sector meetings is a tough job – rather like looking for the impact of a butterfly’s wings on hurricane patterns.
Why is this so difficult? Well, for starters, there’s a lot of debate about whether aid contributes to development at all – but even assuming it does, attribution is difficult. Development is the result of the actions and resources of many players – bilateral donors, multilateral banks, NGOs, the private sector and, not least, the government – so identifying the UN’s contribution to development versus that of the other actors is very difficult. Now imagine trying to determine how particular changes in the processes by which we work together affect our ability to contribute to those results. Furthermore, the real development results of an action are only fully apparent many years after it is taken, so the impact of what we do now might only be measurable in 5–10 years’ time or more. We also don’t have a control case against which to compare: you can’t randomly choose to coordinate half of your offices and not coordinate the other half and compare the difference.
So if it’s next to impossible to answer the development impact question with confidence what can we do? A few thoughts:
1. Lower your expectations – measure what you can actually measure. Look at outputs or process outcomes rather than development impact. Focus on changes that are more easily quantified, such as greater efficiency (e.g. fewer person-hours to do something), faster response times, reduced prices through joint procurement, reduced duplication, or greater population reach through joint work. These things can be measured and can demonstrate the value of coordination – assuming they do in fact improve – but we can and must measure them to see whether they make things better and how big the gains are.
2. Use what we have that links coordination to development results, however limited it may be. All UNDAFs are supposed to be evaluated, and although the time frame is too short to show impact, these evaluations can look at both coordination and delivery and see how they relate to one another. Similarly, a number of evaluations of joint programmes have been done at country and global level (for example, all joint programmes funded by the MDG Achievement Fund were evaluated – a treasure trove of information if someone had the capacity to do a meta-analysis of them all). Again, these can help us to determine the relationship, although they are far from the complete story.
3. Collect individual case studies that illustrate the impact of coordination and explain the chain of events through which it occurs. Case studies show, in a real-life situation, how coordination takes place and what the potential and actual gains are. They illustrate both the challenges and the gains in a way that is tangible and credible. While they are not “scientific”, they can have strong explanatory power. The key here is to present both the successes and the failures – this contributes more to future learning and improved approaches, and is also more credible. We need to avoid the temptation of sharing only the positive, thinking that this is what donors want to hear. To overcome the limitations of individual examples, whose success may be contextual, it is important to collect many case studies from different contexts. This improves confidence in the observations and can also be a basis for meta-analysis to look for broader patterns and lessons (see an example here of case studies on human rights mainstreaming).
4. Make a plausible case for how process impact can lead to actual impact. For example, if joint procurement of vaccines reduced prices by 10%, then the same budget can vaccinate roughly 11% more children (1/0.9 ≈ 1.11). If we reduce the reporting burden by 50%, then half the staff time previously spent on reporting is freed for programming (or we need fewer staff for reporting). This is of course theoretical impact, but it does clearly demonstrate the opportunity cost of maintaining the status quo, strengthening the case for change (and for investing in it).
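The procurement arithmetic above can be sketched as a quick back-of-the-envelope calculation. All figures here are hypothetical, chosen only to illustrate the point; note that a 10% price cut stretches a fixed budget about 11% further (1/0.9 ≈ 1.11), slightly better than a simple 10%:

```python
# Back-of-the-envelope: how a joint-procurement price cut changes
# the number of doses a FIXED vaccine budget can buy.
# All figures are hypothetical illustrations, not real programme data.

def doses_affordable(budget: float, price_per_dose: float) -> float:
    """Number of doses a fixed budget can buy at a given unit price."""
    return budget / price_per_dose

budget = 1_000_000            # fixed procurement budget (hypothetical)
old_price = 10.0              # unit price before joint procurement
new_price = old_price * 0.9   # 10% price reduction through joint procurement

before = doses_affordable(budget, old_price)  # 100,000 doses
after = doses_affordable(budget, new_price)   # ~111,111 doses

gain = after / before - 1
print(f"Additional children reachable with the same budget: {gain:.1%}")
# prints "Additional children reachable with the same budget: 11.1%"
```

The same pattern applies to the reporting example: halving the reporting burden frees half of the time currently spent on reporting, not half of all staff time, which is why stating the baseline matters when making these plausibility arguments.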
5. Take a look at perceptions – even though we can’t always generate solid quantitative measures of improved effectiveness and efficiency, it’s worth looking at qualitative measures. Government counterparts and other partners often have a sense of whether we are working more effectively with them, reducing the burden on them, delivering better advice, etc., based on their regular interactions with us. Ongoing dialogue and periodic feedback surveys or polls can be very informative about how well our main clients think we are doing and whether we are improving over time. We can also anonymously ask our staff the same questions to see whether we feel we are doing better. The good news is that we have the tools to do this, or can easily set them up.
6. The last, and possibly most important, point is that we need to get real with donors and the public. We need to have a hard, truthful conversation in which we explain what we can and can’t say about coordination and development – particularly what we can’t say. Too often we try to please without facing the truth. In fact, most of our donors and partners are struggling with the same problem of showing the impact of their own work to a skeptical public. Maybe it would be better to work with them to figure out how to make the most of the information we have, how to educate the public on what we can and can’t know, and to share experience on how to communicate this more effectively. Ultimately we need to reassure donors and the public that their money is in good hands and that the reforms we are undertaking are making a difference, without being misleading about how much we know about the magnitude of this difference or the exact formula that delivers development. A lot of this is not just about how we measure results but how we communicate them – something I’ve written about in more detail before.
Measuring and transparently sharing what evidence we can gather, being honest about what we don’t know, and sharing real stories and examples of our work is probably the best we can do with this difficult challenge.