Archive for March 2014
Last week we ran a session in our office on “Building the evidence base, and evidence-based reporting”, which was identified as one of our priority areas of work for this year. The purpose was to unpack a little what we mean by “evidence” in UN coordination work, what we are lacking and what we can do about it.
Perhaps unsurprisingly, the biggest gap we identified was evidence of the impact of what we do. Donors have been willing to invest in UN coordination on the assumption that it will lead to better results, but now, under pressure from their own constituencies, they are starting to ask for the proof.
But what do we mean by results? Some have specifically asked us to show evidence of the impact of UN coordination on development results. Ideally we would love to show this – it is, after all, what motivates us to do what we do – but trying to prove (with hard evidence) how many children’s lives were saved, how many jobs were created, or how much economic growth occurred as a result of joint workplans, pooled funding mechanisms or regular sector meetings is a tough job – rather like looking for the impact of a butterfly’s wings on hurricane patterns.
Why is this so difficult? Well, for starters, there’s a lot of debate about whether or not aid contributes to development at all – but even assuming it does, attribution is difficult. Development is the result of the actions and resources of many players – bilateral donors, multilateral banks, NGOs, the private sector and, not least, the government – so identifying the UN’s contribution to development versus that of the other actors is very hard. Now imagine trying to determine how particular changes in the way we work together affect our ability to contribute to those results. Furthermore, the real development results of an action are only fully apparent many years after it is taken, so the impact of what we do now might only be measurable in 5–10 years’ time or more. We also don’t have a control case against which to compare – you can’t randomly choose to coordinate half of your offices, leave the other half uncoordinated, and compare the difference.
So if it’s next to impossible to answer the development impact question with confidence what can we do? A few thoughts:
1. Lower your expectations – measure what you can actually measure. Look at outputs or process outcomes rather than development impact. Look at those things where the change is more easily quantified, such as greater efficiency (e.g. fewer person-hours to do something), faster response times, reduced prices through joint procurement, reduced duplication, or greater population reach through joint work. These things can be measured and can demonstrate the value of coordination, assuming they do in fact improve – and we can and must measure them to see whether they do make things better and how big the gains are.
2. Use what we have that links coordination to development results, however limited it may be. All UNDAFs are supposed to be evaluated, and although the time frame is too short to show impact, they can look at both coordination and delivery and see how they relate to one another. Similarly, a number of evaluations of joint programmes have been done at country and global level (for example, all joint programmes funded by the MDG Achievement Fund were evaluated – a treasure trove of information if someone had the capacity to do a meta-analysis of them all). Again, these can help us determine the relationship, although they are far from the complete story.
3. Collect individual case studies that illustrate the impact of coordination and explain the chain of events through which it occurs. Case studies help show, in a real-life situation, how coordination takes place and what the potential and actual gains are. They illustrate both the challenges and the gains in a way that is tangible and credible. While they are not “scientific”, they can have strong explanatory power. The key here is to present both the successes and the failures – this contributes more to future learning and improved approaches, and is also more credible. We need to avoid the temptation of sharing only the positive, thinking that this is what donors want to hear. To overcome the limitations of individual examples, whose success may be contextual, it is important to collect many case studies from different contexts. This improves confidence in the observations, and can also be a basis for meta-analysis to look for broader patterns and lessons (see an example here of case studies on human rights mainstreaming).
4. Make a plausible case for how process impact can lead to actual impact, e.g. if joint procurement reduced vaccine prices by 10%, the same budget can vaccinate roughly 11% more children. If we halve the reporting burden, the staff time previously spent on reporting is freed up for programming (or we need correspondingly fewer staff). This is of course theoretical impact, but it clearly demonstrates the opportunity cost of maintaining the status quo, strengthening the case for change (and for investing in it).
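The back-of-envelope arithmetic behind point 4 can be sketched as follows (all figures are hypothetical and purely illustrative; note that a price cut stretches a fixed budget by slightly more than the cut itself, and that halving reporting frees only the share of time reporting actually consumed):

```python
def extra_coverage_from_price_cut(price_cut: float) -> float:
    """Extra units purchasable with an unchanged budget after a price cut.

    A 10% price cut means the same budget buys 1 / (1 - 0.10) ~ 1.11x
    as many doses, i.e. about 11% more children vaccinated.
    """
    return 1 / (1 - price_cut) - 1


def freed_time_share(reporting_share: float, reduction: float) -> float:
    """Fraction of total staff time freed by cutting the reporting burden.

    If reporting takes `reporting_share` of staff time and is reduced by
    `reduction`, the freed share of total time is their product.
    """
    return reporting_share * reduction


# Hypothetical numbers for illustration only.
print(f"10% price cut -> {extra_coverage_from_price_cut(0.10):.1%} more coverage")
print(f"Halving a 30% reporting burden frees {freed_time_share(0.30, 0.50):.0%} of staff time")
```

Even this rough arithmetic makes the opportunity-cost argument concrete: every percentage point of efficiency translates into a quantifiable amount of additional delivery.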
5. Take a look at perceptions – even though we can’t always generate solid quantitative measures of improved effectiveness and efficiency, it’s worth looking at qualitative measures. Government counterparts and other partners often have a sense, from their regular interactions with us, of whether we are working more effectively with them, reducing the burden on them, delivering better advice and so on. Ongoing dialogue and periodic feedback surveys or polls can be very informative as to how well our main clients think we are doing and whether we are improving over time. We can also ask our staff the same questions anonymously to see whether we feel we are doing better. The good news is that we already have the tools to do this, or can easily set them up.
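As a minimal sketch of the kind of tracking point 5 suggests, survey rounds can be aggregated to see whether perception scores trend upward over time (the question wording, scale and all response data below are invented for illustration):

```python
from statistics import mean

# Hypothetical responses on a 1-5 scale to a question such as
# "How effectively does the UN work with you?", grouped by survey round.
rounds = {
    "2012": [3, 4, 3, 2, 4, 3],
    "2013": [4, 4, 3, 4, 5, 3],
    "2014": [4, 5, 4, 4, 5, 4],
}

# Average score per round shows the direction of travel, even though
# the underlying measure is qualitative.
for year, scores in rounds.items():
    print(f"{year}: average score {mean(scores):.2f} (n={len(scores)})")
```

Even a simple time series like this, repeated with the same questions each round, is enough to show partners and donors whether perceptions of our effectiveness are improving.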
6. The last, and possibly the most important, point is that we need to get real with donors and the public. We need to have a hard, truthful conversation where we explain what we can and can’t say about coordination and development – particularly what we can’t say. Too often we try to please without facing the truth. In fact, most of our donors and partners are struggling with the same problem of showing the impact of their own work to a skeptical public. Maybe it would be better to work with them to figure out how to make the most of the information we have, how to educate the public on what we can and cannot know, and to share experience on how to communicate this more effectively. Ultimately we need to reassure donors and the public that their money is in good hands and that the reforms we are undertaking are making a difference, without being misleading about how much we know about the magnitude of this difference or the exact formula that delivers development. A lot of this is not just about how we measure results, but how we communicate them – something I’ve written about in more detail before.
Measuring and transparently sharing what evidence we can gather, being honest about what we don’t know, and sharing real stories and examples of our work is probably the best we can do with this difficult challenge.
A few days ago Dave Algoso posted an excellent blog (Who’s afraid of imported solutions) which looks at whether outside knowledge and models are useful for development (versus home-grown solutions) and considers how issues of power affect the exchange of knowledge, often for the worse.
I just wanted to share a few observations of my own on the intertwined nature of knowledge sharing and power:
- The debate about local versus external “solutions” is often a false one, since almost all development work revolves around the introduction and adaptation of new ideas from elsewhere. Almost no project or idea is entirely home-grown. But the role of external knowledge, and how it is acquired and used, varies enormously, with important implications. It can range from the transfer and application of specific technical knowledge (e.g. the formula and procedures to manufacture and preserve vaccines), i.e. direct copying of a solution; to the transfer of models with contextual adaptation (e.g. adopting a particular approach to or design of social security and cash transfer programmes); to simply using a methodology to support a locally created design, such as applying human-centred design approaches – this might produce a local solution, but the methodology used to develop it is often an external “solution” of its own.
- Power is critical in knowledge exchange and can impose limitations on it. One of the most obvious ways power influences the use of external knowledge is through money: if an organization from a donor country is providing aid to another country, then most likely the models and approaches it suggests will also come from that country. This might be deliberate, but it is often unintentional – if I’m a donor providing resources, I’m naturally inclined to share the knowledge I most readily have access to, which is of course my own. Aid conditionality adds to this power imbalance. Making aid conditional upon certain actions by the government makes sense for donors who want to ensure their money isn’t wasted and that the government follows what they believe is good practice in taking the necessary steps on its side to make the programme successful. It is nevertheless a form of coercion, which can lead to resentment, or to an unwillingness to “speak truth to power” and explain why the conditions imposed might not be appropriate to a given situation. It can also mean that the solutions of donors with more resources (e.g. the World Bank, DFID, the Gates Foundation) are more widely adopted than they would be if merit were the only consideration.
- Knowledge is used most effectively when it is based on demand and meets the user’s specific practical needs. This seems obvious, but is often overlooked. It means things will always go better when countries demand specific expertise than when it is foisted upon them, however well-intentioned. At the same time, waiting for demand isn’t enough: governments may not be sufficiently aware of external approaches that could be useful to them to demand them, or may not know how to ask. It’s also not always obvious when an external idea will be useful – sometimes you need external advice to see it. And in some areas there may be no government demand even when it comes from the population itself (e.g. on sensitive human rights topics). So while demand is essential, it also needs to be stimulated.
- And of course those with a greater ability to market their solutions (often Western-led experts, NGOs and advocates) will be more effective at stimulating demand for them. And the surer they seem about their solution, the more appealing it will be politically, even if the evidence isn’t there to back it up (I’m thinking here of examples such as OLPC or the Millennium Villages, which took off through the sheer conviction of their creators). Sometimes whole countries can be a marketing asset: people might want to adapt approaches from a particular country simply because their positive view of that country acts as a sign of quality, regardless of whether the tool is actually successful or applicable in the new context.
- One way to try to level the power imbalance, and also improve the chances of successfully transferring solutions or models, is South-South cooperation. This is generally more demand-driven and can be a learning partnership where both sides benefit from sharing experience; the solutions may also be more easily replicable where the contexts are more similar. The downside is that it is harder to fund when both sides have more limited capacity, both in finance and in the availability of expertise (experts are often too busy implementing their own country’s systems to have time to help someone else). Historically, much South-South cooperation has been about solidarity and politics more than mutually beneficial knowledge exchange – and sometimes with power imbalances similar to those of conventional aid.
A few thoughts on approaches to (partially) address these imbalances:
1. Making more visible the experience and models that are out there, and whatever evidence exists about them – in particular, finding ways to share more information about which approaches and models come from the South and/or how they have been adapted. There is a wealth of valuable knowledge that is largely untapped simply because it is less visible. Supporting more research into promising approaches is also important, although just because an idea doesn’t have a body of rigorous evidence supporting it doesn’t mean it shouldn’t be shared – the fact that some things are more studied than others is often itself the result of a power or financial imbalance.
2. Decoupling funding from the source of expertise. The discussion around untying procurement is long-standing in aid, but is much less advanced in the area of technical assistance. It could bring tremendous benefit if external funding were not tied to the expertise it procures, and instead allowed the most relevant, demand-driven models and expertise to be used. This could give a huge boost to South-South technical cooperation without waiting for countries to be able to pay to provide their own technical expertise.
3. For those working in aid, or in partner countries as “beneficiaries” of aid, it’s important to see models as inputs to programmes, not as the programmes themselves – i.e. we should look to identify and learn from the best models out there, but not believe the hype (others’ or our own) that we have a ready-made solution. We need to be aware of how power affects our conversations, which solutions are considered and who is listened to. We should be guided by participatory approaches in contextualizing a model or solution – accepting, rejecting or redesigning it. But even the process of participation needs to be locally designed or refined.
An interesting conclusion from all of this is that for development to be successful it has to be both locally owned and driven – but it also has to successfully integrate external ideas and knowledge. This is a challenge, since outsiders can bring in external knowledge but will struggle to understand how to apply it in the local context, whereas locals may struggle to fully embrace the new models and the need for change. Successful integration of new ideas may well be a partnership, but with the balance of power shifted more firmly towards the person seeking to change rather than the person helping. It may also be best led by someone who can straddle both worlds, i.e. someone who lives in the current situation and understands the context and challenges, but who has also taken time away to learn and experience the new possibilities and can mentally integrate the two (I wrote about this a few years ago as outside-in development). As I concluded then: “the real heroes of development come from within, but at the same time need to undertake their personal journey to absorb the new ideas from the outside and figure out how to reconcile them with their existing societies. Aid workers can be allies to help smooth this journey, but they cannot take it for themselves.”
So let’s get over our own power trip when providing support with models and ideas (or innovation techniques), not assume we know what is best, and leave that to the local heroes of change.