KM on a dollar a day

Musing on knowledge management, aid and development with limited resources

Is there such a thing as evidence-based delivery?


I started writing this blog post out of frustration at hearing, once again, the mantra in the aid world, and particularly in the UN, that we need to be doing "evidence-based policy and programming", along with discussions and laments about why we don't yet have this and what to do about it. But then this morning I saw two excellent blog posts by Kevin Watkins of ODI and Owen Barder of CGD on Jim Kim's "science of delivery", which seem to touch on some of the same frustrations.

It seems to me that at least part of why this discussion remains frustrating is that we don't agree on what evidence is, and what its role should be in policy or programming.

At first glance it might seem obvious that our policies and programmes should be "firmly based on the best available evidence", and that wherever the best available evidence isn't good enough we should be doing more research to fill the gaps. It can be tempting to jump to the conclusion that, on the one hand, we need to invest more in research, carefully designed experiments (such as RCTs) and independent evaluations, and on the other, to put in place incentives or rules that encourage or even force policy makers and programme managers to use the evidence that is available.

While all this might be useful, I don't think it will be the silver bullet that some expect. If it were really so self-evident, that would raise the question of why we don't already practice what we preach. Why isn't aid work primarily, if not exclusively, evidence-based?

Firstly, however much we invest in research and evaluation, there are limits to what we will know (see my earlier post "the truth is out there" for a more detailed discussion of the limits of what we can know). Some interventions lend themselves to rigorous experimental design or data analysis (e.g. cash transfers, medicines), while others are harder to study as experiments (e.g. policy reforms) or to research cost-effectively.

Secondly, incentives to use, or not to use, evidence also matter. It's not enough to improve the packaging and dissemination of new knowledge, or to make using knowledge part of procedure – there are many other factors, both individual and institutional, that govern whether or not knowledge is used. For example, focusing on implementation rates reduces the incentive to take the time to learn from research or other experiences. (See my earlier post "creating a demand for knowledge?" for more on the challenges of getting people to use what is already available.)

Thirdly, context matters. In most cases it's simply not enough to copy a "best science-based practice" from one location and directly replicate it in another. At best the approach will need to be tailored or adapted to the new context; at worst it may not work at all. Kevin Watkins mentions the importance of politics, and how different political structures and power relations can have an important impact on how effective a "scientifically sound" approach might be in practice – and you can't factor this into an experimental design. Owen Barder also highlights how the traditional evidence-based approach fails to deal with complexity: there are complex and dynamic relationships between the context and the programme, which evolve together in unpredictable ways, and for which a static approach to using evidence and best practice is ineffective.

But I'd add an additional dimension to the issue of context and complexity. Programmes are implemented by individuals, not just by institutions, and individual people bring their own layer of complexity to a problem, one that is hard to measure or control. Different project managers performing the same role can have different levels of technical skill, but also different personal motivations and different personal relationships with other key players in the project – and these can interact with the project in unpredictable ways. What's more, we tend to overlook the fact that someone may perform well in one context with one team yet perform poorly in another situation – so it's impossible to standardize exactly how a project team will work on a project unless we strip out all elements of judgment and unpredictability, reducing programme management to a production-line function.

Yet I don't think all is lost. There are some additional things we can do to improve how we learn and implement, beyond investing more in research. Here are a few of them:

1. Take the broadest possible view of what "evidence" means. Evidence might be rigorous research, but it might also be case studies, stories, ethnographic studies, or analysis of "big data". Take a broad view, but recognize what each type of evidence is useful for and what its limitations are. Don't overlook key areas such as political context and power analysis.

2. Do more pilots and dynamic experiments – encourage more experimentation, not only in the sense of controlled experiments but also in the sense of coming up with lots of possible ideas, trying them out, adapting them as you go, and building on those that yield results.

3. Start from, but don’t blindly copy past practices – encourage use of existing knowledge and experience – but as a starting point that will be built upon and deliberately and continually adapted.

4. Encourage ongoing collaboration and sharing of tacit knowledge between practitioners as well as sharing of explicit research or evaluation reports and results.

5. Work with and learn from beneficiaries – they will often have insights into why something does or doesn't work, or what might be done to address a problem, that an outsider cannot see.

6. Keep a diary with detailed ongoing records of what is happening on the ground, covering both internal and external factors, to generate enough material to gain useful insight into what is working and why. Encourage self-reflection by those involved in the project based on this information. You can also use it as a key source for more rigorous independent analysis.

7. Work on incentives – make sure the incentives are there to generate and use not only evidence but also personal insights and experience.

8. Adapt – rather than looking for an idealized approach that solves a problem once and for all, keep searching for improved ways to solve it that work in the context where they are applied, and keep modifying and varying your approach based on what you are learning.

I'm not sure delivery can ever be entirely evidence-based – but evidence-informed and learning-based would be a good start.


Written by Ian Thorpe

August 14, 2013 at 11:39 am

6 Responses



  2. These are very good points. Delivery and policy are almost never based on evidence and where they are, change in this direction is usually incremental and over time. Incentives can help, but the response can be unreliable. In my experience the best stimulus to Governments adopting evidence-based implementation is when development agencies provide flexible TA to encourage Governments to undertake their own research and effectively convince themselves. This takes time.

    Chris Vickery

    August 15, 2013 at 9:30 pm

  3. Ian, great post! Running through this and a few others you cited in your blog, it seems to me that we need an entirely new dictionary, as there doesn't seem to be a consensus around what 'evidence' is or should be. And in order to capture the complexity you talk about (projects implemented by people, not institutions, with people bringing in an additional layer of complexity through their own biases and motivations), we need something other than 'controlling for,' 'measuring,' and 'taking stock of' – rather sensing, probing, emerging etc…


    August 16, 2013 at 3:02 am


  5. In short, to answer the question this blog asks: there can be, and I think I have an example of evidence-based delivery in practice! I worked for two years on a £65m DFID-funded programme in Bangladesh which sought to create livelihoods with the extreme poor. Now, most NGOs in Bangladesh had never worked with this demographic before, and the majority of the 36 projects were 'innovations', so there was not much ex ante evidence. But what we as the managing agency focused on was creating an evidence-generation system and a complementary management system to enable evidence-based delivery. We created a 'Change Monitoring System' which provided ongoing real-time data on field-level changes (quantitative data collected through mobile phones, plus qualitative PRA information explaining those changes). We then worked with each of the project management teams to encourage them to reflect on their projects and adapt them accordingly. In fact, 'change monitoring' was the primary reporting system for partner NGOs (rather than inputs/burn rates etc.). As the management team in charge of the money, we not only enabled NGOs to adapt their project designs, but actively encouraged it. From all my reading and experience, this is the closest thing I have seen to 'evidence-based delivery'. I recently wrote up the model and theory behind it, and am looking to get it published somewhere. I can email it to you if you are interested?

    Christopher Maclay

    August 21, 2013 at 6:35 am

  6. Thanks for this great blog! Yes, you are definitely touching on something very important here. It also is very frustrating seeing that organizations which deliver “easy fixes” which are quantifiable (number of vaccines given, or number of shelters made…) rank higher in donor evaluations than those with long term development goals. Not to mention the difficulty working with gender issues, especially gender based violence, where so much already is taboo and hidden, and people don’t disclose information. Besides, if positive long term changes are made, so many various factors can have contributed to its result. Oh, sigh…

    I will follow this blog closely.

    The Gender Observer

    September 7, 2013 at 5:57 pm
