KM on a dollar a day

Musing on knowledge management, aid and development with limited resources

Planners, evaluators and entrepreneurs


Last week I attended part of NYU’s Development Research Institute’s annual conference, entitled “Debates in Development: The Search for Answers”.

The morning session was great, with a particularly lively and interesting discussion on different approaches to development. The highlight was a debate on the Millennium Villages Project, which was much more interesting and surprising than it sounds. (Let’s face it, we’re probably all a bit jaded with discussions about the MVPs, especially if you work in the UN, since many people erroneously believe that the MVPs form a major part of the UN’s approach to poverty, when in reality they are only a small experimental part of our overall work, with relatively little UN funding or support.)

Bill Easterly set the stage by introducing the debates as a comparison between smart, expensive decision-making systems (with which he was also lining up the afternoon’s discussion on randomized controlled trials, or RCTs) and cheap, dumb solution-finding systems, by which he means experimentation and success or failure based on market feedback. This is a new framing of his idea of searchers versus planners in development, but one that looks not only at how development projects are planned but also at how they are evaluated.

In the MVP debate that followed, instead of having Sachs come into the lion’s den, Stewart Paperin of the Open Society Institute, a major funder of the MVPs, gave the approach a spirited defense against Michael Clemens and Bernadette Wanjala, who have both publicly criticized the MVPs for their lack of transparency and rigorous evaluation, and for overstating their results.

What was interesting about the debate was that Paperin skillfully defended the MVPs on the grounds that they were, from his perspective at least, an investment in a practical, even entrepreneurial experiment: something that wasn’t certain to work, but a good chance to try something different in order to learn more for the future.

In the end, the most interesting aspect of the conference for me was the debate around the nature of actionable knowledge in development, and what we can trust as a basis for making decisions on development funding and action. This is both a scientific and a practical question.

The “debate” has been set up in at least three different ways:

In his book “The White Man’s Burden”, Easterly talks about planners versus searchers, i.e. those who think that a top-down set of proven approaches can work in development (a la Sachs) versus those who believe that all solutions are local and that people need to experiment and find their own solutions within their own context.

In his talk, Clemens spoke instead about the goals movement, i.e. those who believe they already have sufficient evidence for their solutions and have the vision and passion to take them to scale, versus the evaluation movement, i.e. those who believe that we need to rigorously measure what we do to know whether it works and how to improve it.

But in his introduction Easterly also spoke of a third debate: rigorous, expensive scientific measurement versus low-cost experimentation and market feedback. His case is that evaluation is costly, and its results are often not decisive or generalizable, so it might be more effective to use feedback from beneficiaries as a way of assessing what works.

Three big take-aways from the discussion were:

1. We might know some things about what works in development, but there is a lot we don’t know. Even when we do know something, it’s not a guarantee that it will work without a hitch in another context.

2. Evaluations (and tools such as RCTs) can tell us a lot about what works, but they are expensive to run, and their results are not always easily generalizable or actionable.

3. But if you don’t measure your project in some way, how will you know whether it works? And how will you improve it?

What strikes me here is that in a way all of these different perspectives have value, but their proponents have a difficult time understanding each other and figuring out how to best combine their approaches.

Wouldn’t it be good if the goals people (those who have a clear vision and the passion to pursue it) would create momentum and raise resources around their projects, whether these are large-scale plans developed from extensive research and experience, or smaller-scale hunches and experiments, but as entrepreneurs rather than as top-down planners? At the same time, wouldn’t it be good if these projects collected data from the outset to better track progress and, where feasible, tried multiple approaches or variations on an approach so they could compare them and learn from the differences?

Similarly, if the results could be made public, then funders, beneficiaries and even academics could see and independently assess them, and project managers could use them to modify their programmes and identify whether they should be scaled up or shut down.

Lastly, but perhaps most importantly, the missing element in the evaluation of development projects is effective and ongoing beneficiary feedback. Entrepreneurs, unlike aid planners, try lots of different things, some of which succeed massively while others fail dismally; the difference is that their success is measured by the feedback they get from consumers who buy their product. In the aid world we don’t yet have effective ways to get this feedback, so we rely instead on evaluation, which rigorously but only selectively assesses the impact of our work, and on communication, which sells our story of success and continued need to funders who are far removed from the experience of those the programmes are designed to assist. And evaluation and communication are often at odds.

The next big focus on measurement will hopefully be getting real-time feedback from beneficiaries, which can be fed back into projects to improve them, and shared transparently with donors and the public so they can better judge what and whom to fund, all at relatively low cost and with greater clarity than expensive evaluation.

Formal evaluation (and experimental project designs such as RCTs) can then focus on those areas where getting the specifics of programme design right matters most for cost and impact, and where the results are likely to yield insights that can be generalized beyond a specific programme.

(For a more complete account of the conference, check out Tom’s blog here and his curated Twitter stream here.)

Written by Ian Thorpe

March 29, 2012 at 9:15 am

8 Responses


  1. Thanks for the excellent recap Ian. The roles of evaluation, failure and feedback are fascinating, both given the Sachs/Easterly-type debates and the frustrations of seeing failure happen over and over. I recently wrote a post looking at the combined effect of poor evaluation methods and a requirement for rigid up-front designs, and the outcome of “achieved failure”. http://theborrowedbicycle.ca/2012/03/the-development-sector-is-achieving-failure/

    I would be interested in your thoughts as I have seen this pattern a few times but don’t have extensive experience with several projects or project evaluations.

    Thanks again for your regular posts and contributions!

    Ben Best

    March 29, 2012 at 9:42 am

  2. […] Read more from the original source: Planners, evaluators and entrepreneurs […]

  3. What we liked most (and it was another big “take-away”) came from Rugasira and his business model at Good African Coffee. He invests 50% of his company’s profits back into the communities that work for him, to improve the livelihoods of families and just about everyone involved. Not only was his talk inspiring and entertaining (booing at Kony 2012, anyone?), but he also made a solid point that Africa doesn’t need handouts to overcome poverty. Instead, Africa needs access to international markets to grow and prosper. It makes sense, but it’s not something that people always think about because charity can sometimes be a useful tool.

    But otherwise, this is a great recap!

    IIRR US Office

    March 29, 2012 at 3:52 pm

    • Yes, unfortunately I missed this session which I heard was great.

      Maybe a new role for “aid” is not handouts, but rather work to help foster developing-country markets and, in particular, to nurture local entrepreneurs and connect them with capital, expertise and customers, whether for economic or social enterprise.

      Ian Thorpe

      March 29, 2012 at 4:39 pm

      • You might be interested in Building Markets (formerly Peace Dividend Trust until today). After seeking out and investing in local entrepreneurs, they connect them with new markets: http://buildingmarkets.org/

        Ben Best

        March 29, 2012 at 5:29 pm

  4. Ongoing beneficiary feedback really is the missing key in evaluating aid projects. However, the technology and the desire are out there if we can harness them. Several organizations are working to do exactly this: for example, we are currently working on using crowdsourcing in Uganda to create this “feedback loop” and hope it will be able to expand across borders. Some background info on this project: http://www.aiddata.org/content/index/Innovation/uganda-crowdsourcing

    AidData

    March 29, 2012 at 4:17 pm

  5. Ian, I certainly agree with the notion that “the missing element in evaluation of development projects is effective and ongoing beneficiary feedback”, and that “The next big focus on measurement will hopefully be in the area of getting real-time feedback from beneficiaries which can be fed back into projects to improve them (…)”.

    So, if we assume that in the development sector there is very little experience in how to get this right (a fair assumption, I’d say), why not turn to other organizations that are really good at it and ask for their support? They could be from the business sector (a Zara, perhaps…?), market research firms, perhaps a rare government department, or wherever they happen to operate. If it came from the private sector, it’d open the door to an interesting initiative for corporate social responsibility.

    Many thanks for the excellent wrap-up!

    Manuel

    Manuel Acevedo

    March 29, 2012 at 4:24 pm

  6. […] I also seem to recall – but cannot now find the right quote or link – the suggestion that aid practitioners could seek inspiration from how success and failure is determined in the business world. (The DRI debates considered whether an RCT would be a suitable evaluation mechanism for the iPhone game Angry Birds – not the best example imho!) (Update 27/06/12: found that link, it was Ian Thorpe blogging here.) […]

