KM on a dollar a day

Musing on knowledge management, aid and development with limited resources

Archive for the ‘smartaid’ Category

A flowering of approaches to complexity and development?

with 4 comments


We are at an important juncture in development at the moment, with the Sustainable Development Goals due to be finalized later this year, and with discussion now turning in full swing to what needs to happen to make them a reality, including a lot of discussion around how to make the UN (and the aid and development sector more broadly) “Fit for Purpose”.

A lot of the discussion on the SDGs is taking a familiar form, looking at intergovernmental monitoring and follow-up mechanisms, institutional arrangements and structures within the UN, financing mechanisms and partnerships. But at the same time there are quite a few groups doing some soul-searching about whether our system of goals and targets, development plans, project timelines and monitoring is really working.

A whole range of initiatives and approaches are emerging that could be loosely grouped under the umbrella of “complexity”, i.e. the idea that development is a complex adaptive process and thus top-down long-term planning doesn’t really work – instead we need to be more nimble and iterative in how we respond to circumstances, and push the system in the right direction rather than developing a detailed master plan for a perfectly designed future.

To understand more about what complexity is and how it applies to development I’d highly recommend this recent blog by Owen Barder which poses a number of important challenges as well as some suggested ways forward for development work based on the idea that we are in fact intervening in a highly complex system.

But how are people putting this into practice?

In fact there are quite a few initiatives that respond to the challenge of complexity in development one way or another, whether or not they explicitly use the “C” word. Here are a few examples:

Problem Driven Iterative Adaptation (PDIA), an approach developed by Matt Andrews at the Center for International Development at Harvard. Building on the PDIA principles, a group of individuals from a range of organizations including the World Bank and ODI launched the “Doing Development Differently” manifesto.

A number of other organizations have adopted a “human-centred design” approach to innovation in development based on the principles developed by IDEO and outlined in their human-centred design toolkit. This approach, also referred to as design thinking, comes in different variations such as Stanford’s dSchool.

Another is the Cynefin framework developed by Dave Snowden for knowledge management and decision support. It is not specific to development but is being used (or at least experimented with) in a number of government and development sector projects.

This past week there was a discussion on KM4DEV about applying the principles of agile software development, and the agile manifesto  to international development. There were quite a few replies from people who were already using different variants of this approach in their project management, mainly but not exclusively from ICT for development projects.

The UN and indeed many other development organizations are launching innovation teams, units, labs, networks, etc. UNICEF was one of the early movers in this and has already gone to a degree of scale, including setting up a global innovation centre and, last week, launching a global innovation fund. UNICEF and a number of other development partners adopted the UNICEF innovation principles.

Perhaps one of the older interventions in this discussion from the aid sector was Bill Easterly’s critique of the top-down approach to aid, as well as of the MDGs, outlined in The White Man’s Burden and the follow-up The Tyranny of Experts. The idea here being that we need searchers (i.e. those who set out to find and build on locally applicable solutions) rather than planners (those who bring a toolbox of known “scientific” solutions that are imposed from above).

And there are probably many more I haven’t listed here.

One question you might reasonably ask is how are all these things related?  Which one of these approaches am I supposed to apply in my work, or at least what are the relative strengths and weaknesses of each?

When looking at the increasing number of different approaches we can see they have similar elements – namely a focus on the importance of local context, on designing projects with users/beneficiaries/partners, and on running small experiments and quickly iterating them based on experience on the ground. But they also have differences in emphasis, methodology and even ideology. Some are more strongly grounded in theory, while others are very practical, and none currently has the upper hand in gaining acceptance. While they are often connected to one another and exchange ideas, they are also separate initiatives taking their own paths, often with strong groups of followers. Sometimes they even struggle to find a common language to talk to one another (as nicely explained in this blog by Duncan Green about a cross-disciplinary meeting on the UNDP’s “Finch Fund”).

But how do we take on complexity in development if there is no unified theory and agreed approach, and no strong body of evidence on the merits of the different approaches?

I’d argue – from the principles underlying complexity – that this diversity of philosophies and approaches is actually a good thing. Since we don’t know the one best way to tackle complexity, and indeed the aid community is only starting to wake up to the need for and potential of complexity-based approaches, it only seems right that we should be using multiple parallel approaches (or experiments) that live in their own contexts.

The fact that there are multiple overlapping and competing threads means that people are starting to take the issue of dealing with complexity seriously and are searching for ways to address it.

What I hope to see in the coming years is an increasing attention to complexity, and with it a further blooming of different approaches and variations, and opinions as each technical discipline, think tank, activist group and organization seeks to put their own spin on it.

So what if these ideas are not entirely consistent and congruent? The principles of complexity thinking call for multiple experiments together with variation and adaptation, and so we should welcome the multiple approaches to dealing with complexity that are emerging and evolving. We can hope that these will continue to evolve and improve, and that the stronger, more promising ones will succeed while some of the less useful ones disappear. But while I expect that the number of approaches and initiatives will reduce as this area of thinking matures, I think we shouldn’t expect to get a single unified approach that is widely agreed and universally applied – after all, it’s a complex world out there!

Footnote: here is an older blog post I wrote on complexity back in 2011 when I was trying to explain to myself exactly what it is and what it means for development – Who’s afraid of complexity in aid?

Written by Ian Thorpe

May 15, 2015 at 11:49 am

Posted in smartaid

Delivering development through case studies

with 2 comments


(picture: from @andyR_AGI twitter feed)

I just came from a two-day meeting in Berlin launching the “Global Delivery Initiative” which is being spearheaded by the World Bank and the German technical cooperation agency GIZ.

You are probably wondering what the Global Delivery Initiative is – I did (one wag asked if it was something about safe childbirth). While it sounds like a way of doing programming, it is actually about building an alliance and common knowledge base between development organizations around what works in development. It is related to, but different from, the recently launched “Doing Development Differently” initiative (about which I shall blog separately).

The key insight driving this initiative is that while there has been a lot of research and evaluation on the “what” of delivery and a lot is known about what approaches “should” work, especially around technical issues and programme design, much of what goes wrong in development is related to the “how” i.e. how programmes are actually delivered on the ground in the messy reality. Relatively less is understood about what makes some similarly designed projects successful while others fail.

Many of these implementation challenges are messy human problems and don’t lend themselves easily to experimental design or traditional research methods. In fact much is based on “tacit” knowledge that lies in people’s experiences (here is one of my first blog posts “the truth is out there” which explains this in more detail).

The key approach being taken by the Global Delivery Initiative to capture this tacit knowledge and make it shareable and reusable is the development of case studies on the “how” of delivery. The aim is to develop case studies that are of high quality, focus on the how rather than the what, and follow common standards and formats to make them shareable within, but more importantly across, development organizations. The initiative is proposing to create a global online repository of delivery-focused case studies using a somewhat standardized template and methodology. The aim would be to collect and share examples of how delivery challenges have been overcome on the ground to build up an evidence base of what works – not as “best practice”, but as a resource of example approaches that could be adapted to local context and, longer term as the number of examples grows, as a resource that could be analyzed and mined to spot common themes and solutions to delivery challenges.
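To make the idea of a common, shareable format a little more concrete, here is a minimal sketch of what a machine-readable delivery case study template might look like. The field names and the small search helper are my own illustrative assumptions, not the actual GDI template or methodology – the point is simply that a shared structure is what would let cases be compared, filtered and mined across organizations.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DeliveryCaseStudy:
    """Hypothetical shared template for a delivery-focused case study.
    Field names are illustrative assumptions, not the actual GDI format."""
    title: str
    organization: str          # e.g. "World Bank", "GIZ", "UNICEF"
    country: str
    sector: str                # e.g. "health", "water and sanitation"
    delivery_challenge: str    # the implementation problem faced (the "how", not the "what")
    actions_taken: str         # what was actually done on the ground
    outcome: str               # what happened, including what did not work
    lessons: List[str] = field(default_factory=list)
    tags: List[str] = field(default_factory=list)

def cases_about(repository: List[DeliveryCaseStudy], keyword: str) -> List[DeliveryCaseStudy]:
    """Find cases whose delivery challenge or tags mention a keyword.
    A shared schema is what makes this kind of cross-organization search possible."""
    keyword = keyword.lower()
    return [c for c in repository
            if keyword in c.delivery_challenge.lower()
            or any(keyword in t.lower() for t in c.tags)]
```

The specific fields matter less than the principle: a common schema is what turns a pile of individual narratives into a searchable, comparable evidence base.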

The aim of the meeting was to present the approach, together with case study examples from participants, to help refine it, as well as to get more contributing partners on board and to talk through how to make the initiative successful – including what needs to be done collectively by partners and what needs to be done inside individual organizations to strengthen their ability to create and effectively use case studies. However, it was clear at the meeting that “case studies” mean different things to different people, have different uses and employ different approaches – and there is a balance to be struck between coming up with a shared approach that allows cross-organizational learning and the specific needs of individual organizations. There was a marketplace of example case studies from which the diversity of approaches was clear – some were simply a research approach to understand a problem, while others were much more focused on documenting a programme, and others on how a problem had been addressed.

A few of the key issues raised in the meetings were:

  • Who identifies the “problems” that should be documented? The general feeling was that this needed to be done in a participatory way with beneficiaries and country team leaders rather than being top-down. Importantly, the case studies should focus on problems and not on projects.
  • What is the difference between case studies and more formal evaluation techniques? There was some confusion in the discussion, but the general sense was that these are complementary activities, not alternatives, and that case studies shouldn’t seek to be as rigorous and comprehensive as evaluations.
  • How to make use of existing knowledge-sharing techniques that foster self-reflection, such as appreciative inquiry or after-action review, in the development of case studies – the currently proposed methodology didn’t fully draw on these, and self-reflection is an important part of a learning case study.
  • How to incorporate learning from failure – there was general agreement that we should, but also that this is extremely challenging in publicly funded development work. Case studies focusing on failure might not be feasible, but including lessons from the less successful aspects of a programme, or a comparative analysis across different locations to identify the determinants of success or otherwise, was seen as valuable. A more challenging, but also more fruitful, approach would be to design more experimental, iterative approaches to problem solving in development from the outset, which may have a higher risk of failure, but also greater potential benefits and greater learning.
  • Who can/should write case studies? It was felt that not everyone has the right skill set to develop good case studies. There is also a balance to strike between using insiders who know the context and outsiders who can be more impartial and may have better documentation skills. Case studies are not objective in the same way as evaluations are, however, and they do need to draw on the reflections of those involved in the case. We also heard from Jennifer Widner that Princeton is developing a MOOC for writing case studies, which will be interesting to check out.
  • Case studies are very labour-intensive – and there was some discussion about weighing their value against the amount of time they take to develop (one estimate was that a single case study takes at least 350 person-hours of work).
  • There are a lot of organizational challenges in the use of case studies – partly because use and dissemination is often not fully thought through at the beginning of the process – but more importantly because organizational practices and incentives are not aligned to support the use of tacit learning in our work (a much larger issue that I’ve discussed previously on this blog), and because our tendency to present everything as a success and as an advocacy opportunity to funders hampers our ability to self-critically reflect and learn.
  • My final observation was that, unsurprisingly, it was pointed out that case studies are only one of a range of approaches to fostering learning from experience and sharing delivery-related knowledge. It was felt that other approaches such as communities of practice, peer learning and support, and innovation approaches such as human-centred and participatory design also needed to be part of any initiative to improve delivery.

Going forward there was strong support for learning more about delivery challenges and sharing and applying that knowledge, although the nature of the partnership and who would do what was a little less clear (understandable since there was a diverse array of partners and the meeting only lasted a day and a half).

The World Bank and GIZ have started producing case studies and are planning more in 2015 along with an online repository – they are now trying to get others to sign up and do the same.

There was agreement that case studies should be complemented by other work – and participants agreed to do more to share with each other both what they learn about delivery and also how they learn about delivery, in terms of the tools, techniques and approaches they are using and how well they work.

Participants also agreed to advocate within their organizations for the idea of learning from experience on the how of development and more generally doing development differently.

All told, this looks like a promising initiative to improve sharing of tacit knowledge between development organizations – and one which I need to follow-up on to see if and how UNICEF could be involved. But for it to be successful it will need more partners, and notably absent were local and southern partners who would be key to any learning on delivery. In addition while case studies can form a good basis for this learning, the initiative will need to go beyond collecting and sharing case studies to focus on the idea of fostering learning from experience in development – both in identifying and sharing that learning – but also in overcoming some of our institutional challenges in actually applying it – and ensuring that there is continual learning in how we do our work.

Finally I mentioned at the beginning of the blog that this was related to “Doing Development Differently”, an initiative which seeks to rethink how we do development. While the global delivery initiative can contribute to that – I think the groups need to keep asking themselves whether they really are contributing to doing development differently by collecting new, more grounded information on what works and in a way that helps inspire and inform action, but doesn’t direct it – and avoid creating another knowledge repository that goes unused, or is used as a set of development recipes from donor to beneficiary (see this old blog of mine on why “best practice” databases are not the way to go), or a way of telling stories that make our (current traditional) work look good without real learning.

So let’s give it a try!

Written by Ian Thorpe

December 16, 2014 at 8:45 am

Making it up as we go

with one comment

In past posts I’ve talked about the problems of relying too much on rules, guidelines and so-called “best practices” in development work. And while big organizations like the UN still rely too much on these, I’m also starting to see some taking quite an opposite view.

With the trend towards innovation work, and discussions around complexity in development, an increasingly popular approach is as follows: get together a group of people, preferably including some of your beneficiaries, quickly brainstorm some ideas (preferably involving new technologies), then try them out and iterate as you go along, adapting your approach to what you find. Quite often these projects are attractively presented, communicated and shared at conferences and via social media, generating a lot of buzz – but it is much less frequent that they are also studied and evaluated over time, with the potential learning from them incorporated into “mainstream” development thinking.

Innovative and iterative approaches are exciting, and they are built upon the recognition, often missing in more traditional aid programmes, that each situation is unique and that we rarely know exactly what will work. But it’s important that they be seen as complementary or additional approaches to development work, rather than a new, better approach that will replace the old.

Here are a few counter-arguments as to why we shouldn’t just throw away the rule book, programme procedures, toolkits and case studies in favour of “making it up as we go along”:

1. Not all development problems, or at least not all aspects of them, are fully “complex” in nature, and even some of those that are complex have been extensively programmed, researched and evaluated. This means that in many areas we already have a fairly good idea how best to tackle them, and have probably also gained some costly experience about how not to handle them. In these cases it might be less risky and more productive to do what we know works rather than trying to do something new.

2. Some areas of work actually rely on standardized, predictable procedures to make them work, even if these aren’t always the most efficient or user-friendly. An obvious example is financial management systems, which you wouldn’t want to experiment with or reinvent every time, but this might equally apply to other technical processes such as vaccine handling or shelter construction, where what is being done is highly technical and expert, but good practice is built into the standard procedures to ensure consistent quality.

3. Sometimes consistent (non-innovative) approaches are needed for reasons of transparency and fairness i.e. to ensure that everyone knows how the system works and that they will be treated according to consistent rules (even if these are recognizably imperfect). This may even be a legal requirement that needs to be taken into account e.g. to be demonstrably and accountably equitable in how resources are distributed to the poor from a programme.

4. While every individual situation is unique in some regard, and it is often not clear which approach is best to take, few situations are completely unlike anything that has gone before – there are usually ideas and experiments that can be adapted from other countries or from other sectors. It’s therefore good to use documented, and possibly well-studied or evaluated, approaches from elsewhere as a starting point for a new innovation rather than designing from scratch or only from local knowledge and context.

5. It’s hard to deliver innovative and adaptive programmes at scale since they are a moving target and since there may be an insufficient understanding of how and why they work and what aspects of them are replicable or scalable. Also, not everyone is a natural innovator, even if given training and tools, and innovation often needs a lot more hands-on support, so there might be limits to how far we can expect public servants and aid workers to be productive working in this fashion.

Of course, if we want to find breakthroughs in development it can be hard to do so if we only start out from established approaches. Sometimes counterintuitive and radical ideas are needed to break through our existing paradigms of thinking, which might limit us to incrementally improving solutions that are only partially working at best. Duncan Green has an excellent blog post about this notion drawing on Robert Chambers’ work and giving “Community Led Total Sanitation” as one example of a highly successful new approach that broke the mold of existing thinking (shameless link to something I worked on in UNICEF about this).

So how do we balance the two approaches?

I’d argue that we need two parallel but interacting approaches to our development work:

One is the more mainstream work of programmes that are designed largely around existing “scientific” knowledge and experience, often codified into rules, procedures, tools and case study examples. This is mainstream development programming, which is also attempting to scale up known successful approaches. These programmes need to be continually evaluated and studied, and the knowledge from experience and formal study fed back into the system to incrementally improve it. And the tools and procedures need to be applied thoughtfully, taking into account local context and specific challenges and opportunities as they arise, as well as the feedback received – but within an overall agreed approach.

The other is an experimental, iterative and interactive approach that deliberately tries out new ideas, including ones that seem counterintuitive or unlikely. One that tests commonly held assumptions. One that is willing to discard unsuccessful approaches or adapt them, fine-tuning or even totally redirecting efforts in the course of the programme in order to find out what works based on real-time feedback. This approach should also seek to try out multiple parallel experiments on the same project at the same time – even if they seem mutually contradictory. These “experiments” could be in new areas where little knowledge exists, but they should also be used to try out radical new ideas in areas that are well trodden.

But these experiments need to be carefully documented and studied (and also networked) in order for the insights from these experiences to be internally digested, shared and reflected upon by others. The aim here is to identify approaches, or elements of them, that might be susceptible to being scaled up – or aspects that might be adapted or turned into tools, procedures or approaches that can be used by others. Adaptations of the same basic innovative approach might also need to be tried in different contexts to better understand what makes them successful (or not). The innovations can then be turned into something that informs and improves mainstream thinking.

So in conclusion, I’d argue for a portfolio approach to development where perhaps a major part of the work is relatively mainstream – consistent and only evolving slowly over time as approaches are refined, but with a smaller, but significant (and certainly larger than at present) segment that is deliberately trying to break new ground through an innovative and adaptive approach – but with good systems to connect the two such that breakthroughs developed by the innovation stream can be tested and if suitable incorporated into mainstream thinking, even if they upend current thinking and approaches.

Written by Ian Thorpe

September 12, 2012 at 4:52 pm

Posted in rants, smartaid

How am I doing?

with 4 comments


Summary: If we are trying to measure the results of knowledge management work, or any type of development work for that matter, we could do worse than ask our clients what they think of what we are doing.

Some years ago, when I was interviewed for my first “real” KM job, one of the questions I was asked was “how will you measure the results of what you are doing?”. At this stage we didn’t even know what we would be doing, so I gave an instinctive answer – but one I’d at least partly stand behind now. I told the interviewer that the best way to know whether the knowledge products and services we were doing were any good would be to ask our clients what they think – on a regular basis.

We are often struggling to find ways to measure the results of our work. We are looking to measure impact, but often this requires complex, potentially costly evaluation and the identification of a clear theory of change. If we aren’t able to do this we often fall back on measures of output such as budget spent, work plan tasks implemented, supplies delivered, workshops carried out, website downloads and the like, which tell us about our efficiency in getting things done, but not about the effectiveness of what we are doing.

But if we can’t easily measure impact, how about going half way? While beneficiary/partner feedback isn’t the same thing as “impact” it can be a very valuable proxy to look at what you are doing and where you need to improve or put additional focus. You can ask about their perceptions or ratings of what you do, as well as asking for their direct feedback on what they need, and what they want you to do differently.

The biggest criticism of asking for feedback is that what you get back is perceptions of you and what you are doing, rather than what you are actually doing, and that the people you are asking might not understand your work well enough to comment on it, or might not value the “right” things.

While to some extent this can be true, knowing what people think about your organization, your image, what you do and what you should be doing can still be very illuminating. If people don’t know who you are, or misunderstand what you do, or think you are doing a lousy job when you think you are not then you might have a communication problem. And what good is it doing great work if no-one knows about it? Not just for your own ego, but also so you can build goodwill in your “client” populations for the work you do in order to make your work easier, or so you can have something to show to donors on how what you do is responsive to the needs of those it is supposed to help.

But lack of recognition or negative feedback isn’t just about how well you communicate. It might well be that you, and what you are doing is not seen as relevant or high quality by the people you are supposed to serve. If they don’t know about you and your work, it might well be because you are not reaching them or having any meaningful impact on their lives (whatever your monitoring statistics tell you). If they don’t like what you are doing, it might be that what you are doing doesn’t meet their needs, or that the way you are doing it isn’t respectful of them.

Asking for feedback reminds us that ultimately we are there to serve our beneficiaries (or “clients”) and to a large extent it’s they who determine whether or not we are doing a good job. Asking for feedback also has the added benefit that it can help build trust by showing that we value the opinion of those we are helping rather than simply deciding what is best for them, and it can also help elucidate important information about their aspirations, priorities and the realities they face which we can easily overlook in how we design and execute programmes.

There are a variety of means of collecting feedback, including formal surveys, phone polls, in-person interviews, focus group discussions, suggestion boxes, etc. The correct tool will depend on your audience/clients, what you want to know and the resources you have to do the work. Simple survey questionnaires and suggestion boxes can be a relatively simple and inexpensive way of collecting data – but if they highlight an issue you might need to use face-to-face questionnaires or interviews to really probe and understand it in depth.

You can also develop standardized tools for collecting feedback which can be used to track performance over time, and which could be used to compare different services or programmes with each other (or similar programmes across different locations).
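As a rough illustration of what such a standardized tool might enable, here is a small sketch that tabulates survey-style ratings into an average score per programme per period, so services can be compared and tracked over time. The 1–5 rating scale, the programme names and the data layout are assumptions made up for this example, not a recommended instrument.

```python
from collections import defaultdict
from statistics import mean

# Each response: (programme, period, rating on an assumed 1-5 satisfaction scale).
responses = [
    ("community health workers", "2012-Q1", 4),
    ("community health workers", "2012-Q2", 5),
    ("community health workers", "2012-Q2", 3),
    ("water point rehabilitation", "2012-Q1", 2),
    ("water point rehabilitation", "2012-Q2", 4),
]

# Group ratings by programme and period.
scores = defaultdict(list)
for programme, period, rating in responses:
    scores[(programme, period)].append(rating)

# Average rating per programme and period - comparable across programmes
# and trackable over time, provided the same questions are asked each round.
for (programme, period), ratings in sorted(scores.items()):
    print(f"{programme} [{period}]: mean rating {mean(ratings):.1f} (n={len(ratings)})")
```

The value of a standardized instrument is exactly this comparability: the same questions asked the same way make trends and differences between programmes visible, even with very simple analysis.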

But one word of caution: if you ask for feedback, you also create expectations – in particular that you will share the feedback you received, or at least a summary of it, even if it isn’t positive, and that you will take action to respond to any negative feedback. If you don’t do this, then next time you ask you won’t get any feedback, or worse, you will have damaged your reputation and increased the cynicism of those you surveyed about your sincerity in listening to them and “really” helping them.

Aid agencies are not particularly good at systematically seeking feedback from their beneficiaries, or from partners who might be intermediaries in their work, but there are a few encouraging signs. For example as part of its ongoing reform process the UN recently surveyed Programme Governments and partner NGOs about their views of the UN Development system and some of its coordination mechanisms and initiatives, and published the results (see here and here) – I hope we will now also see the next round of reform building on some of this feedback.

Digital technologies also make it easier and cheaper to collect and analyze this data than ever before through use of tools such as help lines, SMS polling etc. These can potentially reach large populations that would have been costly and logistically difficult to survey using traditional survey methods, and can also be more quickly tabulated.

So let’s not forget who we work for and regularly ask them what they want, and how we are doing both as an input to our planning and as a measure of our performance.

Written by Ian Thorpe

July 12, 2012 at 11:45 am

The long and winding road to evidence based development

with 2 comments


There has been quite a bit of discussion online about the Financial Times article “How Aid got Smarter” featuring an interview with UNICEF’s Executive Director Tony Lake.

The article makes some important points about the need to improve the use of evidence in making decisions in aid, in particular in discovering what works and what doesn’t, admitting it and acting on it.

What is perhaps a pity about the article is that it can be read to imply that until recently aid decisions were largely faith-based, but that now, suddenly, at last, the role of science and evidence is being taken seriously in development. As usual the reality is a bit more complex than that.

Discussions around the need to make development work more evidence-based have been around as long as I’ve been working in development (and probably a lot longer than that). And the progression towards improved use of knowledge and evidence in development often seems like a case of two steps forward, one step back.

Over my past 20 or so years working in aid some notable improvements in the attention to evidence include an increased investment in and focus on evaluation resulting in more professionalized evaluation departments with greater resources and thus more and better evaluations; greater investment in supporting statistical data collection including in agreeing on harmonized standards and common sets of indicators to track over time; greater attention in various forms to supporting better internal communication and knowledge management to help staff have better access to, and make better use of available development knowledge. There are probably many others.

But many challenges remain. A few of the most thorny (and recurring) challenges in using knowledge in development work seem to be:

  • How far we are able to “know” what works and what doesn’t. We don’t have the resources and skills to measure everything scientifically – and some of the knowledge we need is highly contextual and based on experience as well as on science (See my previous blog “The truth is out there” about the limits of what we can know).
  • But even when we have a large body of relevant, available knowledge it is not always used in decision-making. It’s important to understand the reasons for this and try to tackle them, along with work to increase the supply of knowledge (see my previous blog “Creating a demand for knowledge”).
  • In our desire to understand something, and to “break it down” so we can tackle it in manageable pieces or sell it to donors, or the public, we often forget that many of the things we are dealing with are “complex adaptive systems” where the whole works differently from the sum of the parts and where a good practice in one context might not work in another. Of course, this doesn’t mean we shouldn’t use evidence – but we need to understand it in context, and  apply it flexibly rather than expecting to find universal answers. (See my previous blog “Who’s afraid of complexity in aid”)

But while evidence-based aid isn’t a new idea, and even though we are still not there yet, there is good reason to be optimistic that aid is becoming, and will continue to become, more evidence-informed. Here are a few reasons why:

1. The results agenda – donors and beneficiaries alike are putting increasing pressure on aid agencies to manage for and report on results – in particular to be sure that ever scarcer aid money is being well invested (see this blog by Owen Barder on some of the benefits and challenges of the results agenda for improving aid).

2. Aid transparency – as more and more aid agencies sign up for IATI it becomes easier to see who is doing what and where, which is an aid to accountability and to improved coordination – but also to research, as there is a whole lot of new data to crunch to understand more about how aid works (or doesn’t), especially when linked to the results agenda.

3. Open data and research – more and more development data is being made freely available for public use which provides a whole range of raw material for researchers. Increasingly (although still slowly) publicly funded research (and even data sets and analyses) is also being opened up for public access – which means there is a lot more chance that it will be used.

4. Real-time data analysis – often one of the big challenges in using evidence is that by the time you know enough about a problem it’s already too late (think global food/economic crisis). New “big data” techniques promise to help us understand what is happening more quickly – at least enough to act, if not enough to scientifically “know”. (See this previous blog on the possibilities of “real time”.)

5. Beneficiary feedback – this is one area where there is great (as yet mostly untapped) promise, and a number of interesting initiatives. Too often external solutions are imposed on beneficiaries, using science as a basis, but without enough attention to getting real time feedback from the people who the programme is designed to help on whether they want the programme, if it is likely to work, and whether they are satisfied with it, or whether they have their own ideas about how to improve it. More listening can make projects more likely to work, and more participation can also help them be more sustainable in the long term giving beneficiaries a say and a stake in the project’s success (see my previous blog “listening to the people we work for” for more).

6. Lastly, there are a lot of smart, committed individuals talking about and working on how to improve aid. Sure, there always have been, but it seems (to me at least) that the volume and depth of this discussion has increased over the past few years, including from the detractors who, in their own way, through their own critiques, are advancing the discussion and thinking about how to do aid work well. And with more and more aid agency heads such as Tony Lake speaking up in favour of smart aid, we can hope for more discussion about what smart aid really means – and for aid workers to feel more empowered to advocate for it inside their own organizations.

The quest for smarter aid is not new, and it will not be achieved overnight. Evidence-based development work is more an ongoing journey than a destination. But the lights of Oz are looking a little bit brighter.

(Image: Dave Cutler)

Written by Ian Thorpe

May 23, 2012 at 5:18 pm

Two sides to open development

with 6 comments



Ever since Robert Zoellick’s speech on “democratizing development” there has been a lot of buzz around the idea of open development and lots of discussion about what it really means. Some of the recent discussion is nicely summed up in this post on the Bank’s public sphere blog “Openness for Whom? and Openness for What“. I’ve also tackled the related issue of “Development 2.0”, or the new way we can/should be doing development work taking advantage of the changing technologies and business models now open to us.

Trying to unpack this a little bit I realize that there are two related but different aspects to the open development discussion that often get a little mixed up:

1. Transparency and reducing friction – i.e. making use of technology to make it easier to share information and knowledge in a standard way so it can be easily assessed, compared, mashed up, acted upon.

2. Participation – using technology to give people a voice and to change existing power structures, and decision-making processes.

In the mid-1990s I was doing some work on knowledge sharing on public finances, and I can draw a parallel between the types of discussions we were having then on sharing public finance information and the discussions on open development today. In more traditional public finance work the aim of the major players (led by the IMF) was to provide high-quality technical advice and capacity building to ministries of finance, usually behind closed doors. Along came two new, related but different approaches, coming mostly from the human rights and democracy movements. These were:

1. Budget transparency – making government budgets public, widely disseminating them, and presenting them in a  form that could also be understood by non-specialists (which includes parliamentarians, the media and civil society as well as the public at large). The aim here is that making the information available in a public, comprehensible and unbiased format would put pressure on government to justify and implement budgets more effectively, and also make  it easier for them to be held to account through existing means (such as through parliamentary oversight or public opinion).

2. Participatory budgeting – Some municipalities/regions chose to take this a step further – they also created standing, open, consultative mechanisms that allowed citizens to directly influence at least part of the public budget, and be involved in its oversight.

The obvious corollary is that while transparency is necessary for greater participation, and can also help increase interest and engagement, it is not sufficient. Deciding to take the step to participatory budgeting requires an additional political commitment to devolve power from the authorities (whether elected or civil service) back to citizens. And so open budgeting is much more widespread than participatory budgeting. It’s also worth noting that in participatory budgeting the devolution to the citizenry is partial, not absolute – covering only part of the budget, or giving citizens a role in decision-making but not making them the final arbiter.

So back to open development….

Much of what is currently being talked about is in the first category – making more, and more consistent, information widely available so anyone can use it however they see fit within their means and influence. Making aid data open puts pressure on aid agencies to be accountable to all, but mostly to their current and potential donors. Similarly, access to open data and open research allows aid workers and government policy makers to make better informed decisions (if they choose to do so), and open public procurement means that companies have a level playing field for competing for aid contracts, which also hopefully helps reduce costs for donors and beneficiaries alike.

But this openness only takes you so far in making development more democratic and empowering. It’s now becoming possible to collect and incorporate beneficiary feedback and local voices and knowledge into development work, but only IF you want to. While many of the big players are opening up their data, most are not opening up their decision-making and resource allocation to more input from beneficiaries and partners – here formal Boards and big donors still call the shots. Those that are experimenting with limited feedback mechanisms are often doing so from the perspective of using the feedback to improve the likelihood that their programmes will work by avoiding unseen pitfalls, or to help get better public support and “buy-in” for their projects, rather than to explicitly empower beneficiaries and change the power relations between “beneficiaries” and “benefactors”.

While the possibilities of technology can make beneficiaries more vocal, in the end those who currently hold the power will need to agree to give some of it up if technology is to really make aid more empowering. As with participatory budgeting, I don’t imagine that this will be a complete reversal of power relations – donors will still want a say over where their money goes – but rather a rebalancing, devolving part of the donor’s or aid worker’s current role back to the people they wish to empower and assist to become sustainably self-sufficient.

Written by Ian Thorpe

May 9, 2012 at 3:27 pm

Posted in rants, smartaid

Planners, evaluators and entrepreneurs

with 8 comments

Last week I attended part of NYU’s Development Research Institute’s annual conference, entitled “Debates in Development: The Search for Answers”.

The morning session was great, with a particularly lively and interesting discussion on different approaches to development, the highlight being a debate on the Millennium Villages Project, which was much more interesting and surprising than it sounds (let’s face it, we’re probably all a bit jaded with discussions about the MVPs, especially if you work in the UN, since many people erroneously believe that MVPs form a major part of the UN’s approach to poverty, when in reality they form only a small experimental part of our overall work with relatively little UN funding or support).

Bill Easterly set the stage by framing the debates as a comparison between smart, expensive decision-making systems – under which he was also lining up the afternoon’s discussion on randomized control trials (RCTs) – and cheap, dumb solution-finding systems, by which he means experimentation and success or failure based on market feedback. This is a new framing of his idea of searchers versus planners in development, but one that looks not only at how development projects are planned but also at how they are evaluated.

In the MVP debate that followed, instead of having Sachs come into the lion’s den, Stewart Paperin of the Open Society Institute, a major funder of the MVPs, gave the approach a spirited defense against critics Michael Clemens and Bernadette Wanjala, who have both been publicly critical of the MVPs, citing their lack of transparency and rigorous evaluation and their overstatement of results.

What was interesting about the debate was that Paperin was skillfully able to defend the MVPs on the grounds that they were, from his perspective at least, an investment in a practical, even entrepreneurial experiment that wasn’t certain to work, but was a good chance to try something different in order to learn more for the future.

In the end the most interesting aspect about the conference for me was the debate around the nature of actionable knowledge in development, and what can we trust as a basis to make decisions on development funding and action. This is both a scientific and a practical question.

The “debate” has been set up in at least three different ways:

In his book “The White Man’s Burden” Easterly talks about planners versus searchers, i.e. those who think that a top-down set of proven approaches can work in development (a la Sachs) versus those who believe that all solutions are local and that people need to experiment and find their own solutions within their own context.

In his talk Clemens spoke instead about the goals movement, i.e. those who believe they already have sufficient evidence for their solutions and have a vision and passion to take them to scale, versus the evaluation movement, i.e. those who believe that we need to rigorously measure what we do to know if it works and how to improve it.

But in his introduction Easterly also spoke of a third debate: between rigorous, expensive scientific measurement versus low-cost experimentation and market feedback. His case being that evaluation is costly, and often the results are not decisive or generalizable, so it might be more effective to use feedback from beneficiaries as a way of assessing what works.

Three big take-aways from the discussion were:

1. We might know some things about what works in development, but there is a lot we don’t know.  Even when we do know something, it’s not a guarantee that it will work without a hitch in another context.

2. Evaluation (and tools such as RCTs) can tell us a lot about what works but they are expensive to run, and their results are not always easily generalizable or actionable.

3. But if you don’t measure your project in some way, then how will you know if it works? And how will you improve it?

What strikes me here is that in a way all of these different perspectives have value, but their proponents have a difficult time understanding each other and figuring out how to best combine their approaches.

Wouldn’t it be good if the goals people – those who have a clear vision and a passion to pursue it – would create momentum and raise resources around their projects, whether these are large-scale plans developed from extensive research and experience or smaller-scale hunches and experiments – but as entrepreneurs rather than as top-down planners? And at the same time, if these projects would collect data from the outset to better enable them to track progress, and, where feasible, try multiple approaches or variations on an approach to be able to compare them and learn from the differences.

Similarly if the results could be made public, then funders, beneficiaries and even academics could see them and independently assess them, and project managers could use them to modify their programmes and identify whether they should be scaled up or shut down.

Lastly, but perhaps most importantly – the missing element in evaluation of development projects is effective and ongoing beneficiary feedback. Entrepreneurs, unlike aid planners, try lots of different things, some of which succeed massively while others fail dismally – the difference being that their success is measured by the feedback they get from consumers who buy their product. In the aid world we don’t yet have effective ways to get this feedback, so we rely instead on evaluation – to rigorously, but only selectively, assess the impact of our work – and communication – to sell our story of success, but of continued need, to funders who are far removed from the experience of those who the programmes are designed to assist. And evaluation and communication are often at odds.

The next big focus on measurement will hopefully be in the area of getting real-time feedback from beneficiaries, which can be fed back into projects to improve them, and fed back to donors and the public transparently so they can better judge what and who to fund – and all this at relatively low cost and with greater clarity than expensive evaluation.

Formal evaluation (and experimental project designs such as RCTs) can focus on those areas where getting the programme design specifics right is important in terms of cost and impact, but where the results are also likely to yield insights which can be generalized beyond a specific programme.

(For a more complete account of the conference check out Tom’s blog here and his curated twitter stream here)

Written by Ian Thorpe

March 29, 2012 at 9:15 am

