KM on a dollar a day

Musing on knowledge management, aid and development with limited resources

Archive for the ‘smartaid’ Category

Making it up as we go

with one comment

In past posts I’ve talked about the problems of relying too much on rules, guidelines and so-called “best practices” in development work. And while big organizations like the UN still rely too much on these, I’m also starting to see some taking quite an opposite view.

With the trend towards innovation work and the discussions around complexity in development, an increasingly popular approach runs as follows: get together a group of people, preferably including some of your beneficiaries; quickly brainstorm some ideas, preferably involving new technologies; then try them out, iterating as you go and adapting your approach to what you find. Quite often these projects are attractively presented, communicated and shared at conferences and via social media, generating a lot of buzz – but it is much less frequent that they are also studied and evaluated over time, with the potential learning from them incorporated into “mainstream” development thinking.

Innovative and iterative approaches are exciting, and they are built upon the recognition, often missing in more traditional aid programmes, that each situation is unique and that we rarely know exactly what will work. But it’s important that they be seen as complementary or additional approaches to development work, rather than a new, better approach that will replace the old.

Here are a few counter-arguments as to why we shouldn’t just throw away the rule book, programme procedures, toolkits and case studies in favour of “making it up as we go along”:

1. Not all development problems, or at least not all aspects of them, are fully “complex” in nature, and even some of those that are have been extensively programmed, researched and evaluated. This means that in many areas we already have a fairly good idea of how best to tackle them, and have probably also gained some costly experience about how not to handle them. In these cases it might be less risky and more productive to do what we know works rather than trying something new.

2. Some areas of work actually rely on standardized, predictable procedures to make them work, even if these aren’t always the most efficient or user-friendly. An obvious example is financial management systems, which you wouldn’t want to experiment with or reinvent every time, but this might equally apply to other technical processes such as vaccine handling or shelter construction, where what is being done is highly technical and expert, and good practice is built into the standard procedures to ensure consistent quality.

3. Sometimes consistent (non-innovative) approaches are needed for reasons of transparency and fairness, i.e. to ensure that everyone knows how the system works and that they will be treated according to consistent rules (even if these are recognizably imperfect). This may even be a legal requirement that needs to be taken into account, e.g. the need to be demonstrably and accountably equitable in how a programme’s resources are distributed to the poor.

4. While every individual situation is unique in some regard, and it is often not clear which approach is best to take, few situations are completely unlike anything that has gone before; there are usually ideas and experiments that can be adapted from other countries or from other sectors. It’s therefore good to use documented, and possibly well-studied and evaluated, approaches from elsewhere as a starting point for a new innovation rather than designing from scratch or only from local knowledge and context.

5. It’s hard to deliver innovative and adaptive programmes at scale, since they are a moving target and since there may be an insufficient understanding of how and why they work and which aspects of them are replicable or scalable. Also, not everyone is a natural innovator, even if given training and tools, and innovation often needs a lot more hands-on support, so there might be limits to how far we can expect public servants and aid workers to be productive working in this fashion.

Of course, if we want to find breakthroughs in development, it can be hard to do so by starting out only from established approaches. Sometimes counterintuitive and radical ideas are needed to break through the existing paradigms of thinking that might limit us to incrementally improving solutions that are only partially working at best. Duncan Green has an excellent blog post about this notion, drawing on Robert Chambers’s work and giving “Community Led Total Sanitation” as one example of a highly successful new approach that broke the mold of existing thinking (shameless link to something I worked on in UNICEF about this).

So how do we balance the two approaches?

I’d argue that we need two parallel but interacting approaches to our development work:

One is the more mainstream work of programmes designed largely around existing “scientific” knowledge and experience, often codified into rules, procedures, tools and case study examples. This is mainstream development programming, which also attempts to scale up known successful approaches. These programmes need to be continually evaluated and studied, and the knowledge from experience and formal study needs to be fed back into the system to incrementally improve it. And when the tools and procedures are applied, they need to be applied thoughtfully, taking into account local context and specific challenges and opportunities as they arise, as well as the feedback received – but within an overall agreed approach.

The other is an experimental, iterative and interactive approach that deliberately tries out new ideas, including ones that seem counterintuitive or unlikely. One that tests commonly held assumptions. One that is willing to discard unsuccessful approaches or adapt them, fine-tuning or even totally redirecting efforts in the course of the programme in order to find out what works based on real-time feedback. This approach should also seek to try out multiple parallel experiments on the same project at the same time – even if they seem mutually contradictory (see the sketch below for what this implies for measurement). These “experiments” could be in new areas where little knowledge exists, but they should also be used to try out radical new ideas in areas that are well trodden.
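
To make the measurement side of this concrete, here is a minimal sketch of comparing two parallel pilot variants on a single success measure. The outreach scenario and all numbers are invented for illustration, and a real evaluation would need a proper design; this just shows how running variants side by side lets you learn from the differences.

```python
import math

# Minimal sketch: compare two parallel pilot variants on one success measure.
# The scenario and all numbers below are hypothetical.

def compare_variants(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: is variant B's success rate different from A's?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal approximation
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# e.g. households adopting a new practice under two outreach approaches
p_a, p_b, z, p = compare_variants(success_a=55, n_a=200, success_b=78, n_b=200)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p ~ {p:.3f}")
```

Even informal tracking of this kind turns “making it up as we go” into something that can be compared and learned from systematically.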

But these experiments need to be carefully documented and studied (and also networked) in order for the insights from them to be internally digested, shared and reflected upon by others. The aim here is to identify approaches, or elements of them, that might lend themselves to being scaled up – or aspects that might be adapted or turned into tools, procedures or approaches that can be used by others. Adaptations of the same basic innovative approach might also need to be tried in different contexts to better understand what makes them successful (or not). The innovations can then be turned into something that informs and improves mainstream thinking.

So in conclusion, I’d argue for a portfolio approach to development, where perhaps a major part of the work is relatively mainstream – consistent and only evolving slowly over time as approaches are refined – but with a smaller but significant (and certainly larger than at present) segment that deliberately tries to break new ground through an innovative and adaptive approach. Crucially, there need to be good systems connecting the two, so that breakthroughs developed by the innovation stream can be tested and, if suitable, incorporated into mainstream thinking, even if they upend current thinking and approaches.

Written by Ian Thorpe

September 12, 2012 at 4:52 pm

Posted in rants, smartaid

How am I doing?

with 4 comments


Summary: If we are trying to measure the results of knowledge management work, or any type of development work for that matter, we could do worse than ask our clients what they think of what we are doing.

Some years ago, when I was interviewed for my first “real” KM job, one of the questions I was asked was “how will you measure the results of what you are doing?”. At this stage we didn’t even know what we would be doing, so I gave an instinctive answer – but one I’d at least partly stand behind now. I told the interviewer that the best way to know whether the knowledge products and services we were doing were any good would be to ask our clients what they think – on a regular basis.

We often struggle to find ways to measure the results of our work. We are looking to measure impact, but this typically requires complex, potentially costly evaluation and the identification of a clear theory of change. If we aren’t able to do this we often fall back on measures of output such as budget spent, work plan tasks implemented, supplies delivered, workshops carried out, website downloads and the like, which tell us about our efficiency in getting things done, but not about the effectiveness of what we are doing.

But if we can’t easily measure impact, how about going halfway? While beneficiary/partner feedback isn’t the same thing as “impact”, it can be a very valuable proxy for how well what you are doing is working and where you need to improve or put additional focus. You can ask about people’s perceptions or ratings of what you do, as well as asking for their direct feedback on what they need, and what they want you to do differently.

The biggest criticism of asking for feedback is that what you get back is perceptions of you and what you are doing, rather than what you are actually doing, and that the people you are asking might not understand your work well enough to comment on it, or might not value the “right” things.

While to some extent this can be true, knowing what people think about your organization, your image, what you do and what you should be doing can still be very illuminating. If people don’t know who you are, misunderstand what you do, or think you are doing a lousy job when you think you are not, then you might have a communication problem. And what good is doing great work if no one knows about it? Not just for your own ego, but also so you can build goodwill for your work among your “client” populations to make your work easier, or so you can show donors that what you do is responsive to the needs of those it is supposed to help.

But lack of recognition or negative feedback isn’t just about how well you communicate. It might well be that you, and what you are doing is not seen as relevant or high quality by the people you are supposed to serve. If they don’t know about you and your work, it might well be because you are not reaching them or having any meaningful impact on their lives (whatever your monitoring statistics tell you). If they don’t like what you are doing, it might be that what you are doing doesn’t meet their needs, or that the way you are doing it isn’t respectful of them.

Asking for feedback reminds us that ultimately we are there to serve our beneficiaries (or “clients”) and that to a large extent it’s they who determine whether or not we are doing a good job. Asking for feedback also has the added benefit that it can help build trust by showing that we value the opinions of those we are helping rather than simply deciding what is best for them, and it can also help elucidate important information about their aspirations, priorities and the realities they face, which we can easily overlook in how we design and execute programmes.

There are a variety of means of collecting feedback, including formal surveys, phone polls, in-person interviews, focus group discussions, suggestion boxes and so on. The right tool will depend on your audience/clients, what you want to know and the resources you have to do the work. Survey questionnaires and suggestion boxes can be a relatively simple and inexpensive way of collecting data – but if they highlight an issue you might need to use face-to-face interviews or focus groups to really probe and understand it in depth.

You can also develop standardized tools for collecting feedback, which can be used to track performance over time and to compare different services or programmes with each other (or similar programmes across different locations) – something like the sketch below.
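
As an illustration, here is a minimal sketch of the kind of tabulation such a standardized tool makes possible – the same 1–5 rating question asked across programmes and reporting periods. The programme names, periods and ratings are all hypothetical.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical responses to a standardized 1-5 satisfaction question,
# tagged by programme and reporting period so they can be compared.
responses = [
    {"programme": "water", "period": "2012-Q1", "rating": 4},
    {"programme": "water", "period": "2012-Q2", "rating": 3},
    {"programme": "water", "period": "2012-Q2", "rating": 2},
    {"programme": "health", "period": "2012-Q1", "rating": 5},
    {"programme": "health", "period": "2012-Q2", "rating": 4},
]

# Group ratings by (programme, period) and report the mean and sample size.
scores = defaultdict(list)
for r in responses:
    scores[(r["programme"], r["period"])].append(r["rating"])

for (programme, period), ratings in sorted(scores.items()):
    print(f"{programme:>7} {period}: mean {mean(ratings):.1f} (n={len(ratings)})")
```

The point is less the code than the discipline: ask the same question the same way everywhere, and comparisons over time and across locations come almost for free.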

But one word of caution: if you ask for feedback, you also create expectations – in particular that you will share the feedback you received, or at least a summary of it, even if it isn’t positive, and that you will take action to respond to any negative feedback. If you don’t do this, then next time you ask you won’t get any feedback, or worse, you will have damaged your reputation and increased the cynicism of those you surveyed about your sincerity in listening to them and “really” helping them.

Aid agencies are not particularly good at systematically seeking feedback from their beneficiaries, or from partners who might be intermediaries in their work, but there are a few encouraging signs. For example, as part of its ongoing reform process the UN recently surveyed Programme Governments and partner NGOs about their views of the UN Development system and some of its coordination mechanisms and initiatives, and published the results (see here and here) – I hope we will now also see the next round of reform building on some of this feedback.

Digital technologies also make it easier and cheaper than ever before to collect and analyze this data, through tools such as help lines, SMS polling and the like. These can potentially reach large populations that would have been costly and logistically difficult to reach using traditional survey methods, and the results can be tabulated more quickly.

So let’s not forget who we work for and regularly ask them what they want, and how we are doing both as an input to our planning and as a measure of our performance.

Written by Ian Thorpe

July 12, 2012 at 11:45 am

The long and winding road to evidence based development

with 2 comments


There has been quite a bit of discussion online about the Financial Times article “How Aid got Smarter” featuring an interview with UNICEF’s Executive Director Tony Lake.

The article makes some important points about the need to improve the use of evidence in making aid decisions – in particular in discovering what works and what doesn’t, admitting it and acting on it.

What is perhaps a pity about the article is that it can be read to imply that until recently aid decisions were largely faith-based, but that now, suddenly, at last, the role of science and evidence is being taken seriously in development. As usual the reality is a bit more complex than that.

Discussions around the need to make development work more evidence-based have been around as long as I’ve been working in development (and probably a lot longer than that). And the progression towards improved use of knowledge and evidence in development often seems like a case of two steps forward, one step back.

Over my past 20 or so years working in aid, some notable improvements in the attention to evidence include: increased investment in and focus on evaluation, resulting in more professionalized evaluation departments with greater resources and thus more and better evaluations; greater investment in supporting statistical data collection, including in agreeing on harmonized standards and common sets of indicators to track over time; and greater attention, in various forms, to supporting better internal communication and knowledge management to help staff have better access to, and make better use of, available development knowledge. There are probably many others.

But many challenges remain. A few of the most thorny (and recurring) challenges in using knowledge in development work seem to be:

  • How far we are able to “know” what works and what doesn’t. We don’t have the resources and skills to measure everything scientifically – and some of the knowledge we need is highly contextual and based on experience as well as on science (see my previous blog “The truth is out there” about the limits of what we can know).
  • Even when we have a large body of relevant, available knowledge, it is not always used in decision-making. It’s important to understand the reasons for this and try to tackle them, alongside work to increase the supply of knowledge (see my previous blog “Creating a demand for knowledge”).
  • In our desire to understand something, and to “break it down” so we can tackle it in manageable pieces or sell it to donors or the public, we often forget that many of the things we are dealing with are “complex adaptive systems” where the whole works differently from the sum of the parts, and where a good practice in one context might not work in another. Of course, this doesn’t mean we shouldn’t use evidence – but we need to understand it in context, and apply it flexibly rather than expecting to find universal answers. (See my previous blog “Who’s afraid of complexity in aid”.)

But while evidence-based aid isn’t a new idea, and even though we are not there yet, there is good reason to be optimistic that aid is becoming, and will continue to become, more evidence-informed. Here are a few reasons why:

1. The results agenda – donors and beneficiaries alike are putting increasing pressure on aid agencies to manage for and report on results – in particular to be sure that ever scarcer aid money is being well invested (see this blog by Owen Barder on some of the benefits and challenges of the results agenda for improving aid).

2. Aid transparency – as more and more aid agencies sign up for IATI, it becomes easier to see who is doing what and where, which is an aid to accountability and to improved coordination – but also to research, as there is a whole lot of new data to crunch to understand more about how aid works (or doesn’t), especially when linked to the results agenda.

3. Open data and research – more and more development data is being made freely available for public use which provides a whole range of raw material for researchers. Increasingly (although still slowly) publicly funded research (and even data sets and analyses) is also being opened up for public access – which means there is a lot more chance that it will be used.

4. Real-time data analysis – often one of the big challenges in using evidence is that by the time you know enough about a problem it’s already too late (think global food/economic crisis). New “big data” techniques can help us understand more quickly what is happening – at least enough to act, if not enough to scientifically “know”. (See this previous blog on the possibilities of “real time”.)

5. Beneficiary feedback – this is one area where there is great (as yet mostly untapped) promise, and a number of interesting initiatives. Too often external solutions are imposed on beneficiaries, using science as a basis, but without enough attention to getting real-time feedback from the people the programme is designed to help – on whether they want the programme, whether it is likely to work, whether they are satisfied with it, and whether they have their own ideas about how to improve it. More listening can make projects more likely to work, and more participation can also help make them more sustainable in the long term by giving beneficiaries a say and a stake in the project’s success (see my previous blog “listening to the people we work for” for more).

6. Lastly, there are a lot of smart, committed individuals talking about and working on how to improve aid. Sure, there always have been, but it seems (to me at least) that the volume and depth of this discussion has increased over the past few years, including from the detractors who, in their own way, through their own critiques, are advancing the discussion and thinking about how to do aid work well. And with more and more aid agency heads such as Tony Lake speaking up in favour of smart aid, we can hope for more discussion about what smart aid really means – and for aid workers to feel more empowered to advocate for it inside their own organizations.

The quest for smarter aid is not new, and it will not be achieved overnight. Evidence-based development work is more an ongoing journey than a destination. But the lights of Oz are looking a little bit brighter.

(Image: Dave Cutler)

Written by Ian Thorpe

May 23, 2012 at 5:18 pm

Two sides to open development

with 6 comments


Ever since Robert Zoellick’s speech on “democratizing development” there has been a lot of buzz around the idea of open development and lots of discussion about what it really means. Some of the recent discussion is nicely summed up in this post on the Bank’s public sphere blog “Openness for Whom? and Openness for What“. I’ve also tackled the related issue of “Development 2.0”, i.e. the new way we can/should be doing development work, taking advantage of the changing technologies and business models now open to us.

Trying to unpack this a little bit I realize that there are two related but different aspects to the open development discussion that often get a little mixed up:

1. Transparency and reducing friction – i.e. making use of technology to make it easier to share information and knowledge in a standard way so it can be easily assessed, compared, mashed up, acted upon.

2. Participation – using technology to give people a voice and to change existing power structures, and decision-making processes.

In the mid-1990s I was doing some work on knowledge sharing on public finances, and I can draw a parallel between the discussions we were having then on sharing public finance information and the discussions on open development today. In more traditional public finance work the aim of the major players (led by the IMF) was to provide high quality technical advice and capacity building to ministries of finance, usually behind closed doors. Then along came two new, related but different approaches, coming mostly from the human rights and democracy movements. They were:

1. Budget transparency – making government budgets public, widely disseminating them, and presenting them in a form that could also be understood by non-specialists (including parliamentarians, the media and civil society as well as the public at large). The aim here was that making the information available in a public, comprehensible and unbiased format would put pressure on government to justify and implement budgets more effectively, and also make it easier for governments to be held to account through existing means (such as parliamentary oversight or public opinion).

2. Participatory budgeting – Some municipalities/regions chose to take this a step further – they also created standing, open, consultative mechanisms that allowed citizens to directly influence at least part of the public budget, and be involved in its oversight.

The obvious corollary is that while transparency is necessary for greater participation, and can also help increase interest and engagement, it is not sufficient. Deciding to take the step to participatory budgeting requires an additional political commitment to devolve power from the authorities (whether elected or civil service) back to citizens. And so open budgeting is much more widespread than participatory budgeting. It’s also worth noting that in participatory budgeting the devolution to the citizenry is partial, not absolute – covering part of the budget, or giving citizens a role in decision-making but not as the final arbiter.

So back to open development….

Much of what is currently being talked about is in the first category – making more, and more consistent, information widely available so anyone can use it however they see fit within their means and influence. Making aid data open puts pressure on aid agencies to be accountable to all, but mostly to their current and potential donors. Similarly, access to open data and open research allows aid workers and government policy makers to make better informed decisions (if they choose to do so), and open public procurement means that companies have a level playing field for competing for aid contracts, which also hopefully helps reduce costs for donors and beneficiaries alike.

But this openness only takes you so far in making development more democratic and empowering. It’s now becoming possible to collect and incorporate beneficiary feedback and local voices and knowledge into development work, but only IF you want to. While many of the big players are opening up their data, most are not opening up their decision-making and resource allocation to more input from beneficiaries and partners – here formal Boards and big donors still call the shots. Those that are experimenting with limited feedback mechanisms are often doing so from the perspective of using the feedback to improve the likelihood that their programmes will work by avoiding unseen pitfalls, or to help get better public support and “buy-in” for their projects, rather than to explicitly empower beneficiaries and change the power relations between “beneficiaries” and “benefactors”.

While the possibilities of technology can make beneficiaries more vocal, in the end those who currently have the power will need to agree to give some of it up if technology is to really make aid more empowering. As with participatory budgeting, I don’t imagine that this will be a complete reversal of power relations – donors will still want a say over where their money goes – but rather a rebalancing, devolving part of the donor’s or aid worker’s current role back to the people they wish to empower and assist to become sustainably self-sufficient.

Written by Ian Thorpe

May 9, 2012 at 3:27 pm

Posted in rants, smartaid

Planners, evaluators and entrepreneurs

with 8 comments

Last week I attended part of NYU’s Development Research Institute’s annual conference, entitled “Debates in Development: The Search for Answers”.

The morning session was great, with a particularly lively and interesting discussion on different approaches to development, the highlight being a debate on the Millennium Villages Project, which was much more interesting and surprising than it sounds (let’s face it, we’re probably all a bit jaded with discussions about the MVPs, especially if you work in the UN, since many people erroneously believe that the MVPs form a major part of the UN’s approach to poverty, when in reality they form only a small experimental part of our overall work, with relatively little UN funding or support).

Bill Easterly set the stage by introducing the debates as a comparison between smart, expensive decision-making systems – under which he was also lining up the afternoon’s discussion on randomized control trials (RCTs) – and cheap, dumb solution-finding systems, by which he means experimentation and success or failure based on market feedback. This is a new framing of his idea of searchers versus planners in development – one that looks not only at how development projects are planned, but also at how they are evaluated.

In the MVP debate that followed, instead of having Sachs come into the lion’s den, Stewart Paperin of the Open Society Institute, a major funder of the MVPs, gave the approach a spirited defense against Michael Clemens and Bernadette Wanjala, who have both been publicly critical of the MVPs for their lack of transparency and rigorous evaluation, and for overstating their results.

What was interesting about the debate was that Paperin was skillfully able to defend the MVPs on the grounds that they were, from his perspective at least, an investment in a practical, even entrepreneurial experiment that wasn’t certain to work, but was a good chance to try something different in order to learn more for the future.

In the end, the most interesting aspect of the conference for me was the debate around the nature of actionable knowledge in development, and what we can trust as a basis for decisions on development funding and action. This is both a scientific and a practical question.

The “debate” has been set up in at least three different ways:

In his book “The White Man’s Burden” Easterly talks about planners versus searchers, i.e. those who think that a top-down set of proven approaches can work in development (a la Sachs) versus those who believe that all solutions are local and that people need to experiment and find their own solutions within their own context.

In his talk, Clemens spoke instead about the goals movement, i.e. those who believe they already have sufficient evidence for their solutions and have a vision and passion to take them to scale, versus the evaluation movement, i.e. those who believe that we need to rigorously measure what we do to know if it works and how to improve it.

But in his introduction Easterly also spoke of a third debate: rigorous, expensive scientific measurement versus low-cost experimentation and market feedback. His case was that evaluation is costly, and often the results are not decisive or generalizable, so it might be more effective to use feedback from beneficiaries as a way of assessing what works.

Three big take-aways from the discussion were:

1. We might know some things about what works in development, but there is a lot we don’t know.  Even when we do know something, it’s not a guarantee that it will work without a hitch in another context.

2. Evaluation (and tools such as RCTs) can tell us a lot about what works but they are expensive to run, and their results are not always easily generalizable or actionable.

3. But if you don’t measure your project in some way, then how will you know if it works? And how will you improve it?

What strikes me here is that in a way all of these different perspectives have value, but their proponents have a difficult time understanding each other and figuring out how to best combine their approaches.

Wouldn’t it be good if the goals people – those who have a clear vision and a passion to pursue it – would create momentum and raise resources around their projects, whether these are large-scale plans developed from extensive research and experience or smaller-scale hunches and experiments, but as entrepreneurs rather than as top-down planners? And at the same time, if these projects would collect data from the outset to better enable them to track progress, and, where feasible, try multiple approaches or variations on an approach so as to compare them and learn from the differences?

Similarly if the results could be made public, then funders, beneficiaries and even academics could see them and independently assess them, and project managers could use them to modify their programmes and identify whether they should be scaled up or shut down.

Lastly, but perhaps most importantly – the missing element in evaluation of development projects is effective and ongoing beneficiary feedback. Entrepreneurs, unlike aid planners, try lots of different things, some of which succeed massively while others fail dismally – the difference being that their success is measured by the feedback they get from consumers who buy their product. In the aid world we don’t yet have effective ways to get this feedback, so we rely instead on evaluation – to rigorously, but only selectively, assess the impact of our work – and communication – to sell our story of success, but of continued need, to funders who are far removed from the experience of those the programmes are designed to assist. And evaluation and communication are often at odds.

The next big focus in measurement will hopefully be on getting real-time feedback from beneficiaries, which can be fed back into projects to improve them, and fed back to donors and the public transparently so they can better judge what and who to fund – all at relatively low cost and with greater clarity than expensive evaluation.

Formal evaluation (and experimental project designs such as RCTs) can then focus on those areas where getting the specifics of programme design right matters most in terms of cost and impact, but where the results are also likely to yield insights that can be generalized beyond a specific programme.

(For a more complete account of the conference check out Tom’s blog here and his curated twitter stream here)

Written by Ian Thorpe

March 29, 2012 at 9:15 am

KONY2012 – a story in one flavour

with 9 comments

So, the Internets are abuzz with KONY2012, Invisible Children’s latest film offering. This comes broadly in two flavours:

1. The bulk of the masses, the mainstream media, plus a number of fawning celebrities all talking about how great this is.

2. A much smaller, but increasingly loud chorus of aid bloggers,  researchers, journalists and Ugandans themselves criticizing the film as oversimplistic, inaccurate, misleading and potentially harmful.

I’m not an expert on what is happening in northern Uganda, and lots and lots has been written on this already (see Brendan Rigby’s excellent ongoing compilation of articles and blog posts on the topic), but the beauty and curse of the internet is that everyone can have their say, whether they know anything or not!

So here are a few thoughts from me from a knowledge manager’s perspective:

Firstly, I couldn’t have found a better illustration of my last two blog posts on storytelling. KONY2012 nicely illustrates on the one hand how the most effective way to engage people is through a story – not through research reports, statistics and official documents – but on the other hand how a story can vastly oversimplify or even misrepresent a complex problem and leave you with little idea about what is really happening, especially if you don’t or can’t verify it, or if you rely too much on a single story.

Secondly this whole buzz does create two potentially important opportunities:

i) Kony is in the news! – maybe all this public attention to Kony and northern Uganda can actually provoke some useful discussion, and maybe all this “awareness” can be translated into increased political pressure, and even political will – not necessarily to do exactly what the campaign is requesting, but rather prompting people to learn more about what is a complex situation, be a little better informed, and think more about what they can (and can’t) do to help the developing world. Maybe.

ii) Smartaid and badvocacy are once again a hot topic. The potential backlash against KONY2012 opens up a useful debate about the role of advocacy and of activists, and about how to communicate and fundraise for development. There’s an important “awareness raising” opportunity here too for advocates of a more nuanced understanding of development, and of a more dignified and authentic presentation of people and problems in developing countries, to bring this discussion to a broader audience.

A last point is that this situation highlights a fairly fundamental problem in knowledge sharing around development – the rather large gulf of understanding and perspective between researchers, aid practitioners, advocates and activists, governments and the donating public, and the most important and least listened-to group of all – those affected by the problem (and, one hopes, the intended beneficiaries of any action). It highlights the immense challenge of bringing the knowledge of “experts” – whether researchers, aid workers or affected populations (who are in a way the real experts) – in a compelling and actionable way to those who could use it for evidence-informed action that might make a real difference to people’s lives. At least, it seems very difficult to do this without compromising the integrity of the knowledge itself – or perhaps the temptation to do so in order to get your message across is sometimes too great.

In a thoughtful blog on KONY2012 and the difficulties of bridging this gap James W. McCarty wrote: In this situation I think what we need is not academics who “simplify better” but activists who “complexify better.”

Undoubtedly we need both, but I think what we also need are more knowledge brokers: intermediaries who can help bridge the gap between those who know and those who can put that knowledge to use – not only connecting people with relevant knowledge, but also putting it into a form that can be easily used and that is interesting and compelling enough for them to take notice, and persuading and helping them to use it effectively – all while obeying the golden rule of advocacy: “simplify but don’t distort“.

Written by Ian Thorpe

March 8, 2012 at 1:23 pm

Two great initiatives you should know about

with 4 comments

Peer coaching for development workers

Whydev and Development Crossroads are launching a matchmaking service for peer coaching, aimed at young professionals, graduate students, and others starting out in international development who could benefit from having access to peers who can help talk them through a problem and act as a sounding board.

They are just in the planning phase right now and are seeking feedback on the level of demand for this type of service and what kind of matching system might be most useful. They have developed a short survey to help them craft the service – go and let them know what you think!

We all need someone to talk to who isn’t part of our immediate team from time to time, even those of us who can’t really call ourselves young professionals any longer. This is a promising idea that deserves your support and input. To find out more, check out the blog announcement and questionnaire here.

The knowledge sharing toolkit

OK, I’ve written about this before, but there has been a wealth of updates since I last plugged it, and UNDP has now been added to the list of cosponsors and contributors. Here’s the spiel….

Join the CGIAR, the Food and Agriculture Organization of the United Nations (FAO), the KM4Dev Community, the United Nations Children’s Fund and the United Nations Development Programme in creating and growing the Knowledge Sharing Toolkit (http://kstoolkit.org), an excellent resource of knowledge sharing tools and methods.

It is a living wiki-based site where individuals from the sponsor organizations and beyond have written or pulled together materials about a wide range of knowledge sharing tools and techniques. It’s open to all to participate, whether you just want to consult the toolkit as a resource, or would like to add new material or improve what’s already there.

What can you do with the toolkit?

1. Use the Toolkit and share it w/ colleagues – the simplest step. You don’t even need to join the wiki to read it. Bookmark http://www.kstoolkit.org/ and tweet it out!

2. Improve an existing page – every page on the wiki is editable. All you have to do is join the wiki (upper right hand corner – you will have to wait for one of us to approve – we do this to keep out spammers), then go to the page you want to improve, click edit, and have a go! (See also http://www.kstoolkit.org/… )

3. Create a new page for a method or tool that is not yet in the Toolkit (see also http://www.kstoolkit.org/How+to+Make+a+New+Toolkit+Page) – go to either KSTools or KSMethods (the lists are in alphabetical order), click edit, write in the new entry in the appropriate alphabetical position, click on the link creator in the editor window at the top, and choose wiki link. The system will create a new link. Then click save. After the page reloads, click on the new link you made. That will take you to a page that has to be created (by you!). Then on that page select the KSToolkit template and start editing! (Yes, we built a template to make it easy.)

4. Comment on any page… just click on the little “comment” balloon on the upper right of any page – you have to be logged in though!

This is a common resource – so it is as good as WE ALL make it!

Written by Ian Thorpe

February 2, 2012 at 3:13 pm

Poker face: betting on development

with 2 comments

Duncan Green recently wrote “Honduras is building a charter city? This is never going to work“, which, as you can tell from the title, is rather skeptical about the attempt in Honduras to build a charter city along the lines proposed by Paul Romer. In the opening paragraph he offers readers a wager that it won’t work, such is his confidence.

Roving Bandit offers (kind of) to take him up on his challenge, correctly pointing out that, just as in poker, it is sometimes worthwhile betting on a long shot if the potential reward is big enough and you have enough resources to do so without losing everything.

“So what odds do you say Duncan? I’ll give you £10 if it fails and you give me a £1000 if it works?”

This is a very important observation, because if you never bet on a long shot you will rarely be willing to invest in anything new, untested and risky that could nonetheless yield large benefits – exactly the kind of challenge we face in development aid. (A quick sketch of the arithmetic follows.)
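
To make the arithmetic concrete, here is a minimal sketch of the expected value behind Lee’s offered odds – he pays £10 if the charter city fails and receives £1000 if it works. The probabilities below are purely illustrative, not estimates of the charter city’s actual chances.

```python
# Lee's proposed wager: he pays £10 if the charter city fails,
# and receives £1000 if it works. All probabilities are illustrative.
stake, payout = 10.0, 1000.0

# His expected value given a probability p of success:
# EV(p) = p * payout - (1 - p) * stake
break_even = stake / (payout + stake)
print(f"Break-even probability: {break_even:.2%}")  # ~0.99%

for p in (0.005, 0.01, 0.05, 0.10):
    ev = p * payout - (1 - p) * stake
    print(f"p = {p:.1%}: EV = £{ev:+.2f}")
```

In other words, at those odds the bet pays off on average if you think the charter city has even a one-in-a-hundred chance of working – which is exactly why long shots with big payoffs can be rational bets.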

That said, betting on a long shot in poker is different from doing it in the real world in two important respects:

  1. In poker you know whether you have won the hand – the rules of success are clear.
  2. In poker you might not know what cards the other player has, but you can calculate the odds and use them to guide your bids (at least according to those poker tournament shows they have on the teevee).

So going back to the Honduras charter city idea: although different people will have different opinions about the likely success of this enterprise, we don’t have a very clear picture of either the real odds or the potential payoff. In addition, if I were to take Duncan or Lee’s bet, how would I know if I had won or lost – what is the definition of success?

All this is a very roundabout way for me to come to the under-exploited potential of “prediction markets” for development. What if you could run a book on the potential outcome of a project investment? If you could get enough people to take bets on whether this (or any other project or initiative) will be successful, then you might get a much more accurate sense of the likely risk and potential benefit of the project. To do this you would also have to come up with a specific definition of success, in terms of what happens by when.
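
For a sense of the mechanics, here is a minimal sketch of one common prediction-market mechanism, a logarithmic market scoring rule (LMSR). The market maker quotes a running price that can be read as the crowd’s probability that the project succeeds; the liquidity parameter and the trade below are illustrative assumptions, not a reference to any real development market.

```python
import math

B = 100.0  # liquidity parameter: higher values make the price move more slowly

def cost(q_yes: float, q_no: float) -> float:
    """LMSR cost function over outstanding YES/NO shares."""
    return B * math.log(math.exp(q_yes / B) + math.exp(q_no / B))

def price_yes(q_yes: float, q_no: float) -> float:
    """Current market price of YES, readable as a probability of success."""
    e_yes = math.exp(q_yes / B)
    return e_yes / (e_yes + math.exp(q_no / B))

q_yes = q_no = 0.0
print(f"Opening price: {price_yes(q_yes, q_no):.2f}")  # 0.50 - no information yet

# A trader who thinks success is underpriced buys 40 YES shares,
# paying the difference in the cost function; the quoted price moves up.
trade = 40.0
charge = cost(q_yes + trade, q_no) - cost(q_yes, q_no)
q_yes += trade
print(f"Trade cost: {charge:.2f}, new price: {price_yes(q_yes, q_no):.2f}")
```

The attraction is that the final price aggregates many people’s private hunches into a single number – exactly that “more accurate sense of the likely risk and potential benefit” – though it all still hinges on defining success precisely enough to settle the bets.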

As well as being fun for people placing a little wager, this type of approach could also tell us a lot about where to place our investments in development. Interestingly enough, while we often take our pilot projects to be make or break, in reality the failure or success of a pilot, including the current one on charter cities, doesn’t tell us whether an approach is right or wrong – it just helps us re-evaluate the odds.

(for the record I agree with Duncan – but what do I know?)

Written by Ian Thorpe

December 23, 2011 at 8:50 am

Back to the future (2006): why too much aid harmonization might not be a good thing

leave a comment »

Back in early 2006 I wrote a short thought piece on aid effectiveness to stimulate internal discussion as I was beginning a new role combining work on aid effectiveness and knowledge management.

To acknowledge the Busan High Level Forum on aid effectiveness, which is taking place this week, I thought I’d re-share this piece, even though I’m no longer directly working on the issues being discussed. I’ve left the document as is, apart from removing some internal references that might not make sense to an external audience. It’s surprising to me that in some ways the discussions today are very similar to the ones taking place back then, and most of the issues raised still seem relevant to me.

The paper does miss a few issues which are hot now but which were not so much on our minds back then, such as aid transparency, the role of technology and how emerging donors fit within the aid architecture. My views have also probably evolved somewhat with the benefit of experience and hindsight. That said, I’d be interested to hear what you all think of the paper in today’s context. Here it is:

One size does not fit all – Why too much aid harmonization might be a bad thing - April 2006

Abstract:  The current push towards greater harmonization of  aid programmes both within the UN and by the development community as a whole has the potential to bring dividends in terms of greater coherence and efficiency in the delivery of aid. At the same time there is also a risk that too much harmonization will dampen innovation, reduce flexibility, and might result in everyone pursuing a single flawed plan. This article explores some of the risks of aid harmonization in more detail and suggests some ways in which they might be mitigated.

Reform and harmonization of development aid, together with a big push for additional resources, is the current recipe being advanced for finally achieving the long elusive goal of eradicating global poverty. The Paris Declaration on Aid Effectiveness, signed in 2005, launched an ambitious agenda which includes an emphasis on national ownership, national development plans and national execution, and, for donors, an alignment of programme cycles, targets, monitoring and aid conditionalities.

It was signed on to by a wide range of donor and recipient governments as well as the UN, IFIs and a number of NGOs, Foundations and Global funds. The UN Development Group is participating in this effort, and in addition is currently defining its own strategy for greater harmonization within the UN including a more simplified and integrated approach to country programming, linked to national development plans as well as a move to joint offices.

Although a multitude of international targets, goals and commitments have been created by various global summits and conferences, a common set of goals has emerged as the global reference point for development – the MDGs. At the national level, national development plans, which are also Poverty Reduction Strategies, are centre stage, and are for the most part closely linked to the global goals.

There are a number of potentially significant benefits of the current harmonization efforts (which is why they are being  pursued so strongly). These include:

  • Common, mutually agreed aims and targets to which everyone contributes – we all know where we are headed, and we are all working to achieve the same thing.
  • A common plan based on a common assessment and analysis of the needs (which is hopefully evidence based!).
  • International assistance is linked to national plans, and therefore to local needs rather than donor priorities.
  • Co-ordination efforts reduce programme overlaps and inconsistencies between the approaches, aims and cycles of different agencies. This reduces wasteful duplication, and helps generate information sharing and programmatic synergy between the different aid actors.
  • Common institutional assessment, reporting and accounting requirements for budget support and national execution reduce the reporting and accounting burdens on recipient governments.

But, there are some significant risks involved in harmonizing too tightly around all aspects of the development process. To paraphrase William Easterly, the aid business is the only area where we still have faith in central planning, long after it has been discredited as an approach by the end of the cold war, and despite little evidence to show it has fared better in development work. This might be overly harsh but there are always dangers of putting too many eggs (or funds) in one basket. So why exactly might overly harmonized planning not work?

Here are a few reasons:

  1. Too many meetings: First of all, on a practical level the co-ordination of all the various development actors including the UN, bilateral donors, IFIs, NGOs, government and civil society creates an enormous burden of co-ordination work. There is clearly a trade-off between time spent on co-ordination and time spent on implementation. Getting the right balance is tricky and current efforts might be tipping the balance unfavourably.
  2. The dangers of group think: Bringing a large group of development partners together with different capacities and agendas to agree a common plan is always going to produce an imperfect result. Often the loudest, most eloquent or best financed voices exert the greatest influence. There is pressure to reach consensus on issues when in reality none exists for the sake of reaching an agreement. Pressure to reach agreement risks producing lowest common denominator compromises that in fact satisfy no-one. Some key voices and issues may well go missing – especially those concerning the poor, the marginalized and the young.
  3. Repeating the same mistakes over and over: Assume you have one plan, based on one analysis. What happens if the analysis is flawed in some critical way? (Not impossible, given the complex problems to be addressed and, in many cases, the uncertainties or even lack of data and research about them.) Then everyone is following the same flawed plan: efforts will either all point in the right direction or, more likely, all in the wrong one, and the resulting development effort will be universally bad.
  4. Think local, act local: Some problems are simply better resolved with a bottom up approach by people finding their own ways of addressing the problems they face, in their own context. External models and expertise, and centrally mandated solutions cannot always be successfully applied to a situation. Local context is crucial, as is innovation and flexibility – things that work better at a smaller scale than at a larger one.
  5. Government “good”, government “bad”: an unintended consequence of having a common accountability framework for national execution is that either everyone is “in” channeling funds through government, or everyone is “out” when corruption, human rights issues or other weaknesses are detected. In practice this makes the funding of budget support highly volatile as money is either abundant or non-existent – hardly an environment that helps strengthen institutions. At the same time corruption may be harder to identify as a government entity only has to ace a single assessment tool.
  6. Slow to change: any plan that needs to be negotiated with myriad of local and international partners, and needs to find consensus will not easily be able to adapt to changes in the situation, whether they be emerging crises or new opportunities.

Despite this doom and gloom, I don’t believe all is lost. There are great potential gains from harmonization, but a few precautions are needed to mitigate some of the risks. Here are a few suggestions:

  1. Common plans should not be too prescriptive. It might be good to agree on a common set of high level goals, but not on highly detailed objectives or on specifications of how they should be pursued. Detailed planning should be transparent but is best left to those closer to the implementation.
  2. Open communication and sharing of ideas within the country and between countries should be promoted – even if this is not directly in support of the plan. It’s useful to provide a platform and to create opportunities for sharing of development knowledge – but this sharing needs to be organic, based on the needs and interests of those who participate – not centrally controlled to serve the plan.
  3. We need to constantly ask, through research and evaluation, whether the plan is really working, what aspects work and which don’t and why. Preferably some of this evaluation is done by people who are independent of those who are carrying out the plan. And, of course, there needs to be a mechanism to ensure that what is learned is fed back into the decision-making process.
  4. Feedback should be sought from beneficiaries and development actors on a continual basis in the design and execution of the plans – for accountability and transparency but also to get as complete a picture as possible of the situation on the ground. This feedback needs to encourage, not silence dissenting views, in order to continually challenge the assumptions of the plan, and to explicitly acknowledge diverse viewpoints where there is no consensus.
  5. Plans need to have flexibility built into them to allow for the incorporation of new research, changes in the situation, or even changes in the availability of resources.
  6. There needs to be some space for grassroots initiatives to emerge. Ideally this would involve setting up a regulatory and financing system that allows natural solutions to evolve, and funds promising new ideas, even if they weren’t in the original plan.
  7. Some development actors should stay out of the central planning loop. It’s good if there are a few NGOs, private sector organizations or others that are not following the same plan as everyone else. They can help naturally fill gaps that were missed in the plan and they can innovate and take risks and thus help alternative views and models to emerge (if they have merit they can then be co-opted by the plan). Some outside critical views also help improve accountability.

What role can the UN play in these discussions? I think we can do a few practical things such as ensuring our own programming is evidence-based, is flexible and includes an element of experimentation with new approaches, is evaluated and allows for course correction. Another must-do is to ensure that the key issues get sufficient attention in discussions around the design and implementation of the national development plan – here it’s important not only for us to have a voice, but also that we ensure that civil society, and most importantly families and communities have an input into the process. In addition to this we need to work with civil society to empower them to be a critical “outside” voice that calls government and donors to account for progress made or not made.

In practice though, aid harmonization is proving more difficult to implement than anticipated. Given the diverse interests of the various development actors, it’s unlikely that all development actors in a country will be able to agree to follow a common set of goals, plans and methods any time soon!

[end..]

(P.S. It goes without saying that i) this is only my own writing and does not in any way represent the views of the UN, and ii) it was written back in 2006, in a different time, when I was in a different role.)

Written by Ian Thorpe

November 29, 2011 at 4:39 pm

Posted in smartaid

Who reads development blogs and why?

leave a comment »

I’m just back from vacation and also preparing for an imminent job move (will share more information soon) so to ease back into blogging here’s a little request….

Those of us who are writing aid/development related blogs would like to know who is reading them, what you read, how you read it and why.

Smartaid pal Dave Algoso (author of the excellent development blog Find What Works) has put together a nice, short questionnaire. Please take a few minutes (it takes less than 5) to complete it so that us bloggers can learn more about our audiences and then, with any luck, make our blogs better. What’s more, the results will be shared so that both bloggers and blog readers can benefit from Dave’s analysis – a useful contribution to the blogging knowledge pool.

Here’s a link to the questionnaire.

Please take a mo to fill in the questionnaire, and if you have a blog please also post a link. Thanks!

Written by Ian Thorpe

September 7, 2011 at 3:13 pm

Posted in smartaid
