KM on a dollar a day

Musing on knowledge management, aid and development with limited resources

Keeping the end in mind



(Image: @babs26)

In the UN we are used to setting big, audacious goals to change the world, whether it be halving child mortality, eradicating extreme poverty or empowering the poor to have a say in how their government is run.

At the same time, by ourselves we have limited means to achieve these bold goals, so we rely heavily on our power to convene and to persuade others to do what is needed. The problem is compounded by the fact that for some of these problems we have a fairly clear idea of who needs to do what and how, but for many of them, even when we have ideas and some evidence, there is no blueprint for success. Think of the now much-discussed idea of reducing inequality: there is a growing consensus around its importance, but we don’t even agree on how to measure it, let alone on what approach we should take to achieving it.

But if you don’t have challenging goals then you have no sense of direction and no way of knowing whether you are on the right track. And if your goals are too modest and simple, then you are probably not trying hard enough at what we are ultimately trying to do: make the world a better place.

Let’s drill down a bit into how we work in development agencies to try to make these goals a reality. Given the importance of the goals themselves and the amount of money and effort required to achieve them, there is an increasing focus on “managing for results”. This is both understandable and for the most part welcome. If we are to make the case for aid, we need to be able to prove whether or not it works (or, more likely, when and where). And if we are to ensure that our projects are managed well and that all partners are accountable for delivering their part of a complex puzzle, then we need systematic tools to monitor how we are doing: both to report on whether we are on track and spending money wisely, and to flag problems and make course corrections when needed.

There are also a number of critiques of the results focus and Results-Based Management, some of which I’ve aired before on this blog. But there is a particular challenge I’ve seen time and again in aid work that isn’t a flaw in the approach itself, but rather in how we apply it.

When we develop our results chains or logframes for a project we invariably end up with a workplan of discrete activities with budgets and responsibilities assigned to them. We usually have some type of monitoring framework with indicators and baselines to accompany it, perhaps including some specific research, evaluation or data collection tools to keep it up to date. If we’ve done a good job our plans will also identify the assumptions that need to hold in order for the activities to deliver the outcomes we are expecting, or if we are getting fancy we might even have an articulated “theory of change” that more clearly explains the link between the activities and the desired outcomes.

So far so good. But then we get to execution.

In many, many projects I’ve seen, the focus of monitoring shifts quickly to implementation: have we carried out our activities as planned? Have we spent our budget? And, we hope, did the activities deliver the outputs we were expecting? But once we are deep in the day-to-day management (and monitoring) of execution we tend to forget about the end goal. We start to care more about whether we delivered our training workshop and spent our budget than about whether we actually built capacity, or whether that capacity is performing the role we originally intended.

If we are then asked whether our project is successful we can confidently assert that it is, because we carried out all our activities, spent all our budget and have something visible to show for it. But in doing so we often fail to cross-check our outputs against the desired outcomes and impact. And if there is a gap between the outcomes and where we expected to be, we often don’t focus enough on understanding why – in particular, on looking to see whether our assumptions and theory of change were correct, or whether circumstances have changed so that what seemed right at the beginning no longer holds true.

Looking at why our well-executed activities didn’t lead to our desired outcomes is difficult, which is why we do it less than we should. In particular it’s easy to hide behind the assumptions – especially those of the type “This assumes that [name of external partner] will effectively carry out complementary activity [X] and provide additional financing [$Y]”. But rather than being a licence to blame lack of success on others not doing their part, reducing uncertainty around the external assumptions in the logframe should be considered a key success factor for a project and something to be regularly monitored. In reality the path to success is rarely linear: we can’t be sure our theory of change is correct or doesn’t need to be adapted to context, and we can’t be sure that circumstances won’t intervene that require us to change tack.

A couple of practices from audit and evaluation that are intended to foster systematic learning and improvement inadvertently contribute to this. In audits, and increasingly in evaluations, there is a requirement to develop and implement a “management response” outlining how the project or office being reviewed will take action on the recommendations of the review. This seems eminently sensible, as it holds managers accountable for reading, considering and acting on the findings of an external review. But the downside (and I’m basing this on several experiences) is that the response is usually a list of actions to implement, and the measure of success is whether those actions are adequately implemented, not whether they actually solved the shortcomings that the audit or evaluation identified. In other words, they fall into the precise trap that carrying out an external review is designed to avoid.

So what to do about this? We need to find ways to shift our internal accountability mechanisms away from monitoring and rewarding the implementation of activities, the spending of resources, or even the delivery of outputs, and towards the contribution made to outcomes and impact. To help achieve this we also need to focus more on developing and challenging our assumptions and theories of change, and on designing projects to minimize the external factors that put results at risk – or, perhaps better, to build our programmes to be more adaptive to changes in external influences over which we have little control. We can only do that if we are not too tied to rewarding unthinking but efficient delivery of our existing workplans.

At a basic level, what is called for is to keep a focus on the end goals we are trying to achieve even when we are bogged down in the minutiae of delivery – or at least to keep raising our heads above the fray to ask ourselves whether our execution still makes sense given where we want to go and where we are right now.

Written by Ian Thorpe

November 11, 2013 at 10:38 am



