KM on a dollar a day

Musing on knowledge management, aid and development with limited resources

Archive for August 2013

Is there such a thing as evidence-based delivery?


I started writing this blog post in frustration at hearing once again the mantra in the aid world, and particularly in the UN, about the need for us to be doing “evidence-based policy and programming”, along with the discussions and laments about why we don’t yet have this and what to do about it. But then this morning I saw two excellent blog posts by Kevin Watkins of ODI and Owen Barder of CGD on Jim Kim’s “Science of delivery” which seem to touch on some of the same frustrations.

It seems to me that at least part of why this discussion remains frustrating is that we don’t agree on what evidence is, and what its role should be in policy or programming.

At first glance it might seem obvious that our policies and programmes should be “firmly based on the best available evidence”, and that wherever the best available evidence isn’t good enough we should be doing more research to fill the gaps. And it can be tempting to jump to the conclusion that we need, on the one hand, to invest more in research, carefully designed experiments (such as RCTs) and independent evaluations, and on the other to put in place incentives or rules that encourage or even force policy makers and programme managers to use the evidence that is available.

While all this might be useful, I don’t think it will be the silver bullet that some expect. If this is all so self-evident, it raises the question of why we don’t already practice what we preach. Why isn’t aid work primarily, if not exclusively, evidence-based?

Firstly, however much we invest in research and evaluation, there are limits to what we will know (see my earlier post “The truth is out there” for a more detailed discussion of the limits of what we can know). Some interventions lend themselves to rigorous experimental design or data analysis (e.g. cash transfers, medicines) while others are harder to run as experiments (e.g. policy reforms) or to research cost-effectively.

Secondly, incentives to use, or not to use, evidence also matter. It’s not enough to improve the packaging and dissemination of new knowledge, or to make using knowledge part of procedure – there are many other factors, both individual and institutional, that govern whether or not knowledge is used, e.g. a focus on implementation rates reduces the incentive to take the time to learn from research or other experience. (See my earlier post “Creating a demand for knowledge?” for more on the challenges of getting people to use what is already available.)

Thirdly, context matters. In most cases it’s simply not enough to copy a “best science-based practice” exactly from one location and directly replicate it in another. At best the approach will need to be tailored or adapted to the new context; at worst it may not work at all. Kevin Watkins mentions the importance of politics, and how different political structures and power relations can have an important impact on how effective a “scientifically sound” approach might be in practice – and you can’t factor this into an experimental design. Owen Barder also highlights how the traditional “evidence-based approach” fails to deal with complexity, i.e. that there are complex and dynamic relationships between the context and the programme which evolve together in unpredictable ways, for which a static approach to using evidence and best practice is ineffective.

But I’d add an additional dimension to the issue of context and complexity. Programmes are implemented by individuals, not just by institutions, and individual people bring their own layer of complexity to a problem, one that is hard to measure or control. Different project managers performing the same role can have different levels of technical skill, but also different personal motivations and different personal relationships with other key players in the project – and these can interact with the project in unpredictable ways. What’s more, we tend to overlook the fact that someone may perform well in one context with one team yet perform poorly in another situation – so it’s impossible to standardize exactly how a project team will work on a project unless we remove all elements of judgment and unpredictability, transforming programme management into a production-line function.

Yet I don’t think all is lost. There are some additional things we can do to improve how we learn and implement, apart from investing more in research. Here are a few of them:

1. Take the broadest possible view of what “evidence” means. Evidence might be rigorous research, but it might also be case studies, stories, ethnographic studies, or analysis of “big data”. Take a broad view, but recognize what each type of evidence is useful for and what its limitations are. Don’t overlook key areas such as political context and power analysis.

2. Do more pilots and dynamic experiments – encourage more experimentation, not only in the sense of controlled experimental designs but also in the sense of coming up with lots of possible ideas, trying them out, adapting them as you go and building on those that yield results.

3. Start from, but don’t blindly copy, past practices – encourage use of existing knowledge and experience, but as a starting point to be built upon and deliberately and continually adapted.

4. Encourage ongoing collaboration and sharing of tacit knowledge between practitioners as well as sharing of explicit research or evaluation reports and results.

5. Work with and learn from beneficiaries  – they will often have insights into why something does or doesn’t work, or what might be done to address a problem that an outsider cannot see.

6. Keep a diary with detailed ongoing records of what is happening on the ground, covering both internal and external factors, to help generate enough material to get useful insight into what is working and why. Encourage self-reflection by those involved in the project based on this information. You can also use it as a key source for more rigorous independent analysis.

7. Work on incentives – make sure incentives are in place to generate and use not only evidence but also personal insights and experience.

8. Adapt – rather than looking for an idealized approach to solving a problem once and for all  – keep searching for improved ways to solve it that work in the context where they are applied and keep modifying and varying your approach based on what you are learning.

I’m not sure if delivery can ever be entirely evidence-based – but evidence-informed and learning-based would be a good start.

Written by Ian Thorpe

August 14, 2013 at 11:39 am

Dear Diary (on the importance of keeping a journal)


(Image: Captain’s log, supplemental)

I’ve seen several articles and blog posts and had several meetings lately that inspired me to write about the value of keeping journals, whether personal, project-based or public.

Whenever we are managing a project we almost always have some kind of formal or informal monitoring and evaluation attached to it, even if it is something as simple as a workplan with timelines, deliverables and a few indicators to track progress, plus some form of end-of-cycle report. Sometimes we might have a detailed dashboard of indicators and a comprehensive end-of-project evaluation.

Monitoring, review and reflection are important inputs to learning and to making necessary course corrections. But a challenge with traditional monitoring and evaluation systems is that while we might be regularly looking at monitoring data, we generally only perform the “reflection” part of learning at reporting periods or at the end of a project cycle. And quite often we focus on “just the facts”, without reflecting on why we did things, how we were feeling about them, or the impact of external events on our plans.

It’s well known that our memories are not entirely reliable. If we are asked to recall what happened in a project we were working on, we are quite likely to forget important facts, but more importantly we are very likely to forget why we did something and how we thought or felt about the project at the time, or to attribute different motivations to our actions, or different explanations to outcomes, with the benefit of hindsight. We might forget some small but important thing we did, or important external environmental influences that affected us and our work. Also, when we are asked to recount what happened in a project we are prone to recreate the story to fit our own preferred narratives, both consciously and unconsciously, especially to downplay mistakes and to impute insight where there was none.

One way to help overcome this, to improve the data we have on how a project worked or didn’t, and to improve our knowledge of ourselves and how we work, is to keep some sort of diary or journal. Ideally this should include not only what was done, but also what happened, how you felt about it, why you did what you did, and any important external events or influences that might be relevant. This can give us some extremely valuable data later on when we come to reflect on or even formally evaluate our experience. The act of keeping a journal can also make us more explicitly aware of what is happening with our work, since writing things down requires a small moment of reflection – yet quite often we are so busy “doing things” that we don’t find the time. It also helps you track the evolution of a project, and your thinking about it, over time.
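To make this concrete, here is a minimal sketch of what one journal entry might look like if captured in a structured, machine-readable form for later analysis. The field names and the one-JSON-object-per-line storage are purely my own illustration, not a standard or any particular tool:

```python
# A hypothetical sketch of a structured project journal entry.
# Field names and format are illustrative only -- adapt them to your project.
import json
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class JournalEntry:
    day: str                    # ISO date of the entry
    what_we_did: str            # actions taken
    what_happened: str          # observed results, expected or not
    why: str                    # the reasoning behind the actions at the time
    feelings: str               # how we felt about it
    external_events: list = field(default_factory=list)  # e.g. elections, staff changes

def log_entry(entry: JournalEntry, path: str = "project_journal.jsonl") -> None:
    """Append one JSON object per line so the journal is easy to analyse later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

log_entry(JournalEntry(
    day=date.today().isoformat(),
    what_we_did="First training session with district staff",
    what_happened="Turnout was half what we planned for",
    why="We wanted to test the materials before full rollout",
    feelings="Frustrated, but the feedback we did get was useful",
    external_events=["local election campaign under way"],
))
```

A free-text notebook works just as well for personal reflection; structure mainly pays off when you later want to compare entries over time or across projects.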

A few interesting applications of journaling have come up recently. In his recent blog post “Learning on a rollercoaster”, Chris Collison proposed a learning approach in which project teams reflect on the emotional rollercoaster of a project life-cycle as a means to identify key learnings from the positive and negative experiences.

Another interesting idea, shared with me by a colleague, is the “One Second Every Day” TED talk by Cesar Kuriyama, who records one second of video every day of his life as a way of documenting it, to help him recall it better later, both in terms of events and feelings. As a side effect, this also motivates him to make sure he does something memorable each day.

In another example, I was speaking with a consultant working on a pilot monitoring system with WFP which helps track programme progress within a sector globally and learn from it on an ongoing basis. There’s much more to the approach than I can describe here, but an interesting and unique aspect is that project managers are asked to keep a log of key external events (e.g. elections, civil unrest or new laws) and internal events (project activities, technical inputs, staff changes etc.) over the year. At the end of the year they then try to attribute (subjectively, of course) what proportion of the project outcomes, positive and negative, were attributable to the different internal and external events they had recorded, with the idea of getting a sense, across different projects, of how different factors affect the evolution of project outcomes. While this doesn’t substitute for formal evaluation and research, it can help capture different project managers’ views on which actions are most critical to project success (and aggregate them across projects), and similarly how external events influence outcomes over time.
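Since the WFP pilot itself isn’t described here in any detail, the following is only a hypothetical sketch of what the aggregation step might look like: each project assigns subjective shares of its outcomes to the factors it logged, and those shares are averaged across projects. All factor names and numbers are invented:

```python
# Hypothetical sketch of aggregating subjective end-of-year attributions
# across projects. The real WFP system is not documented here; all factor
# names and shares below are invented for illustration.
from collections import defaultdict

# Each project assigns a subjective share of its outcome change to the
# internal and external factors it logged during the year (shares sum to 1).
project_attributions = {
    "project_a": {"staff turnover": 0.2, "new law": 0.3, "training inputs": 0.5},
    "project_b": {"staff turnover": 0.4, "civil unrest": 0.1, "training inputs": 0.5},
}

def aggregate(attributions):
    """Average each factor's attributed share across the projects reporting it."""
    totals, counts = defaultdict(float), defaultdict(int)
    for shares in attributions.values():
        for factor, share in shares.items():
            totals[factor] += share
            counts[factor] += 1
    return {factor: totals[factor] / counts[factor] for factor in totals}

for factor, avg in sorted(aggregate(project_attributions).items(),
                          key=lambda item: -item[1]):
    print(f"{factor}: {avg:.0%} average attributed share")
```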

Another example is how some UNDP offices in the Europe and Central Asia region are using TimelineJS as a tool to capture and display project timelines – a better way of telling the story of a project, both to communicate it (people love stories and timelines as a way to consume information) and to help the participants understand it better.
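For the curious, here is a rough sketch of the kind of JSON data TimelineJS can be fed (shown in the TimelineJS3 format, which may differ from the version in use at the time – check the current documentation). The events are invented, not from a real UNDP project:

```python
# A sketch of timeline data in the TimelineJS3 JSON format (an assumption;
# the tool's format has changed between versions). Events are invented.
import json

timeline = {
    "title": {"text": {"headline": "Example project timeline"}},
    "events": [
        {
            "start_date": {"year": 2013, "month": 3, "day": 1},
            "text": {"headline": "Project launch",
                     "text": "Kick-off workshop with implementing partners."},
        },
        {
            "start_date": {"year": 2013, "month": 6, "day": 15},
            "text": {"headline": "Mid-course correction",
                     "text": "Workplan revised after low uptake in pilot districts."},
        },
    ],
}

# Save the file for TimelineJS to load.
with open("timeline.json", "w", encoding="utf-8") as f:
    json.dump(timeline, f, indent=2)
```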

Other tools, such as the real-time project reporting from the ground done by Akvo and by Global Giving, help make project reporting more human and more interesting, as well as more up to date. In these approaches project reporting is much more frequent, but also much less intense and detailed, and more varied in format and content. Over time this gives a very rich source of data that can also be used for meta-analysis to better understand project evolution and larger trends and issues, precisely because it contains more (not less) anecdotal information that, when looked at en masse, gives valuable insights into how things are working.

And lastly, blogging is itself a form of journal keeping – directly, if you literally write down what you are working on (“living out loud”), but also of your thinking and its evolution if you blog about your ideas as well as your direct work. I find it interesting from time to time to reread earlier blog posts to see whether time has solidified my opinions or led me to change my mind, and the blog serves as a record of the range of different thoughts and ideas I have had which would otherwise go undocumented and probably be forgotten.

So keep a diary! Write down what you do, what is happening in the outside world, what positive and negative outcomes there are and how you are feeling about it. It will be an invaluable resource for learning later on, for your organization, but mostly for yourself.

Written by Ian Thorpe

August 12, 2013 at 9:00 am

