KM on a dollar a day

Musing on knowledge management, aid and development with limited resources

Knowledge management and traditional evaluations – benefitting from both efforts (part 1)

with 8 comments

This morning I’ll be presenting at a panel discussion at the UN Evaluation Group Annual Meeting – Evaluation Peer Exchange on “Knowledge Management and Evaluation”.

I’m writing up this session as two blog posts – this one, which contains my talking points and which I’m putting live at the same time as the session, and part 2, which will be my reflections on the session as a whole, including the other presentations and the audience discussion.

Part 1:

I started out my UN career in the UNICEF Evaluation Office in 1996 (is it really that long ago?) – but ended up drifting first into information management, then communication and finally knowledge management, so I’ve seen both sides of this discussion up close.

For me they are two valuable and potentially complementary approaches – but practitioners of one often aren’t able to see things through the eyes of the other, and we don’t always make the best use of the two approaches in combination.

First, I’ll start with what the two approaches have in common. In essence both are about “learning” – finding out what works and what doesn’t in order to understand and learn from our experience, feed what we have learned back into the project we are managing, and extract any usable knowledge that might inform other projects or even our overall organizational policy or approach.

But there are also a number of important differences. This might be a bit over-generalized, but a number of key distinctions can be seen in this table:

Evaluation | Knowledge Management
Explicit focus on (donor) accountability | Limited implicit focus on (beneficiary) accountability
Objective | Subjective
Observation | Reflection
External – done by evaluators | Internal – done by programme managers
Meticulous | Messy
Focus on explicit observable knowledge | Strong focus on tacit knowledge, know-how and experience
Slow | Quick
Expensive | Relatively cheap
Data focused | People/relationship focused
Independent of programme | Integrated as part of programme
Selective/strategic | Ubiquitous
Macro issues | Micro issues

Each of these then lends itself to different types of questions and situations. For example, if you want quick, cheap feedback on how your project is doing, for immediate action, then you might be better off using knowledge management tools such as after-action reviews or peer assessments. You can’t evaluate everything because it would be too resource-intensive – and by the time you have completed an evaluation the situation might already have changed. But at the same time, if you want to explore in scientific depth what happened in a programme, or you want solid evidence to justify scaling up a new approach, then an evaluation will give you something much stronger than a KM process.

But it’s also interesting to compare the results of the two approaches for the same project or type of work – do they back each other up or do they contradict each other? Do they give you contradictory or complementary insights into what is working in a programme and what isn’t?

It is also interesting to compare their impact on use and policy. Evaluations are often called for as a way of assessing pilot initiatives or as a funding condition – but because of their external, arm’s-length nature, while they might be good for supporting a particular advocacy position or funding discussion, they are often less useful as a tool for internal learning, in terms of feeding back into the programme or into practice in general. Knowledge management tools, if done well, are likely to have a greater impact on learning because they are usually done by the people running the programme themselves or by professional peers doing similar work. The Management Response is a step forward in the sphere of evaluation in that it ensures people formally respond to an evaluation – but the risk is that people will just go through the motions, especially if they don’t really agree with the observations.

But how might these approaches be usefully combined? One way is to make use of knowledge generated through KM processes in the evaluation itself. In particular, knowledge networks can help evaluators formulate the right evaluative questions, bring in relevant experience and comparators during the evaluation, validate the findings with practitioners, engage the “community” around the implications of the findings for their work, and support the programme managers in implementing the follow-up. To be useful, therefore, action on the follow-up to the findings should be started before the report is finalized.

Another aspect is that if you keep getting similar conclusions about a systemic issue from informal KM processes, you might want to follow this up with an evaluation to analyze the issue more rigorously and to help identify and justify whether, for example, a policy change is needed. Or you might try to develop a system to collect feedback and reflections (both internal and external) throughout a project and then aggregate these as a primary information source for your evaluation – which would also mean that you could produce evaluation conclusions at any stage of a project once enough data has been collected, rather than being forced to rely on a single episodic assessment.

Another is methodological exchange. On the one hand, evaluation can (and does) adopt techniques from knowledge management – for example, the use of storytelling, including audiovisual techniques, to collect records of the programming process or of the impact on, and views of, programme beneficiaries. This can be informative in its own right – and if scaled up it can also be made more representative – and the materials collected can be very useful in supporting the dissemination of the formal findings, since whatever policy makers say about what they want, in practice they react more to real-life stories than to dry analysis alone.

My last point is that evaluators need knowledge management for evaluation work itself, i.e. to reflect on and learn from their own experience (without needing to formally evaluate themselves), and to network with, learn from and provide peer support to other evaluators, such as through the community of practice that is being developed by the UNEG KM group. And knowledge managers need you – the results of knowledge management are notoriously difficult to quantify and to evaluate, posing a number of methodological challenges – and too often we shy away from doing so, yet we need to do this to be able to improve and advocate for our work. Evaluators, help us out!

Written by Ian Thorpe

April 15, 2013 at 9:30 am

8 Responses

  1. This is very good, Ian. Very relevant to what we are doing in KM and knowledge services these days.

    Do you mind if I reference this and your follow-up post in the course I’m teaching for Columbia University’s M.S. in Information and Knowledge Strategy program (Management and Leadership in the Knowledge Domain)?

    This would be very valuable to our students.

    Good luck with your presentation.

    Thanks.

    Guy

    smrknowledge

    April 15, 2013 at 10:41 am

  2. Ian, this is an excellent observation of KM and M&E. While KM can be done quickly and easily, I think project managers should take the responsibility to do after-action reviews and write learning blogs after certain events. In a sense, it is about working out loud too, as most projects limit themselves to press releases and some top-down communication only. Let me give you our example from the Rio+20 Dialogues: right after the event, our manager shared with us the template for the after-action review and we organized a brainstorming session to identify what went right and what could be done better. As a result, we came up with a list of things which were very helpful in identifying our strategic steps for the next events.
    Cheers, Ifoda

    ifodakhon

    April 15, 2013 at 5:35 pm

  3. Like the way you have laid out the distinctions in the table – I think this makes it clear for the non-expert to understand the fundamental differences.

    Many thanks

    Sue Waller

    April 16, 2013 at 10:11 am

  4. Ian, you should really come to KM4Dev in Seattle in July – many of us are interested in this! I think Patricia Rogers from BetterEvaluation.org wants to host a session on this juicy intersection!

    Nancy White

    April 16, 2013 at 11:59 pm
