KM on a dollar a day

Musing on knowledge management, aid and development with limited resources

KM, M&E, the art and science of delivery

I was meeting with a KM team from another UN agency a couple of days ago when the conversation turned to two interesting and related questions:

1. What is the relationship between knowledge management and monitoring and evaluation?

2. To what extent should the focus of knowledge management be about improving the use of academic or scientific knowledge in development work?

Many organizations, especially in the UN, are linking monitoring and evaluation with knowledge management, both in terms of content and in terms of organizational structure. This is both an opportunity and a challenge. On the one hand, information from monitoring and evaluation processes is a critical input to knowledge management processes, and knowledge management tools and techniques can in turn help support better monitoring and evaluation. At the same time there are subtle differences between the two approaches (see my past blog post comparing KM and evaluation for more details). A major difference is that monitoring and evaluation has a greater focus on accountability: is the project on track, were the outputs delivered, was the money spent well, did the project have the desired impact? Knowledge management focuses more on learning and reflection, and on how to share what is learned with other projects.

The two are in fact very complementary, and they require some overlapping skills, such as the ability to collect, analyze, summarize and interpret data. But knowledge management practitioners put a greater emphasis on “soft” skills: understanding human psychology and group dynamics, networking, and both interpersonal and mass communication. Another challenge is that if staff across the organization see the KM team as part of the “results” or performance monitoring team, they may be less likely to trust it with their stories of failures and setbacks in case these are “used against them”. Yet a large part of learning requires candid reflection on positive and negative experiences in a safe environment.

In practical terms, having M&E and KM sit and work together can be very effective in building a more complete picture of organizational knowledge and how to leverage it. But this only works if the KM people are allowed to function like KM people, and if their different role and skill set is respected rather than subsumed into an overall monitoring and accountability system. Similarly, KM people need to work closely with the human resources, communications and technology teams across any organization, so the structure and working methods need to allow for that. (I hope to write a future blog post about the pluses and minuses of locating the KM function in different parts of an organizational structure, including communications, staff development, IT, the executive office, programmes etc. Unsurprisingly there is no “best” approach, as each has its advantages and disadvantages.)

Taking the second question: an increasing refrain heard in aid agency strategic plans, or government plans for that matter, is that they need to be more “evidence-based”. On one level this is a no-brainer: if you have knowledge about a problem, how it is caused and what strategies are effective in tackling it, then why wouldn’t you use it?

A more interesting question might be to look at why the available evidence isn’t being used, or what limitations, if any, there are on what you can determine from research and then apply to problems in the real world.

Research and the scientific method are very powerful, and woefully underused, in identifying which types of technical interventions work well in development and under what conditions. It makes sense to use experimental research techniques to determine which interventions are most effective, and to tweak their design to improve their efficiency, both at a general level and at a local level to adapt them to context.

But in every project there is a certain amount of “knowledge” needed for successful implementation that isn’t easily measured through a scientific approach and that can’t be applied in a standardized fashion across different contexts (see an earlier blog post of mine, “The truth is out there, or maybe not”, for more discussion of the limits of applicable scientific knowledge). The most important of these are politics, culture and personalities (i.e. the actual people involved in implementing the project, or who are critical to its success). Dealing effectively with these issues requires a combination of local knowledge, experience, qualitative research and insights, peer assistance and advice, and flexible adaptation. This is where knowledge management techniques such as communities of practice, peer assists, after-action reviews, and even lessons-learned databases and expert rosters come into their own. Similarly, innovation tools such as human-centred design and rapid prototyping can be put to use on those aspects of project design and implementation that don’t readily lend themselves to rigorous research, or for which standardized approaches can’t easily be designed.

Again, soft KM techniques, personal judgment and expertise, and hard science should not be seen as competing approaches but as complementary ones. The challenge is figuring out how to combine them effectively: which approach to use when, and what to do if they generate seemingly different conclusions. For me, this is an area we could do with thinking more about. Yes to doing more and better research on development approaches, and yes to doing more to put the conclusions in the hands of decision makers and to persuading them to use them. But at the same time we need to think more about how to tackle the soft side of the “science of delivery”: how we adapt approaches to make them successful in a local context, taking account of politics and power as well as culture and social norms; how we manage the people side of a project effectively; how we continually adapt our programmes to deal with the changing situation on the ground; and how we ensure that we are constantly learning from each new experience and incorporating that learning into future programmes. This is the “art-meets-science” of delivery, where we still have a lot to learn.

Written by Ian Thorpe

October 31, 2013 at 9:54 am

Responses

  1. Thanks Ian, a great read.
    It reminds me of a current piece of work I’m doing with a client who wants to review and learn from a $2Bn construction programme. We scoped out a learning approach, some suitable open questions, and a framework for capturing and preserving the outputs and messages for the future.

    What has been interesting is that one of the members of staff working to capture the stories is a former auditor. He has spoken openly about his internal conflict: the urge to steer the dialogue too quickly in a certain direction and drive towards accountability and action, rather than focusing on surfacing the stories and curating the narrative, recommendations, connections and artefacts for the next team.

    With some coaching, he’s doing a good job, but it’s been an interesting journey for him.
    Sound familiar?

    chriscollison

    November 3, 2013 at 10:05 am
