KM on a dollar a day

Musing on knowledge management, aid and development with limited resources


Knowledge management and traditional evaluations – benefitting from both efforts part 1


This morning I’ll be presenting at a panel discussion at the UN Evaluation Group Annual Meeting – Evaluation Peer Exchange on “Knowledge Management and Evaluation”.

I’m writing up this session as two blog posts – this one, which contains my talking points and which I’m putting live at the same time as the session, and part 2, which will be my reflections on the session as a whole, including the other presentations and the audience discussion.

Part 1:

I started out my UN career in the UNICEF Evaluation Office in 1996 (is it really that long ago?) – but ended up drifting first into information management, then communication and finally knowledge management, so I’ve seen both sides of this discussion up close.

For me they are two valuable and potentially complementary approaches – but practitioners of one often aren’t able to see things through the eyes of the other, and we don’t always make the best use of the two approaches in combination.

First, I’ll start out with what the two approaches have in common. In essence both are about “learning”: finding out what works and what doesn’t in order to understand and learn from our experience, feed what we have learned back into the project we are managing, or extract any usable knowledge that might inform other projects or even our overall organizational policy or approach.

But there are also a number of important differences. This might be a bit over-generalized, but a number of key distinctions can be seen in this table:

| Evaluation | Knowledge Management |
| --- | --- |
| Explicit focus on (donor) accountability | Limited implicit focus on (beneficiary) accountability |
| Objective | Subjective |
| Observation | Reflection |
| External – done by evaluators | Internal – done by programme managers |
| Meticulous | Messy |
| Focus on explicit observable knowledge | Strong focus on tacit knowledge, know-how and experience |
| Slow | Quick |
| Expensive | Relatively cheap |
| Data focused | People/relationship focused |
| Independent of programme | Integrated as part of programme |
| Selective/Strategic | Ubiquitous |
| Macro issues | Micro issues |

Each of these then lends itself to different types of questions and situations. For example, if you want quick, cheap feedback on how your project is doing for immediate action, then you might be better off using knowledge management tools such as after-action reviews or peer assessments. You can’t evaluate everything, because it would be too resource-intensive – and by the time you have completed an evaluation the situation might already have changed. But at the same time, if you want to explore scientifically and in depth what happened in a programme, or you want solid evidence to support scaling up a new approach, then an evaluation will give you something much stronger than a KM process.

But it’s also interesting to compare the results of the two approaches for the same project or type of work: do they back each other up, or do they contradict? Do they give you contradictory or complementary insights into what is working in a programme and what isn’t?

It is also interesting to compare their impact on use and policy. Evaluations are often called for as a way of assessing pilot initiatives or as a funding condition – but due to their external, arm’s-length nature, while they might be good for supporting a particular advocacy position or funding discussion, they are often less useful as a tool for internal learning, in terms of feeding back into the programme or into practice in general. Knowledge management tools, if done well, are likely to have a greater impact on learning because they are usually run by the people managing the programme themselves, or by professional peers doing similar work. The Management Response is a step forward in the sphere of evaluation in that it ensures people formally respond to an evaluation – but the risk is that people will go through the motions, especially if they don’t really agree with the observations.

But how might these approaches be usefully combined? One way is to make use of knowledge generated through KM processes in the evaluation itself – in particular, to use knowledge networks to help evaluators formulate the right evaluative questions, to bring in relevant experience and comparators during the evaluation, to validate the findings with practitioners, to engage the “community” around the implications of the findings for their work, and to support the programme managers in implementing the follow-up. To be useful, therefore, action on the follow-up to the findings should start before the report is finalized.

Another aspect is that if you keep getting similar conclusions about a systemic issue from informal KM processes, you might want to follow this up with an evaluation to analyze the issue more rigorously and to help identify and justify whether, for example, a policy change is needed. Or you might try to develop a system to collect feedback and reflections (both internal and external) throughout a project and then aggregate these as a primary information source for your evaluation – which would also mean that you could produce evaluation conclusions at any stage of a project once enough data has been collected, rather than being forced to rely on a single episodic assessment.

Another is methodological exchange. Evaluation can (and does) adopt techniques from knowledge management – examples include the use of storytelling, including audiovisual techniques, to collect records of the programming process or of the impact on, and views of, programme beneficiaries. This can be informative in itself, and if scaled up it can be made more representative – and the materials collected can also be very useful in supporting the dissemination of the formal findings, since whatever policy makers say about what they want, in practice they react more to real-life stories than to dry analysis alone.

My last point is that evaluators need knowledge management for evaluation work itself, i.e. to reflect on and learn from their own experience (without needing to formally evaluate themselves) and to network with, learn from and provide peer support to other evaluators, such as through the community of practice being developed by the UNEG KM group. And knowledge managers need you: the results of knowledge management are notoriously difficult to quantify and to evaluate, posing a number of methodological challenges – and too often we shy away from trying – yet we need to do this to be able to improve and advocate for our work. Evaluators, help us out!


Written by Ian Thorpe

April 15, 2013 at 9:30 am

7 reasons to try something new



This one’s for my more seasoned readers.

In a previous blog post I wrote about the challenges of the bureaucratic mindset, and how we also need to look at our own actions to see if we are just going through the motions and using bureaucracy as a screen.

One tactic I’d suggest for overcoming your own bureaucratic inertia is to do something new or different in the workplace. It doesn’t have to be something big, and it doesn’t have to be permanent – just something small and experimental, like trying a new technology, giving an unconventional presentation, taking a novel approach to organizing a work meeting, learning a new skill, or taking on a side project.

There are a lot of good reasons why a seasoned aid worker might not want to step out and try something new. Doing something differently is harder than business as usual: it requires more effort to learn a new way of doing things, you won’t be as good at it, it might not be as effective (at least the first time you try), people might be skeptical, and you risk looking foolish if you don’t do it well or if it doesn’t work.

So given all these risks, why might it be a good idea to try to do something different?

  • Trying something new forces you to be more mindful of what you are doing and why. Often when we do things the way we are used to, we go into autopilot and forget why we are doing them and how they contribute to what we want to achieve. For example, we often end up sitting through traditional meetings in order to take important decisions without any real, frank discussion, whereas using a different meeting format can really shake things up.
  • A new approach or tool can also give you a new perspective on an issue because it forces you to tackle it from a different angle. For example, trying to create an infographic to explain your data instead of a written analysis will make you think differently about what stands out and what is important. Using a different approach can also provide insights and make connections with other fields that help solve a problem.
  • You will develop a new skill or tool, or at least get an insight into what people are talking about when they refer to an approach and its potential. An example for me was Twitter. Despite working in knowledge management, I was initially very skeptical, thinking it a superficial waste of time. But I was at a conference (Web4Dev2009) where I saw lots of other people using it, so I signed up to see what all the fuss was about, and I’ve been hooked ever since. Much later I also signed up for Pinterest and have found absolutely no personal use for it – but I now know what it is and have an idea what the fuss is about.
  • People will notice. If you do something differently it will attract attention. Some might be skeptical – but others will be attracted just by the fact that it is different. A very practical example: I’ve found that whenever I use Prezi for internal presentations, people are always impressed – not because the content is necessarily any better than a regular presentation, but because it is a refreshing way to present it. Trying something new might also inspire others to give it a go and create some creative competition. I mean, if YOU can do it, surely they can do it too.
  • You might find a better solution to your problem. It’s not a given that your new way will work, or be a good approach for you – but if you don’t try, you will never find out. Our work constantly throws up new challenges that our existing tools don’t fully address, and it often seems like the rest of the world is moving much faster than we are – so we have to keep trying if we want to keep up, even if it is hard (which reminds me of one of my favourite quotes: “Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!” – the Red Queen from Lewis Carroll’s “Through the Looking Glass”).
  • Doing something new can give you a buzz when it goes well. The buzz comes from the mental stimulation of actually thinking about what you are doing, the apprehension about whether it will work and how others will react, and the joy when it succeeds or attracts the attention of others. The act of learning and experimenting is fun – and can be quite addictive. It’s like a game where you play to find out what works and how to get a better score.
  • It’s always nice to have a new skill or project on your CV, especially one that’s innovative!

So go out and try something new. It doesn’t need to be revolutionary, and it doesn’t need to be big and risky – just try something small; it will make work more interesting and fun, and it could take you far.

Written by Ian Thorpe

April 5, 2013 at 10:30 am

