KM on a dollar a day

Musing on knowledge management, aid and development with limited resources

How am I doing?

Summary: If we are trying to measure the results of knowledge management work, or any type of development work for that matter, we could do worse than ask our clients what they think of what we are doing.

Some years ago, when I was interviewed for my first “real” KM job, one of the questions I was asked was “how will you measure the results of what you are doing?”. At that stage we didn’t even know what we would be doing, so I gave an instinctive answer – but one I’d at least partly stand behind now. I told the interviewer that the best way to know whether the knowledge products and services we provided were any good would be to ask our clients what they think of them – on a regular basis.

We often struggle to find ways to measure the results of our work. We would like to measure impact, but this usually requires complex, potentially costly evaluation and the identification of a clear theory of change. If we aren’t able to do this we often fall back on measures of output such as budget spent, work plan tasks implemented, supplies delivered, workshops carried out, website downloads and the like, which tell us about our efficiency in getting things done but not about the effectiveness of what we are doing.

But if we can’t easily measure impact, how about going halfway? While beneficiary/partner feedback isn’t the same thing as “impact”, it can be a very valuable proxy for understanding what you are doing well and where you need to improve or put additional focus. You can ask about their perceptions or ratings of what you do, as well as asking directly what they need and what they would like you to do differently.

The biggest criticism of asking for feedback is that what you get back reflects perceptions of you and what you are doing rather than what you are actually doing, that the people you are asking might not understand your work well enough to comment on it, or that they might not value the “right” things.

While to some extent this can be true, knowing what people think about your organization, your image, what you do and what you should be doing can still be very illuminating. If people don’t know who you are, misunderstand what you do, or think you are doing a lousy job when you believe you are not, then you might have a communication problem. And what good is doing great work if no-one knows about it? Not just for your own ego, but also so you can build goodwill for your work among your “client” populations, making your work easier, and so you have something to show donors about how responsive your work is to the needs of those it is supposed to help.

But lack of recognition or negative feedback isn’t just about how well you communicate. It might well be that you, and what you are doing, are not seen as relevant or of high quality by the people you are supposed to serve. If they don’t know about you and your work, it might be because you are not reaching them or having any meaningful impact on their lives (whatever your monitoring statistics tell you). If they don’t like what you are doing, it might be that it doesn’t meet their needs, or that the way you are doing it isn’t respectful of them.

Asking for feedback reminds us that ultimately we are there to serve our beneficiaries (or “clients”), and to a large extent it’s they who determine whether or not we are doing a good job. It has the added benefit of helping to build trust, by showing that we value the opinions of those we are helping rather than simply deciding what is best for them, and it can surface important information about their aspirations, priorities and the realities they face which we can easily overlook when we design and execute programmes.

There are a variety of means of collecting feedback, including formal surveys, phone polls, in-person interviews, focus group discussions, suggestion boxes and so on. The right tool will depend on your audience/clients, what you want to know, and the resources you have to do the work. Survey questionnaires and suggestion boxes are a relatively simple and inexpensive way of collecting data – but if they highlight an issue you might need face-to-face interviews or focus group discussions to really probe and understand it in depth.

You can also develop standardized tools for collecting feedback which can be used to track performance over time, and which could be used to compare different services or programmes with each other (or similar programmes across different locations).
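
As a rough illustration (not a recommendation of any particular tool), here is a minimal Python sketch of the kind of tallying a standardized feedback instrument might feed into. The file name and column names (programme, quarter, rating) are made up for the example; it simply computes an average rating per programme per quarter so results can be tracked over time or compared across programmes.

```python
# Minimal sketch: aggregate standardized feedback ratings so different
# programmes (or the same programme over time) can be compared.
# Assumes a hypothetical CSV of survey responses with columns:
#   programme, quarter, rating   (rating on a 1-5 scale)
import csv
from collections import defaultdict

def average_ratings(path):
    """Return {(programme, quarter): mean rating} from a responses CSV."""
    totals = defaultdict(lambda: [0.0, 0])  # (programme, quarter) -> [sum, count]
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            key = (row["programme"], row["quarter"])
            totals[key][0] += float(row["rating"])
            totals[key][1] += 1
    return {key: s / n for key, (s, n) in totals.items() if n}

if __name__ == "__main__":
    # "feedback.csv" is a placeholder path for this sketch.
    for (programme, quarter), score in sorted(average_ratings("feedback.csv").items()):
        print(f"{programme} {quarter}: average rating {score:.2f}")
```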

But one word of caution: if you ask for feedback, you also create expectations – in particular that you will share the feedback you receive, or at least a summary of it, even if it isn’t positive, and that you will take action to respond to any negative feedback. If you don’t do this, then next time you ask you won’t get any feedback, or worse, you will have damaged your reputation and increased the cynicism of those you surveyed about your sincerity in listening to them and “really” helping them.

Aid agencies are not particularly good at systematically seeking feedback from their beneficiaries, or from the partners who act as intermediaries in their work, but there are a few encouraging signs. For example, as part of its ongoing reform process the UN recently surveyed Programme Governments and partner NGOs about their views of the UN development system and some of its coordination mechanisms and initiatives, and published the results (see here and here). I hope the next round of reform will build on some of this feedback.

Digital technologies also make it easier and cheaper than ever to collect and analyze this data, through tools such as help lines, SMS polling and the like. These can reach large populations that would have been costly and logistically difficult to survey using traditional methods, and the results can be tabulated more quickly.
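
To make the SMS example concrete, here is a purely illustrative sketch – it doesn’t use any particular SMS platform’s API (real deployments would sit behind a gateway or a tool such as RapidPro) – that tallies simple keyword replies from an exported list of poll responses.

```python
# Illustrative sketch only: tally simple keyword replies (e.g. YES/NO/UNSURE)
# from an exported list of SMS poll responses. The messages and valid answers
# here are invented for the example.
from collections import Counter

VALID_ANSWERS = {"YES", "NO", "UNSURE"}

def tally_sms_poll(messages):
    """Count recognised answers; anything else is reported as OTHER."""
    counts = Counter()
    for text in messages:
        answer = text.strip().upper()
        counts[answer if answer in VALID_ANSWERS else "OTHER"] += 1
    return counts

if __name__ == "__main__":
    sample = ["yes", "No", "YES ", "maybe", "unsure"]
    for answer, n in tally_sms_poll(sample).most_common():
        print(f"{answer}: {n}")
```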

So let’s not forget who we work for and regularly ask them what they want, and how we are doing both as an input to our planning and as a measure of our performance.

Written by Ian Thorpe

July 12, 2012 at 11:45 am

4 Responses

  1. Important and valid points, Ian. As is the fad these days, getting the beneficiary perspective is ideal; but it’s not perfect either, as you correctly point out. It’s also one of the more difficult and resource-heavy approaches to getting feedback and therefore often gets buried at the bottom of the toolbox.

    I think mining the data on how beneficiaries behave in response to one’s products and services is a great area for further exploration. For policy knowledge and technical advice, it could be observed in budgeting, planning, programming and policy decisions. These can be reported as success stories. Yes, they’re subjective and qualitative, but they can be quite powerful (sometimes more powerful than hard data). For development research and publications, your point on digital technologies really resonates with me. I think citation and link analysis results also give good feedback as proxies for the quality, reach and uptake of the policy/practice knowledge aid/devt organizations share. It’s no longer “publish or perish”, but rather “publish, get cited, or perish”. The same applies to links (eg, PageRank): the more sites that link to you, and the higher their quality, the better your product must be. We can also watch how products are trending or going viral on social media, but Google is now able to use tweets (and some say even Facebook likes) in its algorithm, so perhaps that will eventually be covered by link analysis. As enterprise KM platforms become more social, getting feedback in this way on internal-only products will also become possible.

    Aid/devt organizations that understand these developments are changing how they plan, design and disseminate their research and publications (eg, fewer ‘dead-tree’ copies and more electronic publication, including modularizing or spinning off multi-media products; more collaborative creation and peer-review processes that tap external experts and create dissemination and advocacy ‘partners’ downstream; tapping 3rd-party channels like document sharing/social publishing sites, online journal repositories and bibliographic databases, and of course social media, etc.).

    It’s a far cry from when I started in KM4D, when a product was printed, launched, sent to a mailing list of important people (remember mail merge? oops – that dates me!), and getting feedback was a laborious and therefore often neglected exercise.

  2. […] as part of monitoring and evaluation. I’ve written previously about the need to “listen to beneficiaries”  mostly from the point of view of it being the right way to do participatory development […]

  3. First I really like that you pointed out that “the way you are doing it isn’t [might not be] respectful of them.” So easy to overlook something like that when discussing big picture problems and looking for/at metrics.

    Another challenge/important thing to keep in mind with feedback (I know this is going beyond your post…) is the limitations of each kind of feedback mechanism for your situation, considering the culture and people you’re working with (this might suggest using as many methods of feedback as possible).

    Face-to-face feedback, for instance, particularly in a situation perceived as more formal, can have people either avoiding criticism, not saying much at all, or being overly critical, depending on cultural inclinations, personal bias, etc. This suggests that 1) it’s really important to have a good, personal relationship with beneficiaries if possible (or, when getting feedback, to go through the people who have the best relationships with them), 2) it’s probably really important to make feedback quite informal when possible, and 3) new or other methods of feedback that you mention, like anonymous surveys, SMS and social media, can both add another layer and get around some of the issues with more direct feedback.

    frontpack

    April 4, 2013 at 3:17 am

  4. […] those they are seeking to help ( a common topic on this blog, and one for which there are various approaches we should be using […]

