KM on a dollar a day

Musing on knowledge management, aid and development with limited resources

Archive for January 2013

Communicating results

with 4 comments

The UN recently finalized its "Quadrennial Comprehensive Policy Review of operational activities for development", which sets out the priorities for reform of the UN's operational development work over the next four years. (Here's a link, but as both a politically negotiated AND technical document it is not an easy read).

One of the major developments called for in this resolution is the strengthening of results and results-based management. And who could object to that? Donors who provide money and the governments that receive UN assistance all want to know, and are under pressure from their constituencies to demonstrate, that the UN is providing something useful for the resources given to it.

But what for me is particularly interesting about the current resolution is that it not only talks about strengthening the systems for results-based management – it also calls for strengthening HOW the UN communicates about what it does and the results it achieves. The underlying issue here is that although the UN needs to strengthen its results focus, it is already achieving many things; it is just not very good at sharing and explaining them.

I don't doubt that this is true – quite often the UN is not as good as it could be at spreading the word about the good work it has done, or at explaining in a simple, compelling way the complex role it plays and how that contributes to development. As a result, public support for the UN (and possibly donor support) is less than it might be.

But what might we actually do to communicate better on results?

Having worked on several reports and been told to make them "more results focused", I see an obvious danger: improving communication without simultaneously looking at how results are defined and measured in the first place.

It’s hard to communicate results if:

  • you don’t know what you were trying to achieve in the first place
  • you didn’t define indicators to measure what you were doing
  • you don't have good data sources, or a good system to regularly collect the data you need to measure what you did

So it's important to plan to measure your programmes at the outset and put in place the systems to collect and analyze the data. But it's equally important to think at this stage about how you will use the data – on the one hand for monitoring, learning and course correction; and on the other for external accountability, reporting and communication. Data collection systems can be costly and time-consuming – so it's good to focus on collecting data you can and will actually use. On the plus side, there are many innovative ways to collect data that we in the UN have yet to fully explore, and some of these lend themselves to better communication too (in fact Bill Gates believes better measurement will be THE most important initiative to improve aid).
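To make this concrete, here's a minimal sketch in Python (all names and numbers are invented for illustration) of what defining an indicator up front might look like: what is measured, the baseline, the target, where the data comes from, how often it is collected and what it will be used for.

    # Hypothetical sketch of an indicator definition - not an actual UN system.
    from dataclasses import dataclass

    @dataclass
    class Indicator:
        name: str
        baseline: float
        target: float
        unit: str
        data_source: str
        collection_frequency: str
        intended_use: str  # e.g. monitoring, course correction, external reporting

    indicators = [
        Indicator(
            name="Children under 1 fully immunized",
            baseline=62.0,
            target=85.0,
            unit="% of cohort",
            data_source="District health information system",
            collection_frequency="quarterly",
            intended_use="monitoring and donor reporting",
        ),
    ]

    for ind in indicators:
        print(f"{ind.name}: {ind.baseline}{ind.unit} -> target {ind.target}{ind.unit} "
              f"({ind.collection_frequency}, via {ind.data_source})")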

But it's also important to remember that whatever interventions you make will have effects other than those you expected (and therefore planned to measure), both positive and negative. So you can't rely only on internal project monitoring – you also need external validation, whether through data collected by others, through polls, or through collecting stories of impact from the perspective of the beneficiary.

A few thoughts about the communication aspect of results itself:

1. Pitfall: When we formulate projects and pitch them to donors we tend to oversell or overpromise what they will be able to do, or how much we know about how or whether they will work. This inevitably leads to disappointment later, when the results we communicate seem less than the original promise. A related problem is using overly negative depictions of the current situation to justify aid, without saying exactly what we expect to achieve with it. This gets a good response the first time around, as people are motivated by need – but if we keep using this approach it raises the question of whether what we are doing is having any impact, if the situation still seems as hopeless now as it did before.

2. Pitfall: Tangible short-term results (e.g. the number of children immunized or fed) are both easier to measure and easier to communicate than long-term systemic results (such as empowering rural women). They are generally an easier sell, especially to individual donors. Unfortunately this often influences the type of project that gets proposed and funded. But at heart we know that it is "better to teach a man to fish than to give him a fish". This means we need to find better ways to explain, justify and measure the results of longer-term systemic work supported with aid, rather than being tempted to choose something because it is more measurable in the short term.

3. Pitfall: Once we get our results we naturally want to give them as positive a spin as possible to make ourselves look good. But this has its drawbacks. If we over-spin we actually make the communication less credible – I tend to find something that is 80% positive and 20% negative much more credible than something which is unrelentingly upbeat. If we don't include some of the challenges or even failures (or "less successful aspects of the project") then we lose the opportunity to learn from them, as well as to signal to donors and the public the very real challenges on the ground – and to show later that we are using them to improve.

4. Pitfall: Process results are important for programme monitoring and course correction – but are deadly boring and a big turnoff for most external audiences. Avoid them unless that’s all you have.

5. Tip: Donors, and the "general public", are probably as impressed with real-life stories as they are with reports and evaluations – even when they ask for hard data. From a communication standpoint it's therefore important to illustrate results data with case studies and individual stories. This is especially important when we recognize that we are not able to do impact evaluations on everything we do – or to fully disentangle the various contributions of different factors to an outcome through scientific analysis. Stories and case studies also help explain how something works in practice, and so can be more illustrative and convincing than data alone. A common criticism of stories is that they are not sufficiently representative to be useful – but with new techniques for large-scale qualitative monitoring they can also be good M&E (see this example from Global Giving).

6. Tip: a picture tells a thousand words. Charts, infographics and other visual aids bring complex data to life. Tufte and others have long written about how to create good graphics that are compelling, but which also explain and don't mislead (using infographics to mislead is its own art form!).
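Tufte's advice is easier to preach than to practice, but as a rough illustration here is a minimal Python/matplotlib sketch (with entirely invented figures) of the kind of results chart that aims to inform rather than mislead: a zero-based axis, labelled units and no decorative distortion.

    # Minimal sketch of an "honest" results chart - the data is hypothetical.
    import matplotlib.pyplot as plt

    years = ["2010", "2011", "2012"]
    children_immunized = [12_400, 15_800, 17_100]  # made-up programme figures

    fig, ax = plt.subplots(figsize=(5, 3))
    ax.bar(years, children_immunized)
    ax.set_ylim(0, 20_000)  # start the axis at zero so growth isn't exaggerated
    ax.set_ylabel("Children immunized per year")
    ax.set_title("Immunization results, 2010-2012 (hypothetical)")
    fig.tight_layout()
    fig.savefig("immunization_results.png", dpi=150)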

7. A challenge: Maybe you are not always the best person to communicate your results, or even to decide what they are. Making as much as possible of our results data, stories, reports, tools etc. fully open to outside analysis and scrutiny may be one of the best ways to show our willingness to share what we know, and to open ourselves up to independent scrutiny. That way people can really see the results of what we do, and can analyze, present and communicate them themselves, alongside our own communications. Similarly, other people's stories and accounts of the results we achieve may be both more impartial and more convincing than our own.

Written by Ian Thorpe

January 29, 2013 at 9:00 am

Posted in Uncategorized

The iceberg of community management

with one comment

 


It's workplanning season again! Among the recent discussions I've been involved in is one on defining standard approaches and key performance indicators for communities of practice. I've also been looking at "standard" terms of reference for hiring facilitators for online discussions.

It got me thinking about how managing a community of practice involves a lot of invisible things, as well as the more obvious, visible things that usually turn up in job descriptions and performance measures. Like an iceberg, there is more beneath the surface than above it. Etienne Wenger, one of the "fathers" of the idea of communities of practice, likes to describe the role of a community facilitator as that of a "social artist": the skill of being a good community manager goes beyond the things that can be done mechanically to things that are a real skill, hard to describe and replicate, and which come from experience and from the personality of the person themselves.

So what does a community manager do exactly? A few examples of the visible and not so visible.

The visible

  • manages the technology platform or space where the interaction takes place
  • manages community membership – approves requests to join, removes unsubscribers
  • approves (or post-moderates) content
  • produces discussion summaries
  • produces newsletters on community activity and news
  • monitors and reports on activity in the community – i.e. the key performance indicators
  • creates and manages formal e-discussions

The partly visible

  • welcomes new members
  • publicizes the community through formal communications, but also through informal channels – personal contacts and taking advantage of opportunities as they arise
  • provides technical support and encouragement in using the platform
  • cross posts and shares material with other communities and pulls from them to enrich the discussion
  • reminds members and refers back to previous discussions and content

The mostly invisible

  • creates a welcoming environment by setting the tone of the community, and encouraging participation, helps create the community “identity”
  • gets to know the community and its members including who is in the community, what they know, what they want and what the group dynamics are and uses this to help manage the discussions
  • encourages/cajoles/hassles those with knowledge to get them to contribute what they know and if necessary hand-holds them through the process
  • fights “censorship” and control whether from senior management, community members or “experts” while keeping the content appropriate and of high quality
  • advocates for the community and its members with senior management including for resources, attention and personal support
  • calms down disagreements and turns them into productive difference, or “stirs up” useful debate to surface differences and avoid group-think
  • ensures the conversation stays on track, is productive and is relevant and useful to the community  members and encourages it to be put into practice and for that experience to be fed back into the community
  • manages the community “back-channel” by communicating with members using e-mail, phone, face to face interaction etc.
  • scans the horizon and spots and takes advantage of opportunities to advance the work of the community and its members, and keeps abreast of relevant developments within the topic of the community
  • advocates with technology providers for technology improvements and bug fixes
  • tidies up after people (manages the taxonomy and tags content, removes duplicate postings, fixes broken links)

So when thinking about hiring a community manager, and assessing how well they are doing their job, it's good to look at the full range of tasks they actually perform, not just the obvious ones. It's also important to note that some of these (such as writing a discussion summary) can be easily described and tested, but others (such as creating a welcoming environment) are less easy to describe and may have as much to do with the personality and competencies of the person as with tangible, visible skills. Just as communities of practice serve as a means to encourage the creation and sharing of tacit knowledge (rather than explicit codified knowledge or documents), the skill of good facilitation has a strong tacit component.

Written by Ian Thorpe

January 16, 2013 at 3:00 pm

Posted in Uncategorized

Action–(Over)Reaction

with one comment

“For every action, there is an unequal and opposite overreaction.”* could have been Newton’s law of human dynamics.

I’m sure you can’t have missed the Sandy Hook school shooting in the news – and how it is provoking a discussion on how to keep kids safe, gun control, mental health etc.

If you are a parent with school-age children, you have probably also encountered a hurried response of new security measures in your schools, designed to allay the fears of parents – and, well, to be seen to "do something".

In my school district these were immediately put to the test when, apparently, someone who works in security and carries a firearm for his job foolishly forgot to leave his weapon in the car when visiting the school premises to pick up something he had left there earlier. The school went into full lockdown, parents were called, activities were cancelled, news interviews were given, and there is an expectation that security will be tightened further.

But from what I can gather, there wasn't a real threat from this intruder. And even the new measures that have been put in place are substantially less than the precautions that were already in place at Sandy Hook – which were nevertheless unable to prevent the tragedy.

Those of us who remember 9-11 (or any other major tragedy) have probably seen something similar happen. Just think how much more inconvenient and expensive air travel is now than it was before 2001. Outside the security area similar things have happened with financial fraud (think of the compliance-heavy Sarbanes-Oxley financial reporting requirements in the US), or with the financial crisis. Or, more mundanely, this can happen inside an organization if there is a minor fraud or a bad audit, or even a visibly unsuccessful project.

But while the instinct to respond quickly to ensure "never again" is understandable, some of our initial reactions are not the most effective or sustainable. I've written previously about the dangers of over-regulation and of how it can sometimes give the illusion of control over a situation when in fact, by making things over-complicated, we create unforeseen side-effects and may end up less in control and less well-informed of the situation, rather than more.

A few of the risks of reacting too quickly and heavily in response to a crisis include:

  • The measures we put in place may look tough and be popular – but they might not in fact significantly reduce the risk we are trying to manage or eliminate. Often in a crisis there is a temptation to pick measures based on our perceptions of what they do, rather than what they actually do (an example might be "racial profiling" of "middle-eastern looking" passengers at airport security).
  • They might address the immediate manifestation of a risk, but not the underlying issues that cause it in the first place (an obvious example in school shootings is that while enhanced security *might* reduce the risk of a shooting they don’t address the reasons why these shootings might occur in the first place).
  • The measures we put in place might have undesirable consequences – for example, a proposal to arm teachers might create a new opportunity for accidental shootings. Or heightened security might make children feel more anxious rather than more secure, or take away funding for teachers and classrooms. Or in finance – burdensome reporting rules might be too onerous for small businesses, making them less competitive, while creating an industry of creative rule-avoidance among larger ones that have the resources for it.
  • Lack of proportionality – no risk can be totally eliminated, and the closer you try to reduce it to zero the more expensive it will be. Similarly, not all threats are equally likely or have equal consequences. This means there are limits to how much one should be willing to spend, or how much inconvenience one is prepared to put up with, depending on the likelihood of the threat, its severity and the cost of reducing the risk of it happening (a rough sketch of this trade-off follows below).
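To illustrate that last point, here is a rough back-of-the-envelope sketch, with entirely invented numbers, of the kind of comparison one could make between the expected annual loss from a threat and the annual cost of a measure that reduces it.

    # Rough sketch of a proportionality check - every number below is invented.

    def expected_annual_loss(probability_per_year: float, impact_cost: float) -> float:
        """Expected loss = likelihood of the event in a year x cost if it happens."""
        return probability_per_year * impact_cost

    threat_probability = 0.001        # hypothetical: 0.1% chance per year
    impact_if_it_happens = 5_000_000  # hypothetical cost if the event occurs
    measure_annual_cost = 250_000     # hypothetical annual cost of the measure
    risk_reduction = 0.3              # hypothetical: measure cuts the risk by 30%

    loss_before = expected_annual_loss(threat_probability, impact_if_it_happens)
    loss_after = expected_annual_loss(threat_probability * (1 - risk_reduction),
                                      impact_if_it_happens)

    print(f"Expected annual loss without the measure: {loss_before:,.0f}")
    print(f"Expected annual loss with the measure:    {loss_after:,.0f}")
    print(f"Expected benefit {loss_before - loss_after:,.0f} vs cost {measure_annual_cost:,.0f}")
    # Here the measure buys 1,500 in reduced expected loss for 250,000 a year -
    # exactly the kind of disproportion the point above warns about.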

In a sense this type of reaction is an example of a "logical fallacy" (although not one covered by this fabulous infographic). I didn't know this before, but apparently in security circles this is known as the "affect heuristic", which Wikipedia defines as "...a mental shortcut that allows people to make decisions and solve problems quickly and efficiently, in which current emotion (fear, pleasure, surprise, etc.) influences decisions. In other words, it is a type of heuristic in which emotional response, or 'affect' in psychological terms, plays a lead role. It is a subconscious process that shortens the decision-making process and allows people to function without having to complete an extensive search for information...."

While this explanation stresses the value of this response as an immediate reaction to danger when full information is not available – a kind of "gut" survival instinct – what is troubling about it is its durability: when the initial danger has passed, we often stick with our initial response despite having the time to do a better analysis of the situation to understand the threat, the underlying issues that lead to it, its likelihood, its potential consequences and the methods and costs of reducing the risk (which includes looking at the underlying causes of the threat, not only its immediate manifestation).

This is probably a combination of how the human mind works and how our political and decision-making systems work – i.e. the need to appeal to popular emotional reaction to an event rather than to proportionally tackle its real causes. I don't have an immediate answer as to how we can reduce the risk of inappropriate overreaction to negative events – but maybe if we talked about this a bit more, including demonstrating some of the problems of this reasoning with data, it would be a good start.

Postscript: Last night, after posting this blog, I realized that this phenomenon also often applies to the international response, especially the aid response, in emergencies. Thinking back on both the response to the Asian tsunami and the Haiti earthquake, a few similar things are visible: i) an overreaction in terms of the resources and attention provided – at least in comparison with the under-attention to other, more chronic emergencies – which often led to piles of money that needed to be disbursed through poorly thought out programmes; ii) a quick choice of inappropriate responses from some, such as providing goods that couldn't easily be used, aka #swedow – stuff we don't want (examples are shoes, second-hand clothes, toys, unskilled volunteers ...); iii) a failure to address underlying causes instead of dealing only with the surface. Of course it's hard to deal with the underlying cause of earthquakes – but you can and should look at the reasons why a country isn't sufficiently resilient to deal with them effectively.

All the more reason for more research and public education on what actually works – whether in emergency response, or in security.

*This quote is not mine, but I couldn't find the original source.

Written by Ian Thorpe

January 8, 2013 at 4:25 pm

The joy of polls

with 8 comments

I've always been a fan of opinion polls. And after Nate Silver's triumphant predictions of the US election outcome you'd think everyone would be. But the aid/development sphere still has a way to go to catch up.

In a recent blog post I explained a little about some of the challenges in the post-2015 global consultation. A while ago I was a "discussant" at a Tech Salon on using technology for qualitative M&E, and I talked a little more about this project. In preparing my thoughts and listening to the discussion, one specific idea kept coming into my mind – the importance of opinions and perceptions as part of monitoring and evaluation. I've written previously about the need to "listen to beneficiaries", mostly from the point of view of it being the right way to do participatory development that also has a chance of being sustainable – but it also happens to be good, if not traditional, M&E.

In a traditional approach to project monitoring we often look very closely at the supply side of things – how much of our budget have we spent, how much did we deliver, how many people did we train, was it done on time and according to plan? Or, if we are able to measure impact, we might look at things like the number of kids completing school, or the death rates from preventable diseases. An advantage of these types of measures is that they are usually quantifiable and measurable, and there is a clear logic or theory of change around how they relate to your project activities.

But what about looking at demand? Do people want what we are giving them, or do they want something else? Are they satisfied with what we are giving them and how it is provided?

Asking people about their opinions, perceptions and feelings is also a useful and complementary way of monitoring a project, and it might tell you a different story than a supply/delivery-focused measurement based on "hard" data. Imagine a school project where the school has been built, the teachers hired, the curriculum designed and the cash transfers for the poorest distributed, but school attendance and achievement are not progressing at the same pace. You've spent your budget and delivered all your project outputs, but something still isn't right. If you want to find out what's happening, you will need to ask people why they don't send their kids to school – is the curriculum wrong, are the school hours inconvenient, do they value education, are there household labour needs for the children, or are there simply no better jobs to be had with an education? Maybe people wish that the aid project was on something else entirely.

Participatory research is of course one way to dig into what beneficiaries want. But it is also costly, time-consuming and usually only able to cover a small sample of people. It's good for digging deep to understand an issue – but sometimes you also want a broad and, if you are lucky, representative view of what people are thinking and feeling. This is where opinion polls and surveys come in. Surveys are of course widely used for political purposes – but they are also used a lot by companies seeking to understand the reach and appeal of their brand and their products (Which do you like better, "Coke" or "Pepsi"? Which three adjectives on this list best describe our product?).
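As a rough illustration of why sample size matters for getting that broad, representative view, here is a minimal Python sketch (with a made-up satisfaction figure) of the approximate 95% margin of error for a poll proportion. Real surveys of course also have to worry about sampling design, non-response and question wording.

    # Approximate 95% margin of error for a poll proportion - figures are invented.
    import math

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        """Approximate 95% margin of error for a proportion p from n respondents."""
        return z * math.sqrt(p * (1 - p) / n)

    satisfied = 0.62  # hypothetical: 62% of respondents satisfied with the programme
    for n in (100, 400, 1600):
        moe = margin_of_error(satisfied, n)
        print(f"n={n:>5}: 62% +/- {moe * 100:.1f} percentage points")
    # Quadrupling the sample roughly halves the margin of error.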

So why not use polls to ask how you are doing with your project? Or go further: how can you find out about the image and reputation of your organization? Do people think you are effective? Do they think you are easy or hard to work with? And your "products" – do people think your programmes are effective? Are they timely? Are they responsive to what people need and want?

Many large NGOs do use opinion research to measure the appeal of their organization and their work for fundraising purposes – and use this to carefully tailor their appeals and marketing, although they are not always too keen to publicize it. But the use of polls in developing countries is much less common – partly due to logistical challenges, but also possibly because the incentives to do so are weaker: there is a greater incentive to increase funding than to satisfy beneficiaries, who don't pay.

But some organizations are now starting to take this on. UNDP country offices run surveys of their national partners (which are curiously not to be found online). The UN recently carried out a partnership survey for the "Quadrennial Comprehensive Policy Review of operational activities for development" – essentially a review of how well programme country governments think the UN is supporting them with its development work. This gave some interesting feedback on how we are doing, which provoked some internal discussion – but also prompted (thankfully only) a few to question whether governments' opinions of how we are doing are as valid as our internal monitoring data.

But the incentives to use polling for monitoring are still limited, unless donors are prepared to ask for polls and even to finance them. Polls can be very informative in helping to understand whether a programme is reaching, and is appreciated by, beneficiaries – but ultimately this will only be given importance if the feedback has consequences. One of the best ways would be for donors to ask for it – or to finance this type of research to be carried out independently themselves, as an additional way of evaluating the programmes and organizations they fund.

And now back to the post-2015 agenda for a brief advertisement. While the process to develop the post-2015 development agenda is commendable for its use of online discussions, consultations with technical experts and technical analytic papers, there is also an important role for opinion polling here. Polling can reach a wide audience who don't have the means or the time to participate in face-to-face or online discussions, but who can express their views via an online and/or SMS-based poll on what their priorities for the post-2015 agenda might be.

But luckily someone has already thought of that! The UN, the World Wide Web Foundation and ODI have teamed up to launch the "My World 2015" public poll. Go there and vote for the changes that you would most like to see included in the post-2015 agenda!


Written by Ian Thorpe

January 2, 2013 at 3:56 pm

Posted in Uncategorized
