KM on a dollar a day

Musing on knowledge management, aid and development with limited resources

Communicating results


The UN recently finalized its “Quadrennial Comprehensive Policy Review of operational activities for development”, which sets out the priorities for operational reform of the UN’s development work for the next four years. (Here’s a link, but as a document that is both politically negotiated AND technical, it is not an easy read.)

One of the major developments called for in this resolution is the strengthening of results and results-based management. And who could object to that? The donors who provide money and the governments who receive UN assistance are all concerned to know, and are under pressure from their constituencies to demonstrate, that the UN is providing something useful in return for the resources given to it.

But what is particularly interesting to me about the current resolution is that it not only talks about strengthening the systems for results-based management, but also calls for strengthening HOW the UN communicates about what it does and the results it achieves. The underlying issue is that while the UN needs to strengthen its results focus, it is already achieving many things; it is just not very good at sharing and explaining them.

I don’t doubt that this is true. Quite often the UN is not as good as it could be at spreading the word about the good work it has done, or at explaining in a simple, compelling way the complex role it plays and how that contributes to development. As a result, public support for the UN (and possibly donor support too) is less than it might be.

But what might we actually do to communicate better on results?

Having worked on several reports and been told to make them “more results focused”, I see an obvious danger: improving communication without simultaneously looking at how results are defined and measured in the first place.

It’s hard to communicate results if:

  • you don’t know what you were trying to achieve in the first place
  • you didn’t define indicators to measure what you were doing
  • you don’t have good data sources, or a good data collection system to gather what you need to measure what you did, on a regular basis

So it’s important to plan how you will measure your programmes at the outset, and to put in place the systems to collect and analyze the data. But it’s equally important to think at this stage about how you will use the data: on the one hand for monitoring, learning and course correction; on the other for external accountability, reporting and communication. Data collection systems can be costly and time-consuming, so it’s good to focus on collecting data you can and will actually use. On the plus side, there are many innovative ways to collect data that we in the UN have yet to fully explore, and some of these lend themselves to better communication too. (In fact Bill Gates believes better measurement will be THE most important initiative to improve aid.)

But it’s also important to remember that whatever interventions you make will have other effects, both positive and negative, beyond those you expected (and therefore planned to measure). So you can’t rely only on internal project monitoring; you also need external validation, whether through data collected by others, through polls, or through collecting stories of impact from the perspective of the beneficiaries.

A few thoughts about the communication of results itself:

1. Pitfall: When we formulate projects and pitch them to donors, we tend to oversell or overpromise what they will be able to do, or how much we know about how and whether they will work. This inevitably leads to disappointment later, when the results we communicate seem less than the original promise. A related problem is using overly negative depictions of the current situation to justify aid, without saying exactly what we expect to achieve with it. This gets a good response the first time around, as people are motivated by need; but if we keep using this approach, it raises the question of whether what we are doing is having any impact, since the situation seems as hopeless now as it did before.

2. Pitfall: Tangible short-term results (e.g. the number of children immunized or fed) are both easier to measure and easier to communicate than long-term systemic results (such as empowering rural women). They are generally an easier sell, especially to individual donors. Unfortunately this often influences the type of project that gets proposed and funded. But at heart we know that it is “better to teach a man to fish than to give him a fish”. This means we need to find better ways to explain, justify and measure the results of longer-term systemic work supported with aid, rather than being tempted to choose something because it is more measurable in the short term.

3. Pitfall: Once we get our results we naturally want to give them as positive a spin as possible, to make ourselves look good. But this has its drawbacks. If we over-spin, we actually make the communication less credible; I tend to find something that is 80% positive and 20% negative much more credible than something that is unrelentingly upbeat. If we don’t include some of the challenges or even failures (or “less successful aspects of the project”), we lose the opportunity to learn from them, to signal to donors and the public the very real challenges on the ground, and to show later that we are using them to improve.

4. Pitfall: Process results are important for programme monitoring and course correction, but they are deadly boring and a big turn-off for most external audiences. Avoid them unless they are all you have.

5. Tip: Donors and the “general public” are probably as impressed by real-life stories as they are by reports and evaluations, even when they ask for hard data. From a communication standpoint it’s therefore important to illustrate results data with case studies and individual stories. This is especially important when we recognize that we are not able to do impact evaluations on everything we do, or to fully disentangle the contributions of different factors to an outcome through scientific analysis. Stories and case studies also help explain how something works in practice, and so can be more illustrative and convincing than data alone. A common criticism of stories is that they are not sufficiently representative to be useful, but with new techniques for large-scale qualitative monitoring they can also be good M&E (see this example from Global Giving).

6. Tip: A picture is worth a thousand words. Charts, infographics and other visual aids bring complex data to life. Tufte and others have long written about how to create compelling graphics that explain rather than mislead. (Using infographics to mislead is its own art form!)

7. A challenge: Maybe you are not always the best person to communicate your results, or even to decide what they are. Making our results data, stories, reports, tools, etc. fully open to outside analysis and scrutiny may be one of the best ways to show our willingness to share what we know, and to open ourselves up to independent scrutiny. That way people can really see the results of what we do, and can analyze, present and communicate them themselves, alongside our own communications. Similarly, other people’s stories and accounts of the results we achieve may be both more impartial and more convincing than our own.

Written by Ian Thorpe

January 29, 2013 at 9:00 am

Posted in Uncategorized

4 Responses


  1. […] Communicating results […]

  2. All excellent points. I particularly like Pitfall 1 though I’m not sure which will be harder, convincing us mortals to rein in our natural exuberance when it comes to the possible impact of our work, or convincing donors to be more realistic about what they ask us to deliver. But yes, we all need a good dose of realism when it comes to thinking about impact (not sure if you saw but I wrote about this last week… http://blogs.lse.ac.uk/impactofsocialsciences/2013/01/24/8864/) .

    I also like pitfall 4 though I wonder whether, in political environments, process indicators are more important and interesting than results indicators – the latter are likely to get blown way off course by the politics. Understanding how politics drives the need for course corrections could also help bring more realism to the monitoring process.

    Louise Shaxson

    January 29, 2013 at 11:55 am

  3. […] goes unread since it isn’t in a language or format decision makers can understand and use.  Communication of results is one part of this. Mobilization is another. Sometimes more anecdotal evidence is better for persuading […]

  4. […] 6. The last, and possibly the most important point is that we need to get real with donors and the public. We need to have a hard, truthful conversation where we explain what we can and can’t say about coordination and development, particularly what we can’t say. Often we try to please without facing the truth. In fact most of our donors and partners are struggling with the same problems in showing the impact of their own work to a skeptical public. Maybe it would be better to work with them to figure out how to make the most of what we can do with the information we have, how to educate the public on what we can and can’t know, and to share experience on how to communicate this more effectively. Ultimately we need to reassure donors and the public that their money is in good hands and that the reforms we are undertaking are making a difference, without being misleading about how much we know about the magnitude of that difference or the exact formula that delivers development. A lot of this is not just in how we measure results, but in how we communicate them – something I’ve written about in more detail before. […]

