Archive for March 2012
The morning session was great, with a particularly lively and interesting discussion on different approaches to development. The highlight was a debate on the Millennium Villages Project, which was much more interesting and surprising than it sounds (let’s face it, we’re probably all a bit jaded with discussions about the MVPs, especially if you work in the UN, since many people erroneously believe that MVPs form a major part of the UN’s approach to poverty, when in reality they form only a small experimental part of our overall work, with relatively little UN funding or support).
Bill Easterly set the stage by introducing the debates as a comparison between smart, expensive decision-making systems, by which he was also setting up the afternoon’s discussion on randomized control trials (RCTs), and cheap, dumb solution-finding systems, by which he means experimentation and success or failure based on market feedback. This is a new framing of his idea of searchers versus planners in development – looking not only at how development projects are planned, but also at how they are evaluated.
In the MVP debate that followed, instead of having Sachs come into the lion’s den, Stewart Paperin of the Open Society Institute, a major funder of the MVPs, gave the approach a spirited defense against Michael Clemens and Bernadette Wanjala, who have both publicly criticized the MVPs for their lack of transparency and rigorous evaluation, and for overstating their results.
What was interesting about the debate was that Paperin was skillfully able to defend the MVPs on the grounds that they were, from his perspective at least, an investment in a practical, even entrepreneurial experiment that wasn’t certain to work, but was a good chance to try something different in order to learn more for the future.
In the end the most interesting aspect of the conference for me was the debate around the nature of actionable knowledge in development, and what we can trust as a basis for decisions on development funding and action. This is both a scientific and a practical question.
The “debate” has been set up in at least three different ways:
In his book “The White Man’s Burden” Easterly talks about planners versus searchers, i.e. those who think that a top-down set of proven approaches can work in development (à la Sachs) versus those who believe that all solutions are local and that people need to experiment and find their own solutions within their own context,
In his talk Clemens spoke instead about the goals movement, i.e. those who believe they already have sufficient evidence for their solutions and have a vision and passion to take them to scale, versus the evaluation movement, i.e. those who believe that we need to rigorously measure what we do to know if it works and how to improve it.
But in his introduction Easterly also spoke of a third debate: between rigorous, expensive scientific measurement versus low-cost experimentation and market feedback. His case being that evaluation is costly, and often the results are not decisive or generalizable, so it might be more effective to use feedback from beneficiaries as a way of assessing what works.
Three big take-aways from the discussion were:
1. We might know some things about what works in development, but there is a lot we don’t know. Even when we do know something, it’s not a guarantee that it will work without a hitch in another context.
2. Evaluation (and tools such as RCTs) can tell us a lot about what works but they are expensive to run, and their results are not always easily generalizable or actionable.
3. But if you don’t measure your project in some way, then how will you know whether it works, and how will you improve it?
What strikes me here is that in a way all of these different perspectives have value, but their proponents have a difficult time understanding each other and figuring out how to best combine their approaches.
Wouldn’t it be good if the goals people – those who have a clear vision and a passion to pursue it – would create momentum and raise resources around their projects, whether these are large-scale plans developed from extensive research and experience, or smaller-scale hunches and experiments – but as entrepreneurs rather than as top-down planners? And at the same time, if these projects would collect data from the outset to better track progress, and, where feasible, try multiple approaches or variations on an approach so as to compare them and learn from the differences?
Similarly if the results could be made public, then funders, beneficiaries and even academics could see them and independently assess them, and project managers could use them to modify their programmes and identify whether they should be scaled up or shut down.
Lastly, but perhaps most importantly – the missing element in evaluation of development projects is effective and ongoing beneficiary feedback. Entrepreneurs, unlike aid planners try lots of different things some which succeed massively while others fail dismally – the difference being that their success is measured by the feedback they get from consumers who buy their product. In the aid world we don’t yet have effective ways to get this feedback so we rely instead on evaluation – to rigorously, but only selectively assess the impact of our work, and communication – to sell our story of success but of continued need to funders who are far removed from the experience of those who the programmes are designed to assist. And evaluation and communication are often at odds.
The next big focus on measurement will hopefully be in the area of getting real-time feedback from beneficiaries which can be fed back into projects to improve them, and fed back to donors and the public transparently in order for them to better judge what and who to fund, and all this at a relatively low-cost and greater clarity than expensive evaluation.
Formal evaluation (and experimental project designs such as RCTs) can focus on those areas where getting the programme design specifics is important in terms of cost and impact, but where the results are also likely to yield insights which can be generalized beyond a specific programme.
One of the things you often take for granted, at least until you switch jobs or, particularly, organizations, is the importance that social capital plays in getting your work done.
Last year, after almost 15 years in UNICEF, I moved to another UN agency with a different set of rules, but also with different norms, customs and networks – with surprisingly little overlap between the two jobs in terms of the people I interact with.
On the positive side, I was hired externally as a “knowledge management expert”, which gave people a specific idea of what I might know, a “potential reputation” if you like. This was very different from the one I had in UNICEF, where I still carried with me the idea of being a young upstart, rather than any kind of expert.
But in UNICEF I had been around long enough to know how things work, to know who to call to get the information I needed, or even who to call to get things done that can’t get done easily if you do them the “official way”. I had a network of people who trusted me and who I trusted.
In my new job, although everyone was very welcoming and helpful, I suddenly found that I didn’t know how to do things, who to call, or who the experts were that I could turn to for advice (and what people’s hidden agendas might be). Similarly, when I had an idea, it wasn’t obvious who to share it with, or whether people would listen, take me seriously or see me as a potential source of knowledge. What’s more, I need to work with people across the UN system, not just in one agency. UN co-ordination and knowledge sharing are two areas where social capital is particularly important, as in both you need to pull together information and expertise from diverse and distributed groups, and while phone directories and resource databases can help, they are not enough.
So what do I mean by social capital? I’m sure there are many sociological and economic definitions out there, but for me it’s that intangible (but real) combination of your reputation and your networks that you can count on to help you out when you need something which could be as diverse as getting advice, or getting someone to write you a recommendation, or even to trust you to take a chance on your idea, or help you out when you are in a crisis.
Here are a few thoughts on social capital in the workplace based on my reflections on my recent move:
- Social capital is an important asset in getting our work done, yet one that is under acknowledged and undervalued by individuals and organizations alike. And while we all unconsciously do things to build social capital, we could probably be a lot better at it if we thought about it a bit more systematically.
- On an individual level it’s always important to maintain your social capital, but you need to pay particular attention to it when you are in a new position. Everyone does this differently, but there are a few simple and important things you can do when starting a new job: make a particular effort to get to know your supervisor and supervisees (as their trust and advice will be critical to your success); go around and introduce yourself to everyone in the office as soon as possible; find out who your major work contacts will be and meet them for coffee or lunch, not just for a formal briefing; and get to know the admin staff – they are often underappreciated, yet can be invaluable in helping you navigate complex bureaucratic systems. Make an extra effort to be positive and friendly, even if you are not by nature social, since first impressions count for a lot. On the less social side, it’s very important to be helpful, and to be willing to give much more help to the work of others than you expect to get in return. Finally, show your expertise and what you have to offer professionally – somewhat paradoxically, by respecting and soliciting the advice of others and admitting what you don’t know, not assuming that you can apply your expertise in the new context without listening, yet also showing how you have particular skills and ideas which can be helpful precisely because you are new.
- Organizations can and should do much more to help their staff function better by supporting them to develop their social capital. This starts with hiring people who have both good inherent social capital in their field of expertise (a good reputation and connections) and good skills and competencies in developing it. Once someone is hired, the organization can do a lot to socialize them into the organization through little things such as ensuring new staff are introduced, giving them time to network before jumping into content, setting them up with “buddies” or “mentors” to help them learn the system, giving them opportunities to use and show their particular skills, and providing opportunities for them to participate in meetings or join cross-functional teams. It’s also important to give them opportunities to interact on social occasions – even simple things like inviting them for lunch or a coffee. A couple of good practices from my new office: my boss had me make a mind-map about myself when I first joined, and set aside time for a discussion about me as a person rather than just my job role. We also have monthly “get-togethers” – social events held immediately after our monthly all-staff meetings. On a larger scale, encouraging participation in communities of practice or other types of networks, whether virtual or face-to-face, gives staff members an opportunity to contribute their knowledge and build their reputation at the same time.
Many of the things that individuals and organizations can do to build social capital are quite small, but they need ongoing attention – and our workplaces could be much more effective, as well as more pleasant places to work, if we paid more attention to them.
So, the Internets are abuzz with KONY2012, Invisible Children’s latest film offering. This comes broadly in two flavours:
1. The bulk of the masses, the mainstream media, plus a number of fawning celebrities all talking about how great this is.
2. A much smaller, but increasingly loud chorus of aid bloggers, researchers, journalists and Ugandans themselves criticizing the film as oversimplistic, inaccurate, misleading and potentially harmful.
I’m not an expert on what is happening in northern Uganda, and lots and lots has been written on this already (see Brendan Rigby’s excellent ongoing compilation of articles and blog posts on the topic), but the beauty and curse of the internet is that everyone can have their say, whether they know anything or not!
So here are a few thoughts from me from a knowledge manager’s perspective:
Firstly, I couldn’t have found a better illustration of my last two blog posts on storytelling. KONY2012 nicely illustrates, on the one hand, how the most effective way to engage people is through a story – not through research reports, statistics, and official documents – but on the other hand, how a story can vastly oversimplify or even misrepresent a complex problem and leave you with little idea about what is really happening, especially if you don’t or can’t verify it, or if you rely too much on a single story.
Secondly this whole buzz does create two potentially important opportunities:
i) Kony is in the news! Maybe all this public attention to Kony and northern Uganda can actually provoke some useful discussion, and maybe all this “awareness” can be translated into increased political pressure, and even political will – not necessarily to do exactly what the campaign is requesting, but rather prompting people to learn more about a complex situation, be a little better informed, and think more about what they can (and can’t) do to help the developing world. Maybe.
ii) Smartaid and badvocacy are once again a hot topic. The potential backlash against KONY2012 opens up a useful debate about the role of advocacy, of activists and about how to communicate and fundraise for development. There’s an important “awareness raising” opportunity here too for advocates for a more nuanced understanding of development, and for a more dignified and authentic presentation of people and problems in developing countries to bring this discussion to a broader audience.
A last point is that this situation highlights a fairly fundamental problem in knowledge sharing around development – the rather large gulf of understanding and perspective between researchers, aid practitioners, advocates and activists, governments and the donating public, and the most important and least listened-to group of all – those affected by the problem (and, one hopes, the intended beneficiaries of any action). It highlights the immense challenge of bringing the knowledge of “experts” – whether researchers, aid workers or affected populations (who are in a way the real experts) – in a compelling and actionable way to those who could use it for evidence-informed action that might make a real difference to people’s lives. At least, it seems very difficult to do this without compromising the integrity of the knowledge itself; or perhaps the temptation to do so in order to get your message across is sometimes too great.
In a thoughtful blog post on KONY2012 and the difficulties of bridging this gap, James W. McCarty wrote: “In this situation I think what we need is not academics who ‘simplify better’ but activists who ‘complexify better.’”
Undoubtedly we need both, but I think we also need more knowledge brokers: intermediaries who can help bridge the gap between those who know and those who can put that knowledge to use. Their aim is not only to connect people with relevant knowledge, but also to put it into a form that is easy to use, and interesting and compelling enough for people to take notice – and then to persuade and help them to use it effectively, all while obeying the golden rule of advocacy: “simplify but don’t distort“.
As a follow-up to a recent presentation given by a colleague and myself on making our office more innovative, I started sending a weekly e-mail around the office featuring an inspirational TED talk. From our discussion it was clear that one of the important aspects of making ourselves more creative and innovative lies in our attitude to our work, and in our access to inspiration.
TED talks can be a great source of inspiration, because of the ideas they contain, but also because of their sense of optimism about how we can tackle some of society’s great challenges, or even some of the great challenges in our own personal lives.
Optimism and enthusiasm can carry us a long way in our lives, and enable us to keep going when things get tough. There’s lots written about this, and even for the most ardent rationalist, it’s not a big logical leap to see how your attitude can affect your performance, or even what you choose to do in the first place.
But I sometimes wonder if you can have too much of a good thing.
As an example – taken alone, a good TED talk can be really uplifting – but if you add them all together you might get something like this (yes, this is a real video made by TED about the 2012 conference and, as far as I can tell, not intended as satire). Suddenly it all looks rather superficial and saccharine, and not deep or inspirational at all.
So while an optimistic and positive outlook is a good thing – you can also have too much of it. Rather like a rich chocolate cake where one piece is delicious, maybe even two, but any more will leave you feeling a little sick.
Here are a few reasons why too much feel-good is not that good at all.
1. If you are not dissatisfied with the way things currently are, and are willing to make the best of it, to muddle along, to accept your friends, or your boss, or your colleagues the way they are – then how can you summon the energy to do what is needed to make things better? Sometimes you need a bit of frustration, disappointment, or even occasionally a bit of righteous anger to put the fire in your belly to change things.
2. It’s good to believe in yourself, and that you can achieve the impossible, to change the world. But sometimes you need to know when something is really impossible, or when you should step back, accept that what you are doing isn’t going to work any time soon, and go do something else which is more likely to have an impact, or which will help you preserve your mental health. Otherwise you might stick with doing the wrong thing for too long. Believing in your unbounded capabilities despite all evidence to the contrary can also make you narcissistic, and just plain insufferable.
3. It’s good to try to trust and believe in others – indeed, most societal advances depend on collaboration to some extent – but the downside of always and unquestioningly seeing the good in others is that you will quickly be taken advantage of. Seek to listen better and to understand and empathize by all means – but don’t let understanding fool you into thinking that everyone else, and everything they do, is good for you.
4. Feeling happy, as with any other pleasurable experience, wears off over time if you get too much of it, and you will need to work harder and harder just to stay where you are. Sometimes, to better appreciate the good, you need to experience the bad. You need to feel sad or disappointed sometimes, if only so that when something good comes along you can really appreciate it for what it is, rather than experience life as an undifferentiated emotional haze. Many great inventions, and particularly works of art, have been born out of sadness. So enjoy your melancholy and take advantage of it.
5. Pursuing happiness itself as a goal probably won’t work, as you will be thinking so much about whether you are happy or not that you won’t have time to experience happiness when it comes along. Better to do something, or put yourself in a situation, where you can feel positive than to try to focus on being positive when you are not really feeling it.
Don’t get me wrong. I believe we can make the world a better place, and that I have the potential to play some small part in making this happen – and I believe you can too. I also believe we need to dream a little and push ourselves to achieve more than we believe we can. I also want to believe (despite all evidence to the contrary) in the inherent goodness of humankind. And most of all I believe that believing these things is good for me.
But if I have to listen to another self-help lecture telling me to think positive, or read another round of self-congratulatory and unyieldingly positive Facebook updates or tweets, I think I’m going to be sick.
On the other hand if you can’t get enough of it here’s the song my kids were rehearsing non-stop for weeks for our school play – “Think Positive” from Charlie and the Chocolate Factory. If Charlie Bucket can do it, why can’t you! (Just keep playing it over and over and over)
I got lots of great comments on my recent blog post “If I told you a story, would you believe me?” In reading them I realize there are a couple of things I should have elaborated further on stories.
First of all, I do believe that collecting personal narratives or testimonies is a legitimate research methodology that, while obviously somewhat subjective, can nevertheless be used to collect valuable information and insights in a way that is complementary to more quantitative methods of research. Also, as Jennifer and Max pointed out, systematically collecting a large number of stories can be particularly valuable, as it can help turn qualitative data into quantitative data and cancel out some of the bias that might occur when collecting only a small number of stories. I get bothered when I hear quantitative researchers say that the plural of anecdote is not data, because done well, it is. In my last job in UNICEF we were exploring the systematic collection of stories to help measure the value of communities of practice – see this blog. This was based on an approach developed by Etienne Wenger and Beverly Trayner. Unfortunately I left UNICEF before this could come to fruition and am unsure of where the project stands now.
It is important, however, to draw a distinction between collecting testimonials from partners/beneficiaries using some kind of standardized approach, while attempting to stay in listening mode, on the one hand, and setting out to deliberately craft a storyline (even if based on “the facts” as they were recounted) that seeks to carry a specific message in order to transfer a piece of knowledge or to persuade someone to engage or take action, on the other. Both have their uses, but the first is most useful for research and evidence gathering, whereas the second is most useful for engagement and persuasion. I think some of the problems with stories are due to a misperception of, or a poor trade-off between, the two different approaches and objectives.
In practice we often try to use both approaches together: we collect “authentic” stories, but then selectively pick those which serve our purposes, or selectively edit them to better make the point we are using the story to illustrate. And the degree to which a story is a faithful representation, or a carefully selected and edited story line, is not always immediately clear to our audience (or perhaps even sometimes to ourselves). Often we do this for expediency: “well, we are doing this great qualitative research that uses authentic voices, but we can also make use of the stories in our fundraising material”.
A particular KM-related example of this mixed approach is in how good-practice case studies are used. Here the main aim is to faithfully document what happened in a project and what positive features we can learn from and potentially replicate. But at the same time we risk overemphasizing the positives of an experience without taking adequate account of the negatives as an equally valid learning opportunity, especially if we are publishing externally.
I think that identifying and sharing success stories is valuable in that it can help spread good ideas and demonstrate that positive change is possible (along the lines of Charles Kenny’s “Getting Better“) – but because of the emotional way we react to material presented in story format, we need to be especially careful not to use stories to manipulate rather than to inform. As consumers of scripted stories, while it’s OK to be moved by a story, we also need to validate it. And we need to be clear in our communication about where a story comes from and how it was prepared. Good-practice or success stories can be used to indicate what is possible, but it’s deceptive when fundraisers use them to imply that this is exactly what a donation will lead to.
Of course, while stories can be used to create positive emotions about change, they can also be used to reinforce negative stereotypes, such as painting a desperate picture of the developing world in order to mobilize more funds. The risks of this were nicely deconstructed in “The Worf effect and NGO rhetoric” by Matt of Aid Thoughts. And the dangers of relying on a single story – especially one created by outsiders rather than in a people’s own voice – are eloquently described in the great TED talk “The danger of a single story” by Chimamanda Adichie, a storyteller herself.
So I stick by my initial conclusion: listen, collect, allow yourself to be informed and moved, but also cross-check and validate.
Within the UN system (or any large organization or enterprise for that matter) there are often several competing platforms, tools and methods for knowledge sharing, internal communication and work related social networking.
There are the official tools (such as Teamworks in UNDP, UNICEF communities in UNICEF, “unite connect” in the UN Secretariat, or various Sharepoint implementations). There are the unofficial tools that people have set up, such as Yammer, Ning networks and Google groups. And then there is the use of public tools such as Facebook, Twitter and Google Plus, as well as individual websites and blogs.
Having so many options can be confusing and lead to knowledge fragmentation – but getting leadership to agree on a single tool can be very contentious, and getting people to follow official direction can be challenging, especially when people have different preferences for how they network in terms of tools, functionality and the crowd of people they want to exchange with.
Here are three possible approaches for dealing with this:
Cartel: Management agrees on a common set of tools and makes it clear to all users that they MUST use these tools and no others, and that only officially sanctioned tools will be supported.
Competition: Management makes available a set of corporately supported tools, but doesn’t impose them outright. Instead, it actively works to convince people to use the official tools by showing how much more useful, well supported and secure they are than the unofficial ones. It tolerates, but doesn’t support or encourage, unofficial tools, and provides incentives for people to use the official ones.
Collaboration: The organization provides official tools which it encourages and supports people to use, but also recognizes that for some purposes people prefer to use non-official tools and so they build interfaces to allow people to link them together e.g. to share posts via twitter or to bring in a feed from Yammer. Over time they might incorporate features from external tools that users find most useful or use external tools to provide those services which are harder to develop in-house.
This situation is made more complex in the UN because each agency has its own official tools, which differ from agency to agency, as well as its favoured unofficial tools, which have spread within and between agencies. There is no single decision point (person, agency or committee) that can map out a single approach for the UN as a whole – only an agency-by-agency approach, with informal co-ordination at best between them.
Which of these approaches (or combinations of them) do you think will work best? And how can this be applied to bring some coherence to the UN’s overall approach in this area?