KM on a dollar a day

Musing on knowledge management, aid and development with limited resources

Archive for January 2011

Knowledge management lessons from job interviews

with 14 comments

I’ve just completed a training course in “Competency Based Interviewing”. I was trained in this some years ago, but took the latest version of the training since it will soon be mandatory for anyone who sits on a job panel to have done the course, so it was time to brush up my skills in this area.

For those not familiar with the approach, the basic premise behind competency based interviewing is that traditional technical job interviews often fail to elicit information that gives you a realistic indication of how well someone will do in a new job. This is because doing a job well can be as much about behaviour as it is about technical knowledge. The personal or behavioural characteristics needed to do a job well are called competencies, and with competencies it’s not enough to know them in theory: the best predictor of whether you have them is whether you can describe how you used them in the past. Examples of competencies might be interpersonal communication, negotiation skills, or organizing and planning work.

So, to take the example of negotiation skills, a typical interview might ask how you negotiate or how you would go about a hypothetical negotiation, whereas in a competency based interview you would be asked to describe a real negotiation you were involved in, what you actually did, and what actually happened.

Another important feature is that these behaviours can change over time based on experience – but often also require a concerted effort of self reflection, feedback and commitment to action.

Another feature of competencies is that while people naturally have different levels of them, they can also be learned through experience and a conscious effort to improve, so the interview also looks for evidence of self-reflection, of learning from what didn’t work, and of whether that learning was applied in the future.

The use of competency based interviewing and testing is backed up by a fair bit of research and experience, and is the norm in many large organizations, including in UNICEF (in fact these were adopted by many private sector organizations back in the 1980s).

What is a little more surprising to me is that we often struggle to apply a similar approach when looking at aid organizations, or even more specifically at programming approaches or practices. There seem to be a few key lessons we could use to assess the capabilities of teams and organizations, as well as particular approaches or practices in development. In short:

1. One of the best predictors of whether something will work is whether it worked in the past. (Not whether it works in theory.)

2. Why it worked in practice (or didn’t) might be best analyzed not by the theory of the approach that was followed but by what actually happened and how this contributed to the success or failure. This requires self-reflection AND feedback from others.

3. An important element of improvement is looking at what didn’t work and extracting lessons from it. BUT extracting the learning isn’t enough – for a lesson to be really learned it then needs to be applied successfully in practice (otherwise it is still hypothetical learning).

4. Another element of the interview process that applies well to assessing an approach or team is to observe first, assess later. That is, collect all the observations you can before you make an assessment. In job interviews first impressions can count for a lot, but they can also be misleading, so in competency based interviews you are trained to observe and record only, and to assess only once all the data has been collected. This is good advice for programme assessments too.

5. Many eyes are better than one. Interviews have a panel of interviewers because each panelist may see different things and come to different conclusions, so having several people helps build a more complete picture of a candidate. Similarly, panelists should compare data before comparing assessments, so as not to unduly influence each other. This is also great advice for programme assessments – use multiple viewpoints, compare observations first, then draw conclusions. Of course many assessment methodologies embody this, BUT often the assessment is already creeping into the researcher’s mind before the evidence is formally analyzed, and this inevitably leads to (unconscious) bias in the assessment.

I’m sure there are many other lessons too, but there were a few take-aways for me that are useful for work on identifying and applying lessons learned.

If you are interested to learn more about competencies and competency based interviewing, here is an old guide from UNICEF (undated, but I’d probably place it around 2005). The current guide, with our updated framework and example questions, is not available online – probably for the obvious reason of not giving candidates too much of a jump on possible questions! But the old guide gives you a good sense of what the approach is about.

Written by Ian Thorpe

January 27, 2011 at 10:40 am

Do aid agencies need venture capital?

with 12 comments

William Savedoff from the Center for Global Development recently wrote an interesting blog post on “What can development agencies learn from venture capital firms” where he looks at what might be learned from venture capital firms in terms of how they manage their investments versus how aid agencies manage the projects they “invest” in.

But there’s another key area of venture capital I’d be interested to explore: what is the possible role of venture capital itself in funding development startups or new aid technologies?

When I shared my recent blog post “failure without borders” internally one interesting comment was that donors generally feel more comfortable funding things that are “tried and tested” and are reluctant to put money into new approaches, especially where there is a high chance of failure, even when the payoff could be great. It is possible to get funding for some pilots, but rarely for something that hasn’t already been tried elsewhere or by someone else.

One interesting solution might be to set up a kind of “venture capital” fund for development, i.e. a set-aside funding source specifically designed to invest in high-risk, potentially high-payoff innovations and pilot projects that are otherwise unlikely to be funded. This would be different from regular venture capital, of course, in that the projects themselves might not result in a financial profit for the investors, but rather a public good in the form of new and better ways to deliver aid or promote development.

For this to work a few things would be needed:

i) a method for selecting investment projects that is transparent, robust but also quick and fairly non-bureaucratic. This should seek to identify ideas with high potential benefits and good management plans while not discounting ideas just because they are difficult or risky. It might be good to include actual venture capital experts and entrepreneurs in reviewing proposals to help avoid this being a more typical grant selection process.

ii) projects would need to have some form of clear monitoring and evaluation framework so that progress and results can be tracked. This should include monitoring of impacts both intended and unintended as far as feasible, and some sort of end of project assessment.

iii) there would need to be some kind of exit strategy, so that the venture funding is restricted to a certain time period, after which either the project is picked up for financial support through the regular agency funding mechanisms, or by government or another investor (as a tried and tested, or at least highly promising, approach), OR, if the project did not realize its promise, a failure or lessons-learned report is produced and the project is closed.

A few other things might be desirable:

i) given the high risk and high potential failure rate of these projects, it might be good to seek private philanthropic funding rather than public funding (for example, New York City adopted this approach by securing private funding to test out new initiatives such as the now-abandoned conditional cash transfer programme Opportunity NYC).

ii) As Savedoff mentions, venture capitalists often provide quite a bit of advice and support to the startups they fund. This would also be desirable for aid startup projects, and perhaps the type of advice and advisor needed is not your typical aid project manager, but instead someone experienced in startups or successful aid pilots who can provide practical advice to the project managers.

iii) It would also be good if some of the standard aid project management requirements could be relaxed for the duration of the pilot – in particular allowing the project freedom not to use a standard logframe or results matrix, but rather allow them to be more flexible to evolve the workplan and  targets as they learn from experience.

So, anyone got a few million to spare for me to try out this new (high risk, potentially high return) idea?

(P.S. I’d like to give a quick shout out to Dennis Whittle, since a lot of the ideas in this post come from a very stimulating conversation I had with him over coffee a few months ago)

Written by Ian Thorpe

January 25, 2011 at 8:32 pm

There’s no such thing as a free journal

with 12 comments

Roving Bandit recently blogged with justifiable indignation about how Elsevier, a leading academic publisher (and publisher of the Lancet) had revoked their deal that offered free journal access to many in the developing world.

In an ideal world academic research would be free to everyone. Costs for access to research are an important barrier to access to knowledge in developing countries, but not only there. We are struggling with limited budgets ourselves to provide access to staff to the most valuable information sources and have to make hard choices about what we provide to staff and what we don’t. Many smaller organizations find this even more challenging than we do.

Journals are not “free” to produce of course. Producing them costs money, whether it’s to organize the peer review process, for editing, layout, printing, distribution, advertising, web design, subscription management and so on.

So the real question is who should pay if we want to broaden access? There are a few different options none of which are fully satisfactory:

1. Developed world pays and subsidizes free access for developing countries. This, until recently, was the Elsevier model. It’s not foolproof since i) there are rich organizations in the South who can afford it, and poor ones in the North who need it but can’t afford it; ii) I’m sure there are loopholes whereby people from the North access research via Southern institutions; and iii) at what stage can a country “afford” to pay for itself, and how much of a premium is the developed world willing to pay to provide access to the developing world?

2. Means tested – some other kind of benchmark around who pays and who doesn’t, based on some perceived ability to pay. This is probably a non-starter, since who would be able to define and monitor such a scheme to ensure it is fair? In practice publishers do provide discounts to certain organizations on a case-by-case basis, but this is probably as much a result of negotiation as it is of merit.

3. Publicly funded – in his post Lee was advocating the threat of nationalization to ensure open access (I’m presuming not seriously). There are notable publicly funded open access research databases such as PubMed, and consortia of organizations such as PLOS (Public Library of Science). But PubMed is limited to publicly funded research, and sometimes there is a delay before it becomes available. There is a limit to how much leverage governments have with journal publishers, and a limit to how much research and publishing they are willing to fund themselves. Also, with nationally run publishing operations there is always a perception (if not the reality) of the review process being politicized rather than merit based, as well as inefficient.

4. Philanthropically funded private provision – maybe foundations and major philanthropists could agree to pay Elsevier and others for their research and then provide it for free. Providing free access to journals is probably not at the top of the list of things philanthropists want to be remembered for, though. Another challenge would be setting the price – how much would global access to the Lancet be worth, for example? Without payment, how can you work out which journals or articles are the most valuable? (The Lancet is expensive because people think it is good and are willing to pay a high price for it.)

5. Advertising-supported market provision – this is a model that I’m surprised has not been adopted more (maybe there’s some obvious reason – please let me know in the comments). Just as Google search or Facebook are free but supported by advertising, shouldn’t it be possible to provide free public access to academic journals or research articles supported by some form of advertising? Those journals or articles in greatest demand should attract the most advertising revenue, and so the greatest funding, thus enabling their publishers to maintain their economic incentives while still providing access to knowledge for free to “consumers”. It might be possible to charge for premium features such as print versions, company-branded portals etc.

Something in me would like 5. to work. It would be great if there were a market oriented version that also provided public access. I suspect though that one of the biggest challenges is that of journal prestige. Everyone wants to get their work published by a prestigious (i.e. exclusive) journal as a sign of the quality of their work. This means it’s hard for newcomers to enter the market, and so it would be hard for a new journal with a different business model to get established and gain credibility to the extent that it could sustain itself economically.

Maybe what is needed is for a few major investors, philanthropists, governments and researchers to get together to say openly that the current publishing system isn’t serving the public interest and to support a few high-profile, high-quality pilot journals that are both public access and commercially viable that can break the current business mould.

Anyone up for the challenge?

(for another take on this issue – and the problems with the academic publishing “business” as a whole read this great post by Josef Scarantino – Africa needs an open publishing manifesto for academia…the time is NOW)

Update: Lee (Roving Bandit) has written a new blog post elaborating on his views (More Clarifications: On Academic Publishing) which is well worth checking out. I think we agree that research should be a public good – where we differ is on what might be the best model to deliver it. Whatever that is, I think it could only be good if more people started to make noise about this.

Written by Ian Thorpe

January 24, 2011 at 2:54 pm

Owen Barder talks to UNICEF about knowledge, evolution and effective aid

with 5 comments

This past Monday we were lucky enough to have Owen Barder come by our office to give a webinar on knowledge for development. His presentation is given below. I don’t have a recording of the meeting but here’s a much shorter video of a similar talk from last year’s AgKnowledge Share Fair in Addis Abeba which is also well worth a look.

The presentation was packed full of thought- and conversation-provoking insights, especially for those of us working in a large “traditional” aid organization. And quite a few additional ideas came up during the Q&A at the end.

A few takeaways:

  • Knowledge is a major driver of development (and inequalities in knowledge are key determinants of inequalities in development)
  • Development problems are “wicked problems”, i.e. they are complex (see my previous post on complexity)
  • Complex problems are best solved by evolution, not by top down “intelligent design”.
  • Solutions evolve through experimentation – trying lots of ideas, and by good feedback loops, collecting and sharing data on results, listening to beneficiaries in order to identify which ideas work and continually improve them.
  • Compare any solution against the benchmark of “just giving cash to the poor”

In the Q&A Owen made a number of other interesting observations, one of which was that in development organizations there is a tendency to focus on knowledge sharing as a dissemination exercise, (if only we could get our knowledge out there into the hands of practitioners and policy makers). In practice one of the biggest constraints is actually the demand for knowledge. Aid workers have too many things to do, and are not rewarded for or required to keep on top of the latest knowledge and experiences in their field. So to help knowledge spread we need to free up and incentivise aid workers to seek out knowledge that will help them do their jobs better (blog post forthcoming on this!).

Another interesting discussion was about the future role of large aid organizations, and how to move an organization such as ours into this new way of working. Owen used the example of how technology has dramatically changed the nature of the travel business. In the past you would go into a travel agency and they would be in charge of picking the flights, finding the best price, looking for itineraries etc. whereas now the travel service is much quicker, more efficient and offers more choice, but is largely self service.  He explained that eventually the “aid business” will go the same way as donors look to interact more directly with beneficiaries and to have more direct choice in what they fund and how they receive information about their “investments” and how they are doing. The role of aid organizations here would be to provide and facilitate the platform for exchange.

In terms of moving organizations he mentioned two elements: i) creating competition, so that the best organizations and ideas are the ones that thrive, with greater transparency being one means to encourage this; and ii) since large organizations are hard to move from within, setting up small-scale projects outside the mainstream (he referred to them as “skunkworks”) which can innovate by working outside existing rules (and which can take risks, where failure is an option) to develop new approaches that, if successful, can then be adopted by the broader organization. In fact there are a few examples of this already within the UN (one example would be the UN Global Pulse project) – but we certainly need more of them, and I’d like to volunteer Knowledge Sharing as one of them!

All in all it was a very interesting and inspiring discussion. It’s still very challenging for large aid organizations like ours to take on these ideas, but I hope this was a small step towards building some internal momentum for change.

Written by Ian Thorpe

January 20, 2011 at 8:50 am

Who’s afraid of complexity in aid?

with 14 comments

Complexity is fast becoming a hot topic among development economists and aid bloggers. There have been a number of great presentations and papers on this (Owen Barder and Ben Ramalingam have both written accessibly on this on their blogs and in papers). There has also been some back and forth about what complexity really is, and whether people are really understanding and using it properly, some of which is quite academic, and a little intimidating.

This increased attention is a positive thing since in development many of the real life situations we deal with can indeed be characterized as “complex adaptive systems” and so treating them as if they are engineering problems with a clear linear cause and effect, as has been the tendency in aid planning in the past, will continue to lead to disappointing results.

But I’m also a little concerned that the tone of some of the current discussion can also leave the non-initiated with some unfortunate and mistaken impressions:

1. Complexity is, well, complex (or is that complicated?), so to understand it you need to be really, really smart. The rest of us should stay away for fear of making a faux pas and looking stupid (maybe the tone and language of some of the academic debate doesn’t help).

2. Given how hard this is to understand, if we are to take it seriously in how we plan and deliver aid, we will need to hire high-powered academics and management consultants, or create a specialized cadre of “complexity officers”.

3. Since development is so complex, we can never be really sure about the results of what we are doing, so why try? Won’t the system just “evolve” itself into the most suitable outcome over time anyway? And won’t this be much more effective and efficient if we don’t interfere?

Not quite.

Complexity is indeed a complicated field of study, and like with many other topics in development, one around which there is incomplete knowledge and different schools of thought. So if you want to publish academic papers on it, or debate about it with other intellectuals you need to get yourself up to speed.

BUT even without a deep academic knowledge it’s quite possible to understand what a complex system is, recognize one when you see one, and use some simple approaches to deal with it.

Complex systems are not something new, they are as old as humanity and we have all been navigating them mostly successfully before we were even aware that such a topic existed.

One of my favourite analogies for this is that of raising a child. Yes, we can read books and get advice on how to do it. But there is no fail-safe recipe for how to raise a healthy well-adjusted child.

Why is this the case? There are many actors involved – not just us, but also other relatives, teachers and peers, and most importantly the child herself – and they all have different views and interests in the raising of the child. There are many environmental factors which you influence but don’t fully control, such as the town you live in, the school you send the child to, what the child eats, exposure to illness, exposure to violence in society, consumption of media etc. The relationship between the actors and the factors is “complex”: it’s hard to predict what approach will work best at a given time, and it’s not certain that what worked for one child will work for another. It’s hard (as any parent will tell you), but it’s certainly possible. People do it all the time.

Turning back to aid and development, there are multiple actors and factors all interconnected in ways which make the outcome of any specific action very hard to predict. But that doesn’t mean that there is nothing we can do, nor that we need deep academic knowledge or expensive consultants. An aid agency is only one of the multiple actors in the system but it can still take action and make an impact, even if the exact nature and scale of the impact can’t be easily predicted in advance.

Here are a few suggestions:

1. Start with something that seems a reasonable approach based on what we know at the outset (drawing on information such as what has worked elsewhere, whatever scientific literature exists, what partners say, and what our own experience and instinct tell us).

2. Adapt your approach over time in light of your actual experience and how well you are doing. Be prepared to modify and improve your approach continually based on what actually happens rather than what theory or past experience tells us.

3. If things are not working at all, admit it, stop doing it, and try something else.

4. Continually collect and widely share data and information on what you are doing and what the results are.

5. Build in feedback mechanisms to see how you are doing including feedback from beneficiaries.

6. Try multiple experiments – don’t put all your resources into a single approach. This way you can compare different methods and then scale up the most promising one(s).

7. Look out for and be open to unintended outcomes. These could be both negative and positive. It might be that the project has a positive benefit, just not the one you were initially looking for when you started out. Also, small changes can sometimes have large impacts.

And for those who are ready for it there are more sophisticated tools and approaches you can use in practice for programming (e.g. action learning, fail fast) and continual learning and evaluation (e.g. most significant change, outcome mapping).
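Taken together, the suggestions above amount to a simple explore-then-scale loop: run several small parallel pilots, collect feedback data on each, then scale up what works and stop what doesn’t. Here is a minimal sketch in Python just to illustrate that logic – the approach names and success rates are entirely made up for the example, and in real life the “true” rates are of course unknown:

```python
import random

random.seed(42)  # make the illustration reproducible

# Hypothetical success rates for three pilot approaches.
# In reality we never see these directly - only trial outcomes.
true_rates = {"approach_a": 0.20, "approach_b": 0.50, "approach_c": 0.35}

# Suggestion 6: run several small parallel pilots, not one big bet.
results = {name: [] for name in true_rates}
for _ in range(500):  # 500 trial rounds per pilot
    for name, rate in true_rates.items():
        results[name].append(1 if random.random() < rate else 0)

# Suggestions 2, 4 and 5: collect and compare the feedback data.
observed = {name: sum(r) / len(r) for name, r in results.items()}

# Suggestions 3 and 6: scale up the most promising approach
# (and be prepared to stop the ones that clearly aren't working).
best = max(observed, key=observed.get)
print("Observed success rates:", observed)
print("Scale up:", best)
```

The point of the sketch is not the code itself but the structure: observation comes before assessment, comparison happens across multiple experiments, and the decision to scale is driven by what actually happened rather than by the theory behind any one approach.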

There are many others who have written much more eloquently than me on this, but I just wanted to put down online that development is a complex problem, but there are still simple things we can do to work on it if we are just prepared to look at it differently from how we have in the past.

[postscript: by chance, or perhaps by spontaneous order, Bill Easterly also posted on this exact same topic on aidwatch today – I’d suggest you also take a look at what he has to say]

Written by Ian Thorpe

January 19, 2011 at 8:30 am

Failure without borders

with 20 comments

We all make mistakes.

Some of the most valuable life lessons come from the significant mistakes and hard knocks we take. In the business world it’s often said that the successful entrepreneur is someone who has persevered through a lot of failures.

The aid world not so much.

There are many reasons it’s hard to admit failure if you are in the aid world. We believe that our donors will not fund us if we admit fallibility. We believe we can’t afford to fail if we are using public money. Our reporting structures and tools encourage us to upsell our achievements and downplay our failures. Receiving funding is often seen as a big sign of success (and on this scale I must admit to failing big), and perhaps understandably your pilot project will only attract scaled-up funding if it is a success.

Yet, if we don’t admit our failures, how can we learn from them and stop repeating them – or worse, continue them while telling the world they are successes when they are, in reality, flawed? And how can we try to innovate and tackle emerging problems if we are afraid to fail?

There are a few hopeful signs of change:

Engineers Without Borders (EWB) just launched this excellent website at their recent annual conference. This is a site where organizations can share their aid-related stories of failure. EWB, GlobalGiving and the Peace Dividend Trust have all committed to entering examples, and I hope more organizations will choose to do so too.

EWB have already set a good example through their annual failure report, where they list some of their notable failures over the past year. This practice was also recently adopted by the Peace Dividend Trust, who issued their first failure report this year.

Early last year MobileActive created a concept known as Fail Faire, an event where ICT for Development practitioners shared their failed projects and what they had learned from them. Here’s a blog post I wrote about the event for MobileActive (before I had my own blog). The concept has now been replicated several times, with another event on ICT for Development hosted by the World Bank, and other faires run by Ashoka and SOCAP (an annual conference on social capital markets). MobileActive have also created a handy set of tips on how to organize your own Failfaire.

Let’s hope that more aid agencies will pick up this trend (including my own), and that more donors will support them to do so. And best of luck to all you brave failers – you are the ones that are really creating new knowledge for development.

Written by Ian Thorpe

January 16, 2011 at 10:10 am

Posted in Uncategorized

Up in the air

with 8 comments

Apparently one way to get more hits on your blog is to mention celebrities. Tricky on a blog about knowledge management, but here goes.

Many of you will have heard about the George Clooney project to use satellite imagery to help deter possible genocidal attacks following on the South Sudan independence referendum. Many people much more knowledgeable, and witty than I am have called into question the wisdom or likely utility of this project.

But what got the twittersphere abuzz yesterday were Clooney’s own remarks in response to his critics:

“I’m sick of it,” he said. “If your cynicism means you stand on the sidelines and throw stones, I’m fine, I can take it. I could give a damn what you think. We’re trying to save some lives. If you’re cynical enough not to understand that, then get off your ass and do something. If you’re angry at me, go do it yourself. Find another cause – I don’t care. We’re working, and we’re going forward.”

As Joshua Keating rightly notes in this FP piece, “This kind of ‘at least I’m doing something’ rhetoric drives development scholars absolutely bonkers and for good reason”. While we might feel morally compelled to do “something” or “anything” about a pressing problem, that doesn’t mean we should. In fact, by doing the wrong thing we might actually make things worse. But I’d like to unpack this a little (from a knowledge management point of view, of course).

It’s rare that we have either:

i) no knowledge whatsoever about what to do, but decide to do something anyway or

ii) enough information to be absolutely certain that we are doing exactly the right thing (see related post “the truth is out there”).

So in practice we’re frequently faced with a compelling problem, but with incomplete knowledge about how to handle it. Different people will feel they need a greater or lesser amount of information, and confidence in it, before they act. In aid work, where there are a lot of unknowns, there is often a need for a judgement call as to when to act and when not to act based on what we do and do not know – with advocates erring more towards action and researchers more towards “needs more research”. So perhaps the difference between Clooney and the aid commentariat is not necessarily a difference of approach, but simply of degree along this continuum – he’s more of an advocate than a scholar.

A couple of additional points:

1. If we have a potentially useful idea, but it’s never been researched or tested in the current set of circumstances, then the only way to really find out is just to do it. We’d be foolish to tell people that it is certain to work, and we will need to carefully and honestly monitor it to see if it does work, or if it has any negative unforeseen consequences, and be prepared to modify or drop it if needed.

2. Of course while we can’t know everything before acting, it would be extremely remiss not to consider information about the approach that is already easily accessible before deciding whether to go ahead.

So on this basis, if I had to say what I thought about Clooney’s project and his reactions to critics, I’d have to say it’s up in the air…

Written by Ian Thorpe

January 11, 2011 at 10:10 am

20 (deceptively) low cost ideas for development

with 13 comments

AlertNet recently issued a piece, “AlertNet’s top 20 big ideas that don’t cost the earth”, which highlights a number of proven, low-cost, mostly low-tech or easily replicable tech development solutions. These range from traditional items such as bednets, breastfeeding and handwashing to more recent development innovations such as mobile health or solar power.

On reading this I had three reactions, in this order:

1. Wow isn’t this a great list. Imagine what the potential could be if these simple approaches could be widely adopted.

2. Wait – if these are so good, how come they are not already used more widely? Some are new, but some have been known about for a long time – why are they not being used more?

3. Are they really as effective, simple and low cost as they are being presented in this list?

For some of the examples given, dissemination might well be a problem – as yet they are not widely known by developing country governments and aid agencies, or, even where they are known, there is still a need for better technical documentation on how exactly they work. Here both communication and knowledge management have a role to play: first to raise general awareness and interest in the approaches, and then to share the technical details needed to make them work.

In quite a number of the examples given, the ideas are not new, and the technical side is well known (e.g. manufacturing specs for TB detecting paper), but nevertheless they are still not as simple as they first appear.

Let’s take the well documented and well discussed example of bednets. On the one hand, there is general agreement that sleeping under an insecticide-treated bednet is an effective means to help combat malaria. Check. On the other hand, getting this to happen is a little more complicated than it appears on the surface. Let’s take a (very simplistic) look at how bednets get there and get used to illustrate this.

Step 1: Manufacture – where should we do this? Do we buy in bulk from overseas to ensure quality, regular supply and good price (assuming we can persuade the government not to charge import taxes)? Or do we foster local manufacturing to better stimulate local markets, employment and possibly demand?

Step 2: Delivery/distribution – how do we get the bednets to the people? The first challenge might be logistics: how to get the bednets transported to the most remote areas, especially in conditions of insecurity? Second, how about distributing them – where and how should this be done? Should they be distributed in health centres, door to door, sold in the market, or given out at schools?

Step 3: Use – how do you persuade people to use bednets, especially if they are not familiar with them, don’t believe they work, don’t like sleeping under them, or perhaps believe it makes more sense to sell them or use them for something else? Bednets also have a limited lifespan and need to be replaced every 4 to 5 years – less well documented is how best to get people to consistently replace them once they are no longer effective.

Bonus step – Money. Although cheap, bednets still cost around $5 each (and I’m not sure if that includes all the costs of distribution and marketing), which makes them unaffordable to the poorest (or at least not a top priority in a highly constrained household budget). So who should pay for them? Should the money come from governments, donors or NGOs? Should the bednets be distributed free of charge, or should some part of the cost be paid by the user? And how does the cost-effectiveness of this compare with other interventions for malaria, or for that matter in health or development more generally?

And of course these four steps are not in reality distinct – they all influence each other, sometimes in unknown ways.

So while a good idea such as bednets may in and of itself be relatively cheap, and proven to be effective, making it happen in practice is rarely simple or easy. Many of the challenges lie not in the technology itself, but in developing systems (wouldn’t it be much easier to distribute bednets if you had a well-functioning, adequately financed health system?) and in changing human behaviour.

While “awareness raising” about the potential of these approaches is valuable, I’m not sure that giving the impression that they are easy technological fixes is quite so helpful. And from the perspective of a knowledge manager, the innovations and knowledge that seem most valuable to share are not the technologies themselves, but rather what it took in terms of policy, politics, systems, procedures, behaviour change and participation to enable these technologies to be adopted and used successfully – and whether or how these can be adapted to work in different contexts.

Written by Ian Thorpe

January 10, 2011 at 9:46 am

New kids on the block meet law and order

with 5 comments

In UNICEF we’re working hard on fostering communities of practice, and I participate in a good number of more and less formal online communities. One thing that fascinates me about them is community culture.

Oft-believed myths about social media and online communities are that they are open, egalitarian, anarchic and transparent. They may be more of these things than traditional collaboration structures, but even the most “liberal” of them are far more structured than meets the eye, and often this structure is not explicit but is something that emerges over time as “culture”.

To give an example, I’m a member of the Adoption 2.0 Council, a community of practitioners working on introducing social technologies (or in the group’s lingo, Enterprise 2.0) into their workplaces. It is a really well run, dynamic community with a rigorous selection process for members and fairly detailed guidance for newcomers on how to get started and what the rules and expectations of membership are. So while it is “2.0”, there are nevertheless a few “rules”, and as I’ve been there a while I’ve also noticed that there are unwritten “social norms” within the group, and a kind of unofficial hierarchy of members based on experience (partly reinforced by a points and ranking system to recognize contributions on the main online platform). These are things you wouldn’t fully grasp without participating, and I suspect they have changed over time.

To take a very different example, look at Twitter. It has few explicit rules, is very simple, and is open to anyone to participate and engage with anyone else, without any official guidance and with no real moderation or management. Yet it wasn’t very long before conventions emerged (retweeting, Follow Friday etc.), before “experts” started telling you how you should and should not be using Twitter, and before Twitter celebrities emerged – not only among actual celebrities, but among people you must follow and must listen to, but of whom you had never heard before Twitter.

I’m now curiously watching the evolution of Quora, which currently seems to be exploding on the web. Like Twitter, Quora is quite open with relatively few rules, although the Quora team have been posting some tips. That said, experienced Quora users are already getting concerned about the behaviour of newcomers and are seeking to lay down the law. One post was particularly strong, but it also seems to have been very popular. At the same time, other established users are trying to take a softer, more welcoming line with newcomers. It’s going to be interesting to see where this goes, but rules, norms and hierarchies are clearly emerging, and people are jockeying to gain position and influence (or to maintain it).

When we were setting up our own communities we had a number of discussions around rules and norms, but in the end settled on very few, most of which are not explicit. A few conventions are built into the collaboration platform itself – you can’t contribute anonymously, for example, because the system doesn’t let you. You can’t see past the homepage in a community unless you join, and each community decides whether to let anyone join or whether you need to be approved by the community facilitator. We also have an implicit hierarchy of membership between founders (organizational sponsors), facilitators and regular members. But we didn’t codify much in terms of rules, and have been cautious in our guidance, giving examples of how things can work rather than saying definitively how they should.

It’s interesting to see that each community over time has started to develop its own way of working, with its own norms. Some are much more formal than others, some much more centralized, some prefer blogging whereas some prefer discussion groups. I also wonder how newcomers find it when they join communities that have already been established – I think we might not be fully aware of how open things look to them.

A few general thoughts:

  • Every community establishes sets of formal or informal rules, social norms and hierarchies over time, whether or not they explicitly set out to do this and whether or not they formally “design” these. The actual norms of a community might well not be the ones you formally designed or what is written in the rules.
  • Technology influences these norms (and good design can be used to help direct this), but only to an extent.
  • Norms and hierarchies evolve over time. Communities start out much flatter at the beginning when people haven’t yet established the norms and their positions within the group but inevitably some kind of “social order” is created over time.
  • The first active members of a community play a key role in establishing its norms. They set the tone, and get a natural “reputation boost” by virtue of having been around the block. New members must work much harder to get a foothold, to have their contributions accepted, or to develop influence within the community. Old timers have a key role to play in welcoming newcomers – they can welcome, guide, advise and nurture – but they should also resist the temptation to be condescending, or to dismiss a newcomer’s possibly legitimate questioning of the established wisdom and norms of the group.

So watch out, you new kids on the block – even in the 2.0 world you need to respect your elders. And oldtimers – if you *really* believe in the ideas of 2.0, you might need to think about what sort of example you are setting: are you a good cop or a bad cop?

Written by Ian Thorpe

January 7, 2011 at 1:14 pm

Posted in Uncategorized

An example of actual work we do on lessons learned

with 2 comments

Just a quick blog post to promote some of the actual work we do in our KM team (as a change from me pontificating).

We’ve just released a compendium of some of the more interesting innovations and lessons learned reported to us by our country offices on their 2008 programmes of co-operation. See these on UNICEF’s website here.

Note: as regular readers of this blog will know, I’m not a fan of best practices. Rather, we prefer to document and share real-life practices and experiences that can be used to help inform or inspire other programmes. This publication is an example of some of the experiences we have documented. It’s worth noting that while we do research, editing, questioning, reviewing etc. from headquarters, all these experiences are initially proposed and written up by our country offices based on their own reflections and experience – so congratulations and thanks to them for making the effort to share their work.

Written by Ian Thorpe

January 7, 2011 at 11:23 am

Posted in Uncategorized
