Archive for January 2011
I’ve just completed a training course in “Competency Based Interviewing”. I was trained in this some years ago, but took the latest version since the course will soon be mandatory for anyone who sits on a job panel, so it was time to brush up my skills in this area.
For those not familiar with the approach, the basic premise behind competency based interviewing is that traditional, technically focused job interviews often fail to elicit information that gives a realistic indication of how well someone will do in a new job. This is because doing a job well can be as much about behaviour as about technical knowledge. The personal or behavioural characteristics needed to do a job well are called competencies, and it’s not enough to know them in theory: the best predictor of whether you have them is whether you can describe how you used them in the past. Examples of competencies include interpersonal communication, negotiation skills, and organizing and planning work.
So, to take the example of negotiation skills, a typical interview might ask how you negotiate, or how you would handle a hypothetical negotiation, whereas a competency based interview would ask you to describe a real negotiation you were involved in, what you actually did, and what actually happened.
Another important feature of competencies is that while people naturally have different levels of them, they can change over time: they can be learned through experience and a conscious effort to improve, usually involving self-reflection, feedback and commitment to action. The interview therefore also looks for evidence of reflecting on what didn’t work, and of whether the lessons were then applied in the future.
The use of competency based interviewing and testing is backed up by a fair bit of research and experience, and is the norm in many large organizations, including in UNICEF (in fact these were adopted by many private sector organizations back in the 1980s).
What is a little more surprising for me is that we often struggle to apply a similar approach when looking at aid organizations, or even more specifically at programming approaches or practices. There seem to be a few key lessons which we could use to also assess the capabilities of teams and organizations, as well as to particular approaches or practices in development. In short:
1. One of the best predictors of whether something will work is whether it worked in the past (not whether it works in theory).
2. Why it worked in practice (or didn’t) might be best analyzed not through the theory of the approach that was followed but through what actually happened and how this contributed to the success or failure. This requires self-reflection AND feedback from others.
3. An important element of improvement is looking at what didn’t work and extracting lessons from it. BUT extracting the learning isn’t enough – for a lesson to be really learned it then needs to be applied successfully in practice (otherwise it is still hypothetical learning).
4. Another element of the interview process that applies well to assessing an approach or team is to observe first and assess later. That is, collect all the observations you can before making an assessment. In job interviews first impressions can count for a lot, but they can also be misleading, so in competency based interviews you are trained to observe and record only, and to assess only once all the data has been collected. This is good advice for programme assessments too.
5. Many eyes are better than one. Interviews have a panel of interviewers because each panelist may see different things and come to different conclusions, so having several people helps get a more complete picture of a candidate. Similarly, they should compare data first before comparing assessments so as not to unduly influence each other. Also great advice for programme assessments – use multiple viewpoints, compare observations first, then draw conclusions. Of course many assessment methodologies embody this – BUT often the assessment is already creeping into the researcher’s mind before the evidence is formally analyzed, and this inevitably leads to (unconscious) bias in the assessment.
I’m sure there are many other lessons too, but these were a few take-aways for me that are useful for work on identifying and applying lessons learned.
If you are interested in learning more about competencies and competency based interviewing, here is an old guide from UNICEF (undated, but I’d place it around 2005). The current guide, with our updated framework and example questions, is not available online – probably for the obvious reason of not giving candidates too much of a jump on possible questions! But the old guide gives you a good sense of what the approach is about.
William Savedoff from the Center for Global Development recently wrote an interesting blog post, “What can development agencies learn from venture capital firms?”, where he looks at what might be learned from how venture capital firms manage their investments versus how aid agencies manage the projects they “invest” in.
But there’s another key area of venture capital I’d be interested to explore: what is the possible role of venture capital itself in funding development startups or new aid technologies?
When I shared my recent blog post “failure without borders” internally one interesting comment was that donors generally feel more comfortable funding things that are “tried and tested” and are reluctant to put money into new approaches, especially where there is a high chance of failure, even when the payoff could be great. It is possible to get funding for some pilots, but rarely for something that hasn’t already been tried elsewhere or by someone else.
One interesting solution might be to set up a kind of “venture capital” fund for development, i.e. a set-aside funding source that would be specifically designed to invest in high-risk, potentially high-payoff innovations and pilot projects that are otherwise unlikely to be funded. This would be different from regular venture capital of course in that the projects themselves might not result in a financial profit for the investors, but rather public good in terms of new and better ways to deliver aid or promote development.
For this to work a few things would be needed:
i) a method for selecting investment projects that is transparent and robust, but also quick and fairly non-bureaucratic. It should seek to identify ideas with high potential benefits and good management plans, while not discounting ideas just because they are difficult or risky. It might be good to include actual venture capital experts and entrepreneurs in reviewing proposals, to help avoid this becoming a more typical grant selection process.
ii) projects would need to have some form of clear monitoring and evaluation framework so that progress and results can be tracked. This should include monitoring of impacts both intended and unintended as far as feasible, and some sort of end of project assessment.
iii) there would need to be some kind of exit strategy, restricting the venture funding to a certain time period. After that, either the project is picked up for financial support through the regular agency funding mechanisms, or by government or another investor (as a tried and tested, or at least highly promising, approach), OR, if the project did not realize its promise, a failure or lessons learned report is produced and the project is closed.
A few other things might be desirable:
i) given the high risk and high potential failure rate of these projects, it might be good to seek private philanthropic rather than public funding (for example, New York City adopted this approach by securing private funding to test out new initiatives such as the now abandoned conditional cash transfer programme Opportunity NYC).
ii) As Savedoff mentions, venture capitalists often provide quite a bit of advice and support to the startups they fund. This would also be desirable for aid startup projects, and perhaps the type of advisor needed is not your typical aid project manager, but someone experienced in startups or successful aid pilots who can provide practical advice to the project managers.
iii) It would also be good if some of the standard aid project management requirements could be relaxed for the duration of the pilot – in particular, freeing the project from using a standard logframe or results matrix, and instead allowing it the flexibility to evolve its workplan and targets as it learns from experience.
So, anyone got a few million to spare for me to try out this new (high-risk, potentially high-return) idea?
(P.S. I’d like to give a quick shout out to Dennis Whittle, since a lot of the ideas in this post come from a very stimulating conversation I had with him over coffee a few months ago.)
Roving Bandit recently blogged with justifiable indignation about how Elsevier, a leading academic publisher (and publisher of the Lancet) had revoked their deal that offered free journal access to many in the developing world.
In an ideal world academic research would be free to everyone. The cost of access to research is an important barrier to knowledge in developing countries, but not only there. We ourselves struggle, with limited budgets, to provide staff with access to the most valuable information sources, and have to make hard choices about what we provide and what we don’t. Many smaller organizations find this even more challenging than we do.
Journals are not “free” to produce of course. Producing them costs money, whether it’s to organize the peer review process, for editing, layout, printing, distribution, advertising, web design, subscription management and so on.
So the real question is who should pay if we want to broaden access? There are a few different options none of which are fully satisfactory:
1. Developed world pays and subsidizes free access for developing countries. Until recently this was the Elsevier model. It’s not foolproof, since there are rich organizations in the South who can afford to pay, and poor ones in the North who need access but can’t afford it. Also, I’m sure there are loopholes whereby people from the North access research via Southern institutions. And at what stage can a country “afford” to pay itself, and how much of a premium is the developed world willing to pay to provide access to the developing world?
2. Means tested – some other kind of benchmark for who pays and who doesn’t, based on perceived ability to pay. This is probably a non-starter, since who would be able to define and monitor such a scheme to ensure it is fair? In practice publishers do provide discounts to certain organizations on a case by case basis, but this is probably as much a result of negotiation as of merit.
3. Publicly funded – In his post Lee was advocating the threat of nationalization to ensure open access (I’m presuming not seriously). There are notable publicly funded open access research databases such as PubMed, and consortia of organizations such as PLoS (Public Library of Science). But PubMed is limited to publicly funded research, and sometimes there is a delay before it becomes available. There is a limit to how much leverage governments have with journal publishers, and a limit to how much research and publishing they are willing to fund themselves. Also, with nationally run publishing operations there is always a perception (if not a reality) of the review process being politicized rather than merit based, as well as inefficient.
4. Philanthropically funded private provision. Maybe foundations and major philanthropists could agree to pay Elsevier and others for their research and then provide it for free. Providing free access to journals is probably not at the top of the list of things philanthropists want to be remembered for, though. Another challenge would be setting the price – how much would global access to the Lancet be worth, for example? Without payment, how can you work out which journals or articles are the most valuable? (The Lancet is expensive because people think it is good and are willing to pay a high price for it.)
5. Advertising supported market provision – this is a model that I’m surprised has not been adopted more (maybe there’s some obvious reason – please let me know in the comments). Just as Google search or Facebook are free but supported by advertising, shouldn’t it be possible to provide free public access to academic journals or research articles supported by some form of advertising? The journals or articles in greatest demand should attract the most advertising revenue, and so the greatest funding, enabling their publishers to maintain their economic incentives while still providing access to knowledge free to “consumers”. It might be possible to charge for premium features such as print versions, company branded portals etc.
Something in me would like option 5 to work. It would be great if there were a market oriented version that also provided public access. I suspect though that one of the biggest challenges is that of journal prestige. Everyone wants to get their work published by a prestigious (i.e. exclusive) journal as a sign of the quality of their work. This means it’s hard for newcomers to enter the market, and so it would be hard for a new journal with a different business model to get established and gain credibility to the extent that it could sustain itself economically.
Maybe what is needed is for a few major investors, philanthropists, governments and researchers to get together to say openly that the current publishing system isn’t serving the public interest and to support a few high-profile, high-quality pilot journals that are both public access and commercially viable that can break the current business mould.
Anyone up for the challenge?
(For another take on this issue – and the problems with the academic publishing “business” as a whole – read this great post by Josef Scarantino: Africa needs an open publishing manifesto for academia…the time is NOW)
Update: Lee (Roving Bandit) has written a new blog post elaborating on his views (More Clarifications: On Academic Publishing) which is well worth checking out. I think we agree that research should be a public good – where we differ is on what might be the best model to deliver it. Whatever that is, I think it can only be good if more people start making noise about this.
This past Monday we were lucky enough to have Owen Barder come by our office to give a webinar on knowledge for development. His presentation is given below. I don’t have a recording of the meeting, but here’s a much shorter video of a similar talk from last year’s AgKnowledge Share Fair in Addis Ababa which is also well worth a look.
The presentation was packed full of thought- and conversation-provoking insights, especially for those of us working in a large “traditional” aid organization. And quite a few additional ideas came up during the Q&A at the end.
A few takeaways:
- Knowledge is a major driver of development (and inequalities in knowledge are key determinants of inequalities in development)
- Development problems are “wicked problems” i.e. they are complex (see my previous post on complexity)
- Complex problems are best solved by evolution, not by top down “intelligent design”.
- Solutions evolve through experimentation – trying lots of ideas – and through good feedback loops: collecting and sharing data on results, and listening to beneficiaries in order to identify which ideas work and continually improve them.
- Compare any solution against the benchmark of “just giving cash to the poor”
In the Q&A Owen made a number of other interesting observations, one of which was that in development organizations there is a tendency to focus on knowledge sharing as a dissemination exercise, (if only we could get our knowledge out there into the hands of practitioners and policy makers). In practice one of the biggest constraints is actually the demand for knowledge. Aid workers have too many things to do, and are not rewarded for or required to keep on top of the latest knowledge and experiences in their field. So to help knowledge spread we need to free up and incentivise aid workers to seek out knowledge that will help them do their jobs better (blog post forthcoming on this!).
Another interesting discussion was about the future role of large aid organizations, and how to move an organization such as ours into this new way of working. Owen used the example of how technology has dramatically changed the nature of the travel business. In the past you would go into a travel agency and they would be in charge of picking the flights, finding the best price, looking for itineraries etc. whereas now the travel service is much quicker, more efficient and offers more choice, but is largely self service. He explained that eventually the “aid business” will go the same way as donors look to interact more directly with beneficiaries and to have more direct choice in what they fund and how they receive information about their “investments” and how they are doing. The role of aid organizations here would be to provide and facilitate the platform for exchange.
In terms of moving organizations he mentioned two elements: i) creating competition so that the best organizations and ideas are the ones that thrive, with greater transparency being one means to encourage this; ii) since large organizations are hard to move from within, setting up small-scale projects outside the mainstream (he referred to them as “skunkworks”) which can innovate by working outside existing rules (and which can take risks, and where failure is an option) to develop new approaches, which, if successful, can then be adopted by the broader organization. In fact there are a few examples of this already within the UN (one example would be the UN Global Pulse project) – but we certainly need more of them, and I’d like to volunteer Knowledge Sharing as one of them!
All in all it was a very interesting and inspiring discussion. It’s still very challenging for large aid organizations like ours to take on these ideas, but I hope this was a small step towards building some internal momentum for change.
Complexity is fast becoming a hot topic among development economists and aid bloggers. There have been a number of great presentations and papers on this (Owen Barder and Ben Ramalingam have both written accessibly on this on their blogs and in papers). There has also been some back and forth about what complexity really is, and whether people are really understanding and using it properly, some of which is quite academic, and a little intimidating.
This increased attention is a positive thing since in development many of the real life situations we deal with can indeed be characterized as “complex adaptive systems” and so treating them as if they are engineering problems with a clear linear cause and effect, as has been the tendency in aid planning in the past, will continue to lead to disappointing results.
But I’m also a little concerned that the tone of some of the current discussion can also leave the non-initiated with some unfortunate and mistaken impressions:
1. Complexity is, well, complex (or is that complicated?), so to understand it you need to be really, really smart. The rest of us should stay away for fear of making a faux pas and looking stupid. (Maybe the tone and language of some of the academic debate doesn’t help.)
2. Given how hard this is to understand, if we are to take it seriously in how we plan and deliver aid, we will need to hire high powered academics and management consultants or create a specialized cadre of “complexity officers”.
3. Since development is so complex, we can never be really sure about the results of what we are doing, so why try? Won’t the system just “evolve” itself into the most suitable outcome over time anyway? And won’t this be much more effective and efficient if we don’t interfere?
Complexity is indeed a complicated field of study and, as with many other topics in development, one around which there is incomplete knowledge and different schools of thought. So if you want to publish academic papers on it, or debate it with other intellectuals, you need to get yourself up to speed.
BUT even without a deep academic knowledge it’s quite possible to understand what a complex system is, recognize one when you see one, and to use some simple approaches to deal with it.
Complex systems are not something new; they are as old as humanity, and we have all been navigating them mostly successfully since before we were even aware that such a topic existed.
One of my favourite analogies for this is that of raising a child. Yes, we can read books and get advice on how to do it. But there is no fail-safe recipe for how to raise a healthy well-adjusted child.
Why is this the case? There are many actors involved – not just us, but also other relatives, teachers and peers, and most importantly the child herself – and they all have different views and interests in the raising of the child. There are many environmental factors which you influence but don’t fully control, such as the town you live in, the school you send the child to, what the child eats, exposure to illness, exposure to violence in society, consumption of media etc. The relationship between the actors and the factors is “complex”: it’s hard to predict what approach will work best at a given time, and it’s not certain that what worked for one child will work for another. It’s hard (as any parent will tell you), but it’s certainly possible. People do it all the time.
Turning back to aid and development, there are multiple actors and factors all interconnected in ways which make the outcome of any specific action very hard to predict. But that doesn’t mean that there is nothing we can do, nor that we need deep academic knowledge or expensive consultants. An aid agency is only one of the multiple actors in the system but it can still take action and make an impact, even if the exact nature and scale of the impact can’t be easily predicted in advance.
Here are a few suggestions:
1. Start something that seems a reasonable approach based on what we know at the outset (drawing on information such as what has worked elsewhere, whatever scientific literature exists, what partners say, and what our own experience and instinct tell us).
2. Adapt your approach over time in light of your actual experience and how well you are doing. Be prepared to modify and improve your approach continually, based on what actually happens rather than on what theory or past experience predicts.
3. If things are not working at all, admit it, stop doing it, and try something else.
4. Continually collect and widely share data and information on what you are doing and what the results are.
5. Build in feedback mechanisms to see how you are doing including feedback from beneficiaries.
6. Try multiple experiments – don’t put all your resources into a single approach. This way you can compare different methods and then scale up the most promising one(s).
7. Look out for and be open to unintended outcomes. These could be both negative and positive. It might be that the project has a positive benefit, just not the one you were initially looking for when you started out. Also small changes can sometimes have large impacts.
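Suggestion 6 – running several experiments in parallel and scaling up the most promising – has a close cousin in the "multi-armed bandit" idea from decision theory. Purely as an illustration (the numbers, names and the epsilon-greedy strategy below are my own assumptions, not anything from the programming literature cited here), a tiny sketch in Python:

```python
import random

def run_trials(success_rates, n_rounds=1000, epsilon=0.1, seed=42):
    """Allocate each round of effort to one 'approach'.

    Mostly back the approach with the best observed success rate so far
    (exploit), but with probability epsilon pick one at random (explore),
    so promising alternatives are never starved of evidence.
    """
    rng = random.Random(seed)
    counts = [0] * len(success_rates)     # rounds allocated to each approach
    successes = [0] * len(success_rates)  # observed successes per approach

    for _ in range(n_rounds):
        if rng.random() < epsilon or sum(counts) == 0:
            arm = rng.randrange(len(success_rates))  # explore
        else:
            # exploit: the approach with the best observed rate so far
            rates = [s / c if c else 0.0 for s, c in zip(successes, counts)]
            arm = rates.index(max(rates))
        counts[arm] += 1
        if rng.random() < success_rates[arm]:  # simulate the outcome
            successes[arm] += 1
    return counts

# Three hypothetical pilot approaches with (unknown to us) true success rates.
allocation = run_trials([0.2, 0.5, 0.35])
```

The point isn't the algorithm itself, but the shape of the strategy: resources gradually flow to what demonstrably works, while a small reserved share keeps testing the alternatives – which is close in spirit to the suggestions above.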
And for those who are ready for it there are more sophisticated tools and approaches you can use in practice for programming (e.g. action learning, fail fast) and continual learning and evaluation (e.g. most significant change, outcome mapping).
There are many others who have written much more eloquently than me on this, but I just wanted to put down online that development is a complex problem, and that there are still simple things we can do to work on it if we are just prepared to look at it differently from how we have in the past.
[postscript: by chance, or perhaps by spontaneous order, Bill Easterly also posted on this exact same topic on aidwatch today – I’d suggest you also take a look at what he has to say]
We all make mistakes.
Some of the most valuable life lessons come from the significant mistakes and hard knocks we take. In the business world it’s often said that the successful entrepreneur is someone who has persevered through a lot of failures.
The aid world not so much.
There are many reasons it’s hard to admit failure if you are in the aid world. We believe that our donors will not fund us if we admit fallibility. We believe we can’t afford to fail if we are using public money. Our reporting structures and tools encourage us to upsell our achievements and downplay our failures. Receiving funding is often seen as a big sign of success (and on this scale I must admit to failing big), and perhaps understandably your pilot project will only attract scaled-up funding if it is a success.
Yet, if we don’t admit our failures, how can we learn from them and stop repeating them – or, worse, avoid continuing them while telling the world they are successes when they are, in reality, flawed? And how can we try to innovate and tackle emerging problems if we are afraid to fail?
There are a few hopeful signs of change:
Engineers Without Borders (EWB) just launched this excellent website admittingfailure.com at their recent annual conference. This is a site where organizations can share their aid-related stories of failure. EWB, GlobalGiving and the Peace Dividend Trust have all committed to entering examples, and I hope more organizations will choose to do so too.
EWB have already set a good example through their annual failure report, where they list some of their notable failures over the past year. This practice was also recently adopted by the Peace Dividend Trust, who issued their first failure report this year.
Early last year MobileActive created a concept known as Fail Faire, an event where ICT for Development practitioners shared their failed projects and what they had learned from them. Here’s a blog post I wrote about the event for MobileActive (before I had my own blog). The concept has since been replicated several times, with another ICT for Development edition hosted by the World Bank, and other faires run by Ashoka and SOCAP (an annual conference on social capital markets). MobileActive have also created a handy set of tips on how to organize your own FailFaire.
Let’s hope that more aid agencies will pick up this trend (including my own), and that more donors will support them to do so. And best of luck to all you brave failers – you are the ones that are really creating new knowledge for development.
Apparently one way to get more hits on your blog is to mention celebrities. Tricky on a blog about knowledge management, but here goes.
Many of you will have heard about the George Clooney project to use satellite imagery to help deter possible genocidal attacks following the South Sudan independence referendum. Many people much more knowledgeable and witty than I am have called into question the wisdom or likely utility of this project.
But what got the twittersphere abuzz yesterday were Clooney’s own remarks in response to his critics:
“I’m sick of it,” he said. “If your cynicism means you stand on the sidelines and throw stones, I’m fine, I can take it. I could give a damn what you think. We’re trying to save some lives. If you’re cynical enough not to understand that, then get off your ass and do something. If you’re angry at me, go do it yourself. Find another cause – I don’t care. We’re working, and we’re going forward.”
As Joshua Keating rightly notes in this FP piece, “This kind of ‘at least I’m doing something’ rhetoric drives development scholars absolutely bonkers and for good reason”. While we might feel morally compelled to do “something” or “anything” about a pressing problem, that doesn’t mean we should. In fact, by doing the wrong thing we might actually make things worse. But I’d like to unpack this a little (from a knowledge management point of view, of course).
It’s rare that we have either:
i) no knowledge whatsoever about what to do, but decide to do something anyway or
ii) enough information to be absolutely certain that we are doing exactly the right thing (see related post “the truth is out there”).
So in practice we’re frequently faced with a compelling problem but incomplete knowledge about how to handle it. Different people will feel they need a greater or lesser amount of information, and confidence in it, before they act. In aid work, where there are a lot of unknowns, there is often a need for a judgement call as to when to act and when not to act based on what we do and do not know – with advocates erring more towards action and researchers more towards “needs more research”. So perhaps the difference between Clooney and the aid commentariat is not a difference of approach, but simply of degree along this continuum – he’s more of an advocate than a scholar.
A couple of additional points:
1. If we have a potentially useful idea, but it’s never been researched or tested in the current set of circumstances, then the only way to really find out is just to do it. We’d be foolish to tell people that it is certain to work, and we will need to carefully and honestly monitor it to see if it does work, or if it has any negative unforeseen consequences, and be prepared to modify or drop it if needed.
2. Of course while we can’t know everything before acting, it would be extremely remiss not to consider information about the approach that is already easily accessible before deciding whether to go ahead.
So on this basis, if I had to say what I thought about Clooney’s project and his reactions to critics I’d have to say, it’s up in the air…