Archive for May 2012
Ben Ramalingam posted an interesting blog some time ago on whether organizations need “official positions” on policy issues.
He referred to David Ellerman’s paper from 2000, which criticized organizational positions drawing on his experience at the World Bank. Both gave a number of good arguments as to why fixed organizational positions can be harmful. In particular, the formulation of positions is time-consuming and political, so positions can be hard to change once they have been adopted, even in the face of new evidence and changing circumstances. They can also stifle research and new thinking, as people are reluctant to do work that might give results at odds with the official position, both because such work is less likely to be accepted and out of fear of the effects on their careers. Official positions often get ahead of the available research, or are presented much more simplistically and categorically than the evidence supports, and what new evidence there is tends to be interpreted to support existing thinking rather than to challenge it.
Almost all the commenters on Ben’s blog – including David Ellerman himself – were supportive of not having official positions.
Sounds pretty damning right?
But “official positions” might also be a necessary evil. If you are a research institution your position might be simply to ethically pursue the truth, but if your organization wants to meaningfully engage in advocacy or programmatic work then it would be hard to do so without organizational positions or views to back them up.
Here are a few concrete reasons why official positions are useful:
1. Organizations that are engaged in advocacy work need (evidence-informed) policy positions from which to advocate.
2. Positions also help donors and supporters understand what the organization does and what it stands for (and whether they want to fund it or not). What’s more, they often want to know what your position is on an issue, as a source of authority.
3. Positions help staff to be able to speak and act on behalf of the organization, even in areas where they are not expert researchers – pretty important for the press officer, or even more importantly the head of office.
4. The process of arriving at positions can be important in creating dialogue and building consensus within the organization, and perhaps one of the few times when diverse evidence and different interests are brought together around an issue to agree on common ground.
5. Positions, especially ones that include values as well as evidence (such as a commitment to human rights or to eliminating child labour), are motivating to staff and partners. People are more motivated and work harder for organizations whose positions they believe in.
Regardless of the pros and cons of official positions, I think they are unlikely to go away any time soon. The problems with official positions that Ellerman highlights are real, but for me the problem is not so much the idea of official positions itself, but more how they are developed and how they are used in practice. A few thoughts on good and bad ways of developing official positions:
1. An organization doesn’t need an official position on everything. Maybe it is better to build an organization’s work around a few core positions where the organization “knows” and where it has competence to act, and to be clear and honest about those areas which might fit within your organizational mandate, but for which the evidence about how to address them is far from clear.
2. A position need not be extremely detailed and prescriptive – it might rather be broad and principle-based. Policy positions can be clear about intent without being overly prescriptive about the means to achieve it; that way they don’t constrain action too tightly and can be interpreted usefully in different contexts.
3. An official position need not mean that all programmes and technical advice correspond 100% with the official position – they might allow some latitude for the position to be interpreted or even occasionally contradicted based on circumstances on the ground. Doing this requires empowered managers and an organization that is willing to stand behind them and their decisions.
4. Research and evaluation should be allowed to be independent – in other words they should be carried out and their results published regardless of whether they support or contradict the official position. This might mean having some kind of research board or creating an independent research office that has freedom to set its own research agenda and a clear policy to publish whatever the results reveal about current policies and programmes.
5. Official positions need to be continually reviewed and revised based on both new evidence and changing circumstances, including political ones. This review process needs to be institutionalized in some way, for example by giving positions a limited shelf-life after which they must be reviewed.
6. The process of arriving at a position needs to draw in evidence from a wide variety of sources, both internal and external. It also needs to include diverse perspectives, including beneficiary feedback, political analysis and the like – evidence informed, but not decided by “scientific” research alone.
What is harder to achieve, but would be really helpful, would be to foster a culture of critical thinking where staff are expected to challenge assumptions, and where they can (internally at a minimum) safely express disagreement with current positions and offer evidence in support of that. The idea is that current thinking should be constantly evolving – and agreed positions should follow it, but not too far behind. This is challenging because it requires clear expectation setting and example setting from organizational leadership – otherwise the default behaviour is to tell your bosses what you think they want to hear, since bosses all too often unthinkingly “shoot the messenger”.
There has been quite a bit of discussion online about the Financial Times article “How Aid got Smarter” featuring an interview with UNICEF’s Executive Director Tony Lake.
The article makes some important points about the need to improve the use of evidence in making decisions in aid, in particular in discovering what works and what doesn’t, admitting it and acting on it.
What is perhaps a pity about the article is that it can be read to imply that until recently aid decisions were largely faith-based, but now suddenly, at last, the role of science and evidence is being taken seriously in development. As usual the reality is a bit more complex than that.
Discussions around the need to make development work more evidence based have been around as long as I’ve been working in development (and probably a lot longer than that). And the progression towards improved use of knowledge and evidence in development often seems like a case of two steps forward, one step back.
Over my past 20 or so years working in aid some notable improvements in the attention to evidence include an increased investment in and focus on evaluation resulting in more professionalized evaluation departments with greater resources and thus more and better evaluations; greater investment in supporting statistical data collection including in agreeing on harmonized standards and common sets of indicators to track over time; greater attention in various forms to supporting better internal communication and knowledge management to help staff have better access to, and make better use of available development knowledge. There are probably many others.
But many challenges remain. A few of the most thorny (and recurring) challenges in using knowledge in development work seem to be:
- How far we are able to “know” what works and what doesn’t. We don’t have the resources and skills to measure everything scientifically – and some of the knowledge we need is highly contextual and based on experience as well as on science (See my previous blog “The truth is out there” about the limits of what we can know).
- But even when we have a large body of relevant, available knowledge, it is not always used in decision-making. It’s important to understand the reasons for this and try to tackle them, along with work to increase the supply of knowledge (see my previous blog “Creating a demand for knowledge”).
- In our desire to understand something, and to “break it down” so we can tackle it in manageable pieces or sell it to donors, or the public, we often forget that many of the things we are dealing with are “complex adaptive systems” where the whole works differently from the sum of the parts and where a good practice in one context might not work in another. Of course, this doesn’t mean we shouldn’t use evidence – but we need to understand it in context, and apply it flexibly rather than expecting to find universal answers. (See my previous blog “Who’s afraid of complexity in aid”)
But while evidence based aid isn’t a new idea, and even though we are not there yet, there is still good reason to be optimistic that aid is becoming, and will continue to become, more evidence informed. Here are a few reasons why:
1. The results agenda – donors and beneficiaries alike are putting increasing pressure on aid agencies to manage for and report on results – in particular to be sure that ever scarcer aid money is being well invested (see this blog by Owen Barder on some of the benefits and challenges of the results agenda for improving aid).
2. Aid transparency – as more and more aid agencies sign up for IATI, it becomes easier to see who is doing what and where, which is an aid to accountability and improved coordination – but also to research, as there is a whole lot of new data to crunch to understand more about how aid works (or doesn’t), especially when linked to the results agenda.
3. Open data and research – more and more development data is being made freely available for public use which provides a whole range of raw material for researchers. Increasingly (although still slowly) publicly funded research (and even data sets and analyses) is also being opened up for public access – which means there is a lot more chance that it will be used.
4. Real time data analysis – Often one of the big challenges in using evidence is that by the time you know enough about a problem it’s already too late (think global food/economic crisis). New “big data” techniques make it possible to understand more quickly what is happening – at least enough to act, if not enough to scientifically “know”. (See this previous blog on the possibilities of “real time”.)
5. Beneficiary feedback – this is one area where there is great (as yet mostly untapped) promise, and a number of interesting initiatives. Too often external solutions are imposed on beneficiaries, using science as a basis, but without enough attention to getting real time feedback from the people who the programme is designed to help on whether they want the programme, if it is likely to work, and whether they are satisfied with it, or whether they have their own ideas about how to improve it. More listening can make projects more likely to work, and more participation can also help them be more sustainable in the long term giving beneficiaries a say and a stake in the project’s success (see my previous blog “listening to the people we work for” for more).
6. Lastly, there are a lot of smart, committed individuals talking about and working on how to improve aid. Sure, there always have been, but it seems (to me at least) that the volume and depth of this discussion has increased over the past few years, including from the detractors who, in their own way, through their own critiques, are advancing the discussion and thinking about how to do aid work well. And with more and more aid agency heads such as Tony Lake speaking up in favour of smart aid, we can hope for more discussion about what smart aid really means – and for aid workers to feel more empowered to advocate for it inside their own organizations.
The quest for smarter aid is not new, and it will not be achieved overnight. Evidence based development work is more an ongoing journey than a destination. But the lights of Oz are looking a little bit brighter.
This is a guest posting from Weh Yeoh, Sub-editor and Business Development Manager of whydev.org who writes about their excellent aid worker peer coaching initiative which I’d recommend you all check out. Here’s Weh:
International development work is often difficult, exhausting, and isolating. Many people who seek to serve and live abroad become burned out by the overwhelming nature of their work. In isolated places, often the only people you can turn to for support are your boss or your partner. For various reasons, neither of these is a good choice.
However, we know that the support of a peer is an easy and effective way to reduce stress and burnout and, just as importantly, to have someone to bounce ideas off.
This is why we, at whydev.org, have decided to build an online platform where international aid volunteers and workers can connect and discuss their challenges and experiences, allowing them the opportunity to support others across the globe who are also making a difference. Knowing that the world of aid and development is under-resourced as is, we think our idea fits well. This service does not require more resources to be added to the sector (in the form of professional mentors, coaches or counselors), but rather, builds on existing resources that are not connected.
We would like to think that it’s the first of its kind – an international support network for isolated aid workers.
Luckily, we’re not the only ones who think this is a good idea. Since asking for expressions of interest earlier this year, we’ve had over 320 people sign up to our pilot program. This is great news for everyone involved, because the larger the pool, the more likely we’ll be able to achieve a good match.
One international aid worker said, “I feel isolated, uncertain and a little forlorn about finding my way into development-related work, and would like to have someone to share my experience with, who is perhaps also experiencing the same thing.”
It is perspectives like this that make us want to keep working towards creating this platform. But, this is where we need your help. We’ve launched a crowdfunding campaign over on StartSomeGood where people can chip in amounts of money, small or large, to help us get this project going. If you are reading this post, chances are you are either working, studying or are at least interested in aid and development. Therefore, you’re probably the right demographic to understand the difficulties that aid workers can face across the globe.
Jennifer Lentfer, of How Matters, writes that having self-awareness of your own qualities and needs is crucial in becoming an effective aid worker. If you want to help us to build a future that supports the needs of aid workers across the globe, then this may be a worthwhile campaign for you.
Like anyone interested in smart aid and development, you’re probably interested in sustainability. So, just how sustainable is your funding? Good question! Once the platform is built, we think that we can keep the service running by adding in a tiered system of participation, so that it is self-sustainable.
Our vision is that peer coaching should always be accessible at no cost, as we promised right from the start. That option will remain, and people will still be able to be linked up to suitable peer coaches around the world at no charge. However, we think that people may also be willing to pay a small amount of money to get a value-added service. As such, we’ll be adding in different levels of participation so that those who are willing to pay a little extra will get a little more out of it. Whatever we make from this can then be fed back into the project to account for running costs. That’s why seed funding is so vital for us – the major outlay is not running the program, but getting it off the ground.
We’d appreciate it if you would consider donating whatever you can to our StartSomeGood campaign here, and spreading the word far and wide about what we’re trying to achieve.
For the final word on the topic, here is Brendan, speaking from Ghana:
You can donate to our campaign on StartSomeGood here.
Weh Yeoh is a current job-seeker based in Cambodia. He is a professionally trained physiotherapist who has completed an MA in Development Studies at the University of New South Wales. With experience in the NGO sector both in Australia and in China, with Handicap International, he hopes to combine his interest in development and passion for visiting far-flung destinations in the future. You can view his LinkedIn here and follow him on Twitter here.
The UN Special Session on Children took place ten years ago. At the time I was very busy doing invisible background support to a number of events and activities linked to the UN meeting. Among those were a children’s forum, a concert and a global vote/advocacy campaign called “Say Yes for Children”.
The Say Yes for Children campaign was a bold, (over)ambitious and at the time novel campaign to mobilize people from around the world to have their say about some of the key issues affecting children. The aim was both to mobilize large numbers of people worldwide to show the level of interest in the topics being discussed, and, through the pledges, to get a sense of which specific issues people were most concerned about from a total of 10 issues mentioned on the pledge. It managed to mobilize 94 million pledges, which were featured in the Special Session itself. It was a big achievement, but one with many frustrations in terms of mobilizing people, tallying the pledges and then figuring out what to do with them afterwards.
I’m thinking about it today because it illustrates some of the challenges of “open” versus “participatory” development I mentioned in my last blog post, and I see parallels in a lot of the current work being done by various groups to use social media for public engagement in big development issues, and I’m really hoping we can learn some lessons from the good and the less good of what went before.
Ten years on it’s now de rigueur that at every big international conference there is some sort of public “have your say” website set up and promoted via social media in the run up to the event in order to “raise awareness” or “mobilize”. However there are a couple of big issues with this approach as it is often practiced:
1. You can have your say, but is anyone listening? In many cases there is little, if any, link between the discussions mobilized by the “campaign” and the actual discussions and decisions taken at the conference itself. In most intergovernmental conferences only governments have a formal decision-making role, and the degree to which they are willing to listen to outside inputs varies widely but is often limited. It’s important to be honest about how much real interest there is in listening to non-governmental voices, whether organized ones from civil society or individual ones from interested citizens. But many campaign creators give the impression that people’s contributions will receive more attention than they really will – sometimes even asking for inputs not to make use of them, but as a means of generating greater public engagement with the issue or positive attention for the organizer: mainly a PR exercise rather than the genuine conversation it is sold as. We can do better.
2. Dropping the ball – Even if the online dialogue around a big event can only marginally feed into the official conference itself, it’s still possible to use the dialogue to get the pulse of the public’s mood about the issue, and to identify a group of potentially engaged citizens who can take action and push the issue in their own right. But too often the campaigns dissipate after the conference is over as the organizers move on to the next big thing, failing to capitalize on the generated interest to actually do something about the issue in question. Worse, sometimes campaign organizers simply use the generated list of names as the starter mailing list for their next campaign, or even their next fundraising drive.
There are a lot of big international events and conferences coming up with potentially wide-ranging effects on the future of development. Technology now allows us to reach out more broadly than before to find out what citizens feel about them, and to involve them in taking them forward. Let’s push for greater public dialogue on these issues, but also when doing this let’s not oversell the real level of influence people currently have, and once we’ve woken people up and whetted their appetite for more engagement, let’s not let them down, and waste the opportunity of their interest while we move onto the next big thing.
Ever since Robert Zoellick’s speech on “democratizing development” there has been a lot of buzz around the idea of open development and lots of discussion about what it really means. Some of the recent discussion is nicely summed up in this post on the Bank’s public sphere blog, “Openness for Whom? And Openness for What?“. I’ve also tackled the related issue of “Development 2.0”, or the new way we can/should be doing development work taking advantage of the changing technologies and business models now open to us.
Trying to unpack this a little bit I realize that there are two related but different aspects to the open development discussion that often get a little mixed up:
1. Transparency and reducing friction – i.e. making use of technology to make it easier to share information and knowledge in a standard way so it can be easily assessed, compared, mashed up, acted upon.
2. Participation – using technology to give people a voice and to change existing power structures, and decision-making processes.
In the mid-1990s I was doing some work on knowledge sharing on public finances, and I can draw a parallel between the discussions we were having then on sharing public finance information and the discussions on open development today. In more traditional public finance work, the aim of the major players (led by the IMF) was to provide high quality technical advice and capacity building to ministries of finance, usually behind closed doors. Along came two new, related but different approaches, coming mostly from the human rights and democracy movements. These were:
1. Budget transparency – making government budgets public, widely disseminating them, and presenting them in a form that can also be understood by non-specialists (which includes parliamentarians, the media and civil society as well as the public at large). The aim here is that making the information available in a public, comprehensible and unbiased format puts pressure on governments to justify and implement budgets more effectively, and also makes it easier for them to be held to account through existing means (such as parliamentary oversight or public opinion).
2. Participatory budgeting – Some municipalities/regions chose to take this a step further – they also created standing, open, consultative mechanisms that allowed citizens to directly influence at least part of the public budget, and be involved in its oversight.
The obvious corollary is that while transparency is necessary for greater participation, and can also help increase interest and engagement, it is not sufficient. Taking the step to participatory budgeting requires an additional political commitment to devolve power from the authorities (whether elected or civil service) back to citizens. And so budget transparency is much more widespread than participatory budgeting. It’s also worth noting that in participatory budgeting the devolution to the citizenry is partial, not absolute: it covers part of the budget, or gives citizens a role in decision-making but not as final arbiter.
So back to open development….
Much of what is currently being talked about is in the first category – making more and more consistent information widely available so anyone can use it however they see fit within their means and influence. Making aid data open puts pressure on aid agencies to be accountable to all, but mostly to their current and potential donors. Similarly access to open data and open research allows aid workers and government policy makers to make better informed decisions (if they choose to do so), open public procurement means that companies have a level playing field for competing for aid contracts which also hopefully helps reduce costs for donors and beneficiaries alike.
But this openness only takes you so far in making development more democratic and empowering. It’s now becoming possible to collect and incorporate beneficiary feedback and local voices and knowledge into development work – but only IF you want to. While many of the big players are opening up their data, most are not opening up their decision-making and resource allocation to more input from beneficiaries and partners – here formal Boards and big donors still call the shots. Those that are experimenting with limited feedback mechanisms often do so from the perspective of using the feedback to improve the likelihood that their programmes will work by avoiding unseen pitfalls, or to get better public “buy-in” for their projects, rather than to explicitly empower beneficiaries and change the power relations between “beneficiaries” and “benefactors”.
While technology can make beneficiaries more vocal, in the end those who currently have the power will need to agree to give some of it up if technology is to really make aid more empowering. As with participatory budgeting, I don’t imagine this will be a complete reversal of power relations – donors will still want a say over where their money goes – but rather a rebalancing, devolving part of the donor’s or aid worker’s current role back to the people they wish to empower and assist to become sustainably self-sufficient.
A KM colleague from another organization recently asked my advice on setting up a “strategic information system” to help better monitor the work of the organization, as this was a priority of the head of the organization.
In the aid world, where there is an increasing emphasis on measuring and demonstrating results, this kind of system is increasingly popular. And indeed, who wouldn’t want some kind of dashboard that enables you to look across the organization and see how you are doing and whether or not you are on track?
The typical kind of dashboard will allow you to track key external statistics (such as poverty levels, child mortality etc.) on maps and then overlay those with information on projects, project spending, and project results (often outputs such as number of wells dug, trainings done, supplies delivered, but sometimes if you are lucky outcomes).
BUT often these dashboards miss an important thing. It isn’t just what you do that gets results.
A couple of years ago a very interesting study on UNICEF programme performance looked at information and knowledge management as one of the strategic capabilities the organization needs in order to manage its programmes well. In it, three basic types of information/knowledge were identified as critical:
1. Knowledge about the situation (in this case the situation of children and women). This can be data about the current situation, ideally disaggregated by sex, region, age and other key characteristics. But it could also be information about the underlying causes of something for example how do attitudes and values impact women’s empowerment, or the latest knowledge on the epidemiology of a disease.
2. “Know-how” on how to address the situation. This can be technical knowledge, such as how to manage the logistics of a cold chain, which targeting schemes for cash transfers are most efficient at reaching the poorest, or which kinds of incentives work best for keeping children in school. But it can also be more tacit know-how, such as how to persuade skeptical governments or politicians to try out a new approach, how to win over local community leaders, how to deal with unexpected security problems, or how to seize unexpected opportunities to advance your programmes. It also includes those little things that experienced people know how to do to get things done, but which don’t usually appear in the scientific literature or in the programme guidance.
3. Knowledge about organizational performance, i.e. how efficiently (and, if measurable, how effectively) the programmes are being implemented. This could include things like whether the planned outputs have been delivered, whether the budget has been spent as planned, or indicators of how well the office is managed (e.g. when was the work plan signed, how long does it take to fill vacancies, how many audit observations are outstanding).
And as you can now see, the big missing step in many strategic information systems is that they don’t take into account the middle step – the “know how”. They often assume in one way or another that if you monitor the situation and monitor the plan you will know how well you are doing and be able to correct things when they are off track.
One might argue that if a programme plan is developed based on a sound evidence based problem analysis and clearly articulated in some way such as in a log frame or a theory of change then that largely takes care of the “know how”. Some programme approaches are even built on very elaborate standard systems models that look in detail at barriers, bottlenecks or causal chains which can be closely monitored and contain an implied understanding of the whole system in which the programme operates.
But the problems with this assumption are that i) the articulated theory of change might be incorrect, so you might be efficiently implementing an ineffective programme; ii) the situation is changing, so an analysis that is valid now might not still hold part way through implementing the programme; and, related to these, iii) the project itself is usually part of a larger complex system in which the relationships between the interconnecting parts are not fully predictable, so while the likely immediate impact of an intervention might be known, its knock-on effects, positive or negative, are not. An example of this would be that increased media communication about the benefits of vaccination might lead to more parents bringing their children to health clinics, but might also lead to a broader backlash against foreigners telling people what to do that could affect the programme long-term.
A larger objection is that all projects, no matter how well modelled, are run by people with different technical skills, but also different personalities, and the interactions between the various actors can be as important as the technical roles they play. Few models take account of this; most assume that success is largely based on technical competence in implementing a particular approach.
The challenge with adding the critical know-how component to strategic information systems is that “know-how” is much harder to map and monitor. But a few things that can be done are:
1. Introduce some element of tracking and reviewing the effectiveness of interventions into programme monitoring, i.e. try to spot when a programme is running efficiently and following the plan, but not achieving the desired impact. This can be done episodically, such as through evaluations, but finding ways to track it through regularly monitored indicators is also important.
2. Ensure that “know-how” systems such as communities of practice, capturing and sharing lessons learned, or tools such as peer-assists are present to help support programmes to have access to the kind of know how they need.
3. Programme implementation and monitoring need to be sufficiently flexible that when programmes go “off-track” there is the possibility not only to fix efficiency problems but also to consider modifying or changing the approach, or even the overall programme objectives themselves – and on an ongoing basis, not only after three years at a mid-term review.