KM on a dollar a day

Musing on knowledge management, aid and development with limited resources

Archive for December 2013

The next innovation: scaling up


Right now innovation is one of the hottest topics in development. Even the UN is all over it :-). Following pioneering work from UNICEF, UNDP, UNHCR, OCHA and many others are now creating innovation teams, strategies and labs to help their programmes (and organizations?) become more innovative.

As this great paper from Bruce Jenks and Bruce Jones nicely elaborates, the UN, and the “development system” more broadly, is at a crossroads. The current business models were designed in a different age and need to change to adapt to the emerging post-2015 development agenda, but also to the reality that change is permanent and accelerating, so we need to be more agile to keep up and stay relevant and effective.

In the past the most common way to test out new ideas and programme approaches was the “pilot programme” followed by “rigorous evaluation”. The problem with this was that a lot of pilots were created, but few were scaled up to national or global programmes. The pilots would often run for years, but with a kind of loving attention and specific starting conditions that couldn’t easily be replicated (like Millennium Development Villages on a smaller scale). The other challenge was that these projects would need to be run as designed for a couple of years before they could be evaluated (preferably against a control group or even in an RCT). This meant there was little scope to adapt them based on feedback, on-the-ground reality and changing circumstances. A lot of these pilots were also based on theories brought from the outside rather than on local participatory design.

The new wave of innovation projects has done a lot to address these shortcomings. Projects are developed using participatory approaches such as “human centred design” so they better respond to on-the-ground reality. They are quickly iterated based on feedback and there is an emerging (if still challenging) culture of being willing to fail quickly and start again. There is now a proliferation of trainings, manuals, guides and checklists for those who want to give this approach a try.

But looking ahead, I see a number of challenges in making the most of this welcome development. How do we make sure that this new approach has a systemic and scalable effect on how we do our work? Here I see three related challenges:

1. How do we make the culture and approaches of innovation more mainstream in our organizations, so that they become a regular part of how we do our work rather than another pilot project or a high-profile, high-visibility approach that organizations use to say “look, we’re changing” while keeping most of their operations (and resource allocations) chugging along using the existing (and tired) model?

2. How do we scale the learning from the individual innovations themselves? A number of very successful innovations have been developed – but how do these become adopted or adapted widely enough to generate large-scale impacts on development (rather than a significant but localized effect, as is currently the case)?

3. How do we innovate in the area of policy, or use innovation to inform policy making? In many countries the main game in town for external development actors is increasingly support to upstream policy work. This is inherently national and thus at scale, but innovating at a policy level and creating a space for experimenting, failing and learning is politically challenging for governments.

I don’t have all the answers to these but here are a few thoughts on each of them:

On the first, a key step would be to “mainstream” innovation within the work of the organization as a part of (but not the whole of) the programming toolkit, rather than as a parallel process or set of discrete projects. This would imply developing programme guidance that allows innovation but also manages the risks involved and explains when and how the approach should be used. A number of our current programming instruments, such as work planning, monitoring, budgeting, management and procurement/partnership, need to be revised to remove bottlenecks to innovation. But this is equally a “hearts and minds” issue where there is a need for clear messages and behaviour modeling from leadership to demonstrate that innovation is encouraged (and to avoid situations where senior leaders encourage innovation in their words while their actions send a quite different message). Skill building and knowledge sharing are also key for staff to learn how to do it. In this model a team of innovation experts is essential, but their role will become more one of champion, policy setter, capacity builder and advisor than one of project manager.

On scaling up the knowledge from the innovations themselves, there are a couple of challenges. Innovations are often tailor-made to a specific context and may not have a strong body of evidence or a “theory of change” behind them that explains the underlying method through which they work. This makes it hard to extract the generalizable learning that can be applied elsewhere. But there are a few things that can be done to mitigate this problem, such as systematic data collection around innovations, and formative evaluation of the most successful ones to better understand how they work. Another is to share concrete examples and replicate them as a “first prototype” in new contexts to see how they can be rapidly adapted to fit the next context. This can both make it easier to generate ideas and also help determine which parts are common features of similar projects in different locations and which are elements that need context-specific customization. Another approach is to try similar experiments in multiple locations and see which work and which don’t, to see what patterns emerge. But this requires a more coordinated approach to innovation than the “let a thousand flowers bloom” approach. Short of this, enhanced networking among innovators can help spread ideas.

Another challenge with scaling up innovation, though, is that the skills to manage a programme at scale are different from those required to innovate in the first place. As a project becomes more successful and matures there is less ideation and evolution and more traditional “good management” and standardization. The challenge is to organize a smooth transition between these phases so the project management style evolves as the project matures. Another idea is to ensure that when new ideas are tested there is already some consideration of scale from the outset. This could be in the form of constraints on the initial design that only allow designs that can function at larger scale without a high degree of customization or technical support.

On the challenge of innovating in the area of policy – while it’s not wise to fail, even fast, in national policy initiatives, small-scale experiments can help identify and test possible policy actions before they are officially adopted. In particular, they can help determine not only the likely effects of a policy, but also different approaches to implementing it in practice. This is particularly promising in the complex areas of social policy and behaviour change, for which technical fixes don’t work. But it requires carving out a space for small-scale policy experiments that are designed to influence policy at the macro level (such as the UK behavioural insights team).

A related challenge for public sector innovation is the low tolerance for failure, even for experiments, on the part of the public who fund it. An interesting approach used in the early days of New York City’s innovation work was to tap into private and foundation funding that is more tolerant of risk and, in the right circumstances, might be willing to invest for social good rather than just economic gain. This also has potential to be applied in other countries, and particularly in the UN system, where we are looking in any case to diversify funding sources. New partnerships with the private sector could actually be a spur for increased innovation, both through funding and through expertise. Even with traditional funding sources it might be wise to take a portfolio approach to public spending, with an explicit “set-aside” for innovative activities that follows a different set of rules for planning, budgeting and evaluation, allowing more risk but only on a small part of the overall budget.

Another challenge for policy innovation is that what works in practice is not always what is popular politically and so the policy process often takes a different direction from what the evidence from experiments would indicate. This is equally a problem with traditional “evidence based advocacy” and scaling up in general. A potential advantage for innovators is that they may be able to mobilize a constituency of supporters around a successful experiment in a way that a traditional researcher cannot.

Right now innovation is “the new thing”, but if we are not able to come up with sound approaches to mainstreaming it and scaling up the results then it may be just another development fad rather than a new way of doing business. If we can, innovation won’t be as “sexy” as it is now, but it will be making a more lasting difference.

Written by Ian Thorpe

December 18, 2013 at 9:00 am

Posted in Uncategorized

A bottom-up data revolution for post-2015


This post was inspired by the blog series ‘What kind of ‘data revolution’ do we need for post-2015?’ on post2015.org – and is now cross-posted there too! – http://post2015.org/2013/12/10/a-bottom-up-data-revolution-for-post-2015/

The High Level Panel’s report on the post-2015 development agenda called for a “data revolution”. It’s already clear from this set of blog posts that there is strong enthusiasm about the new possibilities of data analysis to support the implementation and monitoring of a new development agenda, but also a wide range of interpretations of what this means, who will be the principal actors and who will benefit.

Often the benefit is seen in terms of having access to better statistics, real-time monitoring and feedback, big data analysis and open, transparent data on aid and government spending. These provide a treasure trove of data to support better monitoring and evaluation of development interventions, so that aid agencies can design better programmes, donors can allocate resources more efficiently and researchers can better test out their development theories.

But I’d argue that the most significant and also most challenging part of the data revolution will come from the bottom up. While aid transparency can help with accountability to funders or even to partner governments, the really interesting area where improved accountability is needed is with respect to those whom the aid is intended to help.

One promising area is in soliciting feedback and ideas for development projects directly from the communities where they are implemented. Both new technologies (such as SMS or online surveys) and old technologies (public opinion polls, paper questionnaires, interviews) can be used to help collect information on both the preferences of project beneficiaries and their levels of satisfaction with the services they are being provided. This is helpful both as a means of improving programme design to make it more effective and as a way of conferring the important right of giving the poor a voice (“nothing about us without us”). See “listening to the people we work for” for more on this idea.
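As a purely illustrative sketch (the post doesn’t prescribe any particular tool or format), here is how free-text SMS survey replies could be tallied into simple per-service satisfaction scores. The message format, service names and scores are hypothetical, and a real deployment would sit behind an SMS gateway or survey platform rather than a hard-coded list.

```python
from collections import defaultdict

# Hypothetical raw replies in the form "<service> <score 1-5>", e.g. "water 4".
# In practice these would arrive via an SMS gateway or an online survey export.
raw_replies = ["water 4", "health 2", "water 5", "school 3", "health 1", "water 3"]

def tally_satisfaction(replies):
    """Aggregate 1-5 satisfaction scores per service from free-text replies."""
    scores = defaultdict(list)
    for reply in replies:
        parts = reply.strip().lower().split()
        if len(parts) != 2:
            continue  # skip malformed messages rather than failing outright
        service, score = parts
        if score.isdigit() and 1 <= int(score) <= 5:
            scores[service].append(int(score))
    # Return (average score, number of replies) per service
    return {s: (sum(v) / len(v), len(v)) for s, v in scores.items()}

for service, (avg, n) in sorted(tally_satisfaction(raw_replies).items()):
    print(f"{service}: average satisfaction {avg:.1f} from {n} replies")
```

Even a crude tally like this can surface which services communities are least satisfied with; the harder part, as argued here, is closing the loop so that the feedback actually changes programme design.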

But people don’t always tell you what they really want or really think. Sometimes you also have to observe them and see how they act, or even try to “walk a mile in their shoes” to better understand the lives they lead, the challenges they face, the choices they make and why they make them. Ethnographic studies have been part of the development world for a long time, and the notion of “human centred design” is also not new, but a data revolution can help expand the use of these techniques and make them easier to do and more cost-effective. Use of “big data” to observe behaviour patterns, such as use of mobile phones, transport or health services, can help us understand much more about how people are really making choices. Similarly, use of remote sensing devices, hand-held cameras and recorders and other tools can help scale up ethnographic research and participatory evaluation, including giving individuals and communities the tools and skills to “document themselves” and share their own stories.

But an even stronger step, still in its early stages, is to empower citizens in developing countries not only to express their views or share their lives, but to provide them with the tools and skills to take advantage of the data revolution themselves. At a simple level this can mean helping them gain access to, and the skills to make use of, the data they need to make individual decisions (such as choosing between schools or health centres, or making healthy nutrition choices). But a more ambitious goal would be to help them develop the skills they need to mobilize and advocate for their own interests using the data that is out there (and is often about them), rather than relying on the goodwill and decisions of others with stronger technical skills and better financial resources to invest in using data.

Here the open data revolution is a good starting point for empowering citizens, but in reality most citizens, especially those in developing countries, lack the capacity to make effective use of this data. Instead, open data may create a new “digital divide” between those who have the ability to collect and analyze the new data and those who don’t, with rich-world governments, academia and private enterprise being the main beneficiaries of these new data sources while those we are trying to help are left behind.

In the end, if we really want to realize the promise of the data revolution, and use it to bring about sustainable change, then we need to think from the bottom up rather than the top down. How can we develop the capacities of the communities we seek to serve, including the most disadvantaged, so that they can participate fully in the new data revolution and lead their own development rather than relying on the goodwill and analytical capacities of others?

Written by Ian Thorpe

December 10, 2013 at 10:00 am

Posted in Uncategorized

Learning on the frontlines


A few months ago I was appointed “Learning Manager” for my office, responsible for leading an office learning plan and helping foster the creation of a culture of learning in the office as well as helping facilitate staff access to learning opportunities. This is not a full-time position, rather a set of additional responsibilities added on to my existing job.

The “Learning Manager” role is something that UNDP created for every office some years ago as a way of strengthening organizational learning in individual offices, and this, plus the considerable wealth of online courses available to staff via the intranet on the “Learning Manager System”, demonstrates a strong commitment by UNDP to foster learning.

At the same time, learning managers are often quite junior staff (I’m in the relatively rare position of being a learning manager and an actual manager too). Typically this role is lumped in with the Human Resources assistant position (or HR associate, as they are called in UNDP), which is also typically a local staff recruitment. Last week I participated in an orientation/skills development course for new learning managers, which was a great opportunity to speak to other learning managers and find out how they do their job.

What I heard was both daunting and encouraging. Many new learning managers were struggling to get traction on learning in their offices due to resource constraints, mixed levels of support from managers, lack of focus on learning due to workload, and the challenge of getting things done without any formal authority and on top of their “regular” job (and, I might add, unrealistic expectations from the organization about what a learning manager can physically manage to do).

But I also encountered a highly motivated and resourceful group who were finding different ways to achieve results in challenging circumstances. The shared challenge that all were trying to address was how to maximize office learning with limited time and money and no formal authority. I’m sharing here some of the ways, both strategic and tactical, that learning managers are getting the job done.

1. One of the key challenges is making the case for learning with the head of office and management team. Different approaches used for this include making the case for learning as an investment in office productivity (appeal to logic); reminding them that it is part of the “rules”, measured in the office scorecard, and comparing how the office is doing to similar offices, e.g. in the same region (appeal to authority); or emphasizing the effect it will have on staff morale and goodwill in the office, as well as helping staff at a personal level to deal with changes in the organization (appeal to emotion).

2. Another challenge is balancing the roles of facilitator and enforcer. Learning managers are expected to ensure that all staff do their mandatory online trainings (ethics, gender, security etc.), that the office has a learning plan, and that individuals have learning goals in their performance appraisals – yet they don’t have the authority to make people do this, especially those who are reluctant. Ultimately, though, the most fulfilling role for the learning manager, and probably the one that achieves the best learning results, is to foster a learning culture by responding to people’s needs, interests and aspirations, acting as a facilitator and coach to help people learn rather than trying to force them to do the compulsory things they may not be enthusiastic about.

3. At a tactical level, when budgets are tight it is often not cost-effective to send individuals on external training courses out of the country, and local opportunities may be limited. However, you can benefit from the extensive expertise and experience that is already in the office. Example approaches include: i) organizing a “skillshare” session where staff members share a skill they have (possibly from a previous job) with the rest of the office, either as a training course or as a coach; ii) having staff who go on external trainings or work travel debrief the office on their learning as a routine event or requirement; iii) taking advantage of visitors from HQ or regional offices and asking them to carry out a training or briefing as part of their visit; iv) inviting speakers from local partners.

4. Pooling resources – e.g. sharing learning opportunities with other UN agencies or with government and NGOs. This could be by organizing joint trainings or by having a reciprocal arrangement to allow people from other organizations to join trainings organized by the office in exchange for being able to send people to their trainings, and routinely sharing information on learning events with one another.

5. Make use of online resources – this can involve using online courses developed by the organization or licensed through an external provider; for example, UNDP staff have access to a wealth of UNDP and externally developed courses through their Learning Manager System. Other options include use of MOOCs (massive open online courses), webinars or other online and remote learning opportunities.

6. Mentoring and coaching – setting up individual peer-to-peer learning exchanges within the office or between offices in the same region. This can be valuable as it provides ongoing support rather than just an episodic training. One more sophisticated way to do this is to use the self-assessment peer-assist methodology (from Collison and Parcell’s Learning to Fly), where offices (or individuals) self-assess their learning needs against a set of criteria and are then paired up according to needs and strengths (see the short pairing sketch after this list).

7. Organizing regular learning events or learning days (e.g. once per month) where staff devote time to learning, to ensure learning is regular and recognized. Other similar approaches are sending out weekly TED talks, articles, presentations or other short pieces of interest to stimulate learning without consuming much time.

8. Some offices seek to regularly send staff on “detailed assignments” or give them “stretch assignments”. These are short-term opportunities to take on a more challenging assignment to fill in for a temporary vacancy or a colleague on extended or maternity leave, either within the same office or in another office. They may also be used in place of hiring external consultants for specific needs, e.g. preparations for a major UN event. These provide on-the-job learning that can be particularly helpful for national staff, who have to deal with the catch-22 of needing international experience in order to move to an international posting.

9. Find ways to reward learning by publicly acknowledging those who have completed learning activities (such as having them receive an award from the head of office at a staff meeting) or those who have contributed by sharing their knowledge and skills. There was also some discussion on the pros and cons of “name and shame” for those who don’t complete mandatory trainings, although I’m not personally in favour of this.

10. Network of learning managers – perhaps the most powerful way of sharing good ideas and learning opportunities, or even just getting moral support, is through networking between learning managers in different offices. Having access to experience and advice from other offices is an excellent way to improve learning, whether by sharing templates and examples, helping share resources, or providing feedback on potential courses or trainers. Perhaps the most valuable support, though, is in sharing advice on how to get management support and how to motivate learners.
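To make the pairing idea in point 6 concrete, here is a minimal, purely illustrative sketch of the matching step: each office rates itself on a few learning topics, and offices scoring low on a topic are suggested a peer assist with the office scoring highest on it. The office names, topics, scores and threshold are invented for illustration, and Collison and Parcell’s methodology involves much more than this simple matching.

```python
# Hypothetical self-assessment scores (1 = weak, 5 = strong) per learning topic.
self_assessments = {
    "Office A": {"facilitation": 2, "monitoring": 5, "partnerships": 3},
    "Office B": {"facilitation": 5, "monitoring": 2, "partnerships": 4},
    "Office C": {"facilitation": 3, "monitoring": 4, "partnerships": 1},
}

def suggest_peer_assists(scores, need_threshold=2):
    """For each weak topic of each office, suggest the strongest other office."""
    pairings = []
    for office, topics in scores.items():
        for topic, score in topics.items():
            if score <= need_threshold:
                # Pick the other office with the highest self-assessed score on this topic
                helper = max(
                    (other for other in scores if other != office),
                    key=lambda other: scores[other].get(topic, 0),
                )
                pairings.append((office, topic, helper))
    return pairings

for office, topic, helper in suggest_peer_assists(self_assessments):
    print(f"{office} could ask {helper} for a peer assist on {topic}")
```

In practice the scores would come from a shared self-assessment template and the pairing would be negotiated between offices, but even a simple table like this makes complementary strengths and needs visible.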

Written by Ian Thorpe

December 2, 2013 at 9:00 am

Posted in Learning