I recently rejoined UNICEF to head up a small team with the “modest” aim of developing an approach, systems and tools to support more effective knowledge exchange within the organization and with partners.
One of the biggest challenges is that there is not a strong culture of sharing knowledge and experience within the organization – and most exchanges are either through hierarchical and official channels or are informal through personal networks (and largely invisible).
As a decentralized organization working globally, UNICEF often has many similar parallel projects taking place in different parts of the organization – but the people working on them are often unaware of each other, or at least unaware of the opportunities for collaboration and experience sharing.
One element we want to try to introduce is the idea of “Working out Loud”.
Bryce Williams coined this term some years ago (and here is an early blog where he elaborated on the idea). Basically:
Working Out Loud = Observable Work + Narrating Your Work
That is: i) sharing your work as you go, while it is still in preparation (rather than when it is close to being finalized or already final), and inviting people to comment or contribute throughout the process; and ii) talking about your work, your observations and experiences as you do it, through blogging, Yammer posts, Twitter etc.
The aim of this is to allow people to see and provide inputs on what you are doing before it is fully cooked. That way you can see earlier whether it will really meet the needs of those you are doing it for, and whether other colleagues can strengthen your work by providing inputs and suggestions – and can help you to avoid pitfalls they can see but you can’t. Another benefit is finding other people who are working on similar projects with whom you might collaborate (instead of duplicating their work) or from whom you might learn – or whom you might be able to influence in how they go about their work. A third benefit is that it helps publicize your work and engage and interest people in it, and thus makes it more likely that it will be used (plus potential personal fame and fortune).
The big challenge with this is getting people started in doing this when it is not the usual way we do business. People are often reluctant to share early work when it may still be rough in terms of quality, presentation and also political correctness. There is particular resistance to doing this with external partners as there is a fear that showing anything less than thoroughly polished and fact checked material might damage our credibility and brand.
I personally think these fears are overblown, and the benefits of engagement outweigh the risks of not appearing polished enough – especially when we are clear that this is work in progress and we are seeking feedback to make it better. Greater openness in our work also goes along with the move towards greater transparency – i.e. explaining what we are doing while we work to accompany/narrate our open data and financial information, not only in a nice glossy edited end of project donor report.
But rather than simply making the case that this is so, I believe the only way to convince people that this is possible and highly valuable is to just do it and share what happens. Right now our team has taken the step that in our work on developing systems, tools and approaches for knowledge exchange we will practice what we preach i.e. we will do it all by working out loud ourselves – by sharing things at an early stage and sharing our reflections and learning as they happen. Apart from this we are identifying a few organizational processes (with willing process owners) that should be collaborative by nature but often aren’t (such as collecting inputs and comments on technical policy positions) to prove our point. To make this safe we are still doing this internally for the moment – but I’ll also be sharing what I can on this blog (rather than pontificating which was more frequently what I was doing on the blog before!).
Wish me luck and stay tuned as I let you know how it is going.
A couple of days ago I listened to an interesting webcast organized by UNICEF in its “Activate” series of talks – in this case on innovative approaches to advocate for child rights.
In a refreshing style, rather than asking seasoned experts how to advocate on youth issues, they asked actual successful youth advocates themselves.
The speakers all had interesting and inspiring stories to tell about their own advocacy projects which carry a lot of useful insights for other would-be youth advocates and the organizations that seek to work with them and to support them.
But what struck me most as a person working on knowledge sharing, with a side interest in transforming the UN, was how relevant some of the key ideas were to our continual discussions on how to improve development organizations themselves.
A few relevant take-aways:
1. Not being afraid of failure. One of the speakers, Erik Martin (@Eriklaes), spoke about how education often discourages failure but should instead encourage children to experiment and take risks, and how it is important to have a safe space to fail in order to learn. This is a challenge for aid organizations too, where the pressure is to deliver consistently rather than to risk failure in order to achieve greater gains.
2. The need to listen to and engage with the people you wish to influence or advocate for, in order to better understand their needs and constraints, and to involve them in producing solutions that will be relevant and effective for them. This is good practice for all parts of aid work, not only youth advocacy, and something that aid agencies need to do more systematically. But this lesson also applies to internal organizational issues – when you are making strategic plans, carrying out organizational restructuring, or pondering the future of the organization in the post-2015 world, it also makes sense to listen to and engage the staff of the organization, who are the ones who will be expected to make these changes happen – but too often the engagement is only superficial.
3. The importance of building networks of people for mutual support as well as to work together and share their insights and experience, particularly across different locations or groups of people. Just as international youth networks can be stronger through the creation of large diverse networks of supporters, similarly for professional aid workers there is a lot of benefit to having strong networks of peer professionals with whom to share ideas, get advice and provide mutual support – and yet often this is left to individuals when it is in an organization’s interest to support and strengthen these networks to help their staff be more effective. An important element that came up in the discussion was the benefit of bringing a diverse group of people together from different backgrounds and expertise in order to generate innovative ideas. Subject matter silos which stifle new ideas are also a well-known phenomenon in organizational culture and finding ways to create cross-sectoral networks and connections is also something we need to pursue.
In this blog I’m just picking up on the points that were particularly resonant for organizational change and knowledge sharing, but there were many other interesting points on youth engagement and advocacy. If this interests you, I’d recommend the whole webcast, which is available on UNICEF’s Activate talks site here, and on the UN webcast site here.
I know I’m a bit late commenting on the discussion of a recent World Bank paper that found that a third of all World Bank reports are never downloaded (I just switched jobs), but I’m fascinated by some of the challenges of use of evidence that it brings to light.
While the consultations and negotiations on the post-2015 development agenda and the SDGs (Sustainable Development Goals) are ongoing, inside the UN the discussion is already moving to whether or not the UN is ready and able to support this new agenda. This discussion has been labeled internally as the “Fit for Purpose” discussion, the general gist of which can be seen in this statement by the Deputy Secretary-General.
As a core member of the UN Transformation Network I’m very happy to see this internal soul-searching. I have been marginally involved in some of the discussions (I was co-drafting an issues paper based on a high-level meeting – which has since gone into another part of the internal policy machine) but wanted to share a few thoughts of my own about some of the things the UN can and should think about in order to stay relevant and effective.
[huge disclaimer: this next part is a few of my own thoughts on how to make the UN fit for purpose, it’s incomplete, and doesn’t in any way represent any official UN position – although I do hope they take some of these ideas into account] So here goes….
The UN was created over 60 years ago in a very different world from the one we are in today. The world continues to change rapidly, with a number of key trends to consider: the increasing number of middle- and higher-income countries; increasing disparities of wealth and access to rights and resources, including in those higher-income countries; an increasingly pressing need to deal with environmental sustainability and climate change; the increasing availability and use of new technologies (for good and ill); and an increasingly crowded and diverse field of development actors. Given that a new development agenda is also in the making – one that is likely to be more comprehensive and universal than the MDGs – it makes sense for the UN to reexamine its role and current business models.
Although the UN was designed for a different world, it has gone through several rounds of reform during its history and continues to do so now. Unfortunately change is slow in any bureaucracy, but in the UN it is made even more difficult by a complex governance structure and the large degree of consensus needed among agencies and UN member states for change to happen – consensus that is often lacking.
So what are some things that can be done? Much of recent UN reform has focused around increasing efficiency and effectiveness through strengthened accountability and reporting and administrative reforms, and to a certain extent on improving “coherence” – which in everyday language is the extent to which different parts of the UN work together effectively, or at least do not duplicate or contradict one another. While these three are important, I’d argue that the most pressing challenge the UN faces right now is to ensure it remains relevant to a changing world and a changing agenda.
One way to remain relevant is to try to predict the future and then to adapt the organization to be best suited to it. However rather than tying change to any specific demographic, ecological or economic shift I’d suggest three approaches the UN might use to be better able to stay relevant whatever the content of the new development agenda.
1. Listening to the people we work for – If the UN is to stay relevant it needs to listen more to its clients. Traditionally the UN’s key partners are member states – donors and programme countries. But we also need to listen more to the beneficiaries themselves – to know better what they want and need, and also to get feedback on how we are doing and what we need to improve. The post-2015 dialogues are a good example of this, and the current discussion on participatory monitoring for accountability could help inform how the UN (and governments and other partners) do this more systematically and effectively in the future.
2. Listening to staff – Typically in the UN, when we face a systemic issue we form a task force of senior-level officials or a high-level panel of external experts. While this approach can have the value of credibility and representation, it can also miss out on the wealth and breadth of experience and ideas within the UN system. Some of the best sense of what is needed in day-to-day work, as well as some of the most practical and innovative answers on how to address it, are most likely to come from frontline staff. We should therefore think about how we can better listen to staff when we are looking for ways to improve how we work, and what we work on.
3. Acting as a knowledge broker – connecting development partners to relevant knowledge and expertise, wherever it comes from, not just from within the ranks of the UN. The UN is uniquely placed, with its global presence, normative role and technical mandates, to bring together expertise from diverse sources and help make it accessible to partners (although we still struggle to share knowledge internally between agencies at present). This includes doing more to foster South-South knowledge exchange as well as North-South and South-North. This role is unique and is needed whatever the content of the post-2015 development agenda and whatever constellation of country typologies we have.
All three of these approaches are actually part of a broader strategy that I believe the UN needs to adopt: finding ways to be more nimble and quicker to respond to emerging issues, whatever they are. Right now change is often slow, so we can be quickly overtaken by events, and sometimes we cannot find agreement to make changes for the most important challenges. Rather than preparing the UN for any particular future, it might be better to equip the UN to reinvent itself and change course more easily – both substantively, in terms of the issues it is addressing, and operationally, by being able to quickly make the necessary changes in policy, structure, budgets and particularly human capacity in order to adapt to any possible future. In a rapidly changing world, we need to be able to change rapidly too, if we are to remain relevant.
Yesterday I had an interesting conversation with colleagues at UNDP who are organizing an event about “scaling up”.
The idea of scaling up successful pilots or innovations has long been one of the holy grails of aid work, and it seems we’re still not quite sure how to do it, or at least how to do it consistently, or how to “pick winners”, i.e. ideas that can be scaled successfully.
The conversation reminded me of a presentation I attended and an internal blog I wrote some years ago about how to spread good ideas when I was back with UNICEF. It occurs to me that quite often in looking at scaling up a successful pilot or prototype we tend to focus on i) identifying those ideas which have been successfully piloted and ii) for which we can use the evidence of success to mobilize resources from donors and domestically.
However, even when new ideas have been shown to work in a successful pilot or prototype (or have even been “proven” through extensive research and clinical-style testing), it’s not a guarantee that they will scale. A big challenge is the issue of “adoption”, i.e. how you can persuade others to apply them beyond presenting scientific evidence and cash – because those alone are not enough.
Below is a slight reworking of my old blog post that looks at some of the challenges of spreading (or diffusing) good ideas:
Colleagues working on communication for Development (C4D) at the meeting also felt that the conclusions of the paper were highly relevant to their work, and it seems there could be a promising common interest in different parts of the organization to pursue these ideas further. One obvious challenge both within UNICEF, and in disseminating innovations externally is that often quite a few of the ideal conditions for successful diffusion are not present, and we may have a varying ability to foster them. We therefore also need to do some thinking about what we can do that is useful even when we know we are not able to create the kind of environment that we would ideally like in order for new approaches to be adopted.
From the paper, one important aspect of whether a new innovation or approach is adopted is the nature of the idea itself. Below are highlighted some aspects of an innovation which can have an important impact on whether it is adopted (The text below is adapted from Nancy’s presentation with some added commentary from my side).
Relative advantage – Is the new approach more effective than the current strategy? Is it more cost-effective? If it is not perceived to have an advantage, it could be dead in the water, though even having an advantage does not guarantee adoption. (Note: this benefit needs to be clear to the adopter, and the benefit for the adopter might well be different from the perceived benefit to management or to the promoter of the approach.)
Compatibility – Is it compatible with users’ values, norms, ways of working, and perceived needs?
Simplicity – Simple is good! If the approach is complex, can it be broken down into smaller bites? The perception of complexity can also be partly overcome with demonstrations and hands-on experience.
Trialability – Is there space to try it out on a limited basis?
Observability – Are the effects of the innovation readily observable (and preferably measurable)?
Adaptability – Is there room to adapt it to local realities or to refine it? This seems particularly important for the dissemination of “good practices” spread through horizontal networks (and is very relevant to our work on lessons learned – it would seem to imply that these should be seen more as an idea bank than as templates or how-to guides).
Fuzzy boundaries (a related concept) – i.e. an innovation should have a fixed core of common elements used in all cases, but with other elements that can be adapted around the edges to meet different circumstances and needs. These should preferably emerge from repeated trials in different contexts. What is important here is that there is a set of core principles to the innovation that make it successful, but an accepted range of modifications that make it suitable for different applications.
Risk – If there is a high degree of uncertainty in adopting the new approach, then it can be perceived as personally risky and thus it is less likely to be adopted (especially in a risk-averse environment).
Task relevance – If a new approach is relevant to users’ work and improves their performance, it is more likely to be adopted, especially if it is feasible and easy.
Support – To spread an idea you need a support system for new implementations: a help desk, training, customization, implementation advice (even if you can see the benefit, it’s hard to do something new if you don’t know how – in this context I’d also add that being able to be in contact and share experiences with others who have tried, or are also trying out, the new approach can be invaluable).
An implication of this seems to be that incremental changes are easier to promote than radical ones – since they fit more easily into existing norms and values and the benefit can appear more tangible, even if the potential improvement is smaller. At the same time, just because an idea is easier to spread, it doesn’t mean that it is better. There are some interesting trade-offs here between ideas that can be easily implemented and spread and those which might have a more profound impact.
So, what do you think of these?
Do they hold true from your own experience?
What do you think are the lessons we can learn for our own work in trying to share new approaches between offices, in adopting good ideas from the external world, or in getting our partners to take on proven approaches that are new to them?
Post script: back when I wrote this I had planned to write something about the types of people and organizations that effectively spread new ideas. I never got to this but I’ve now put it on my list for a future blog post.
Last week we ran a session in our office on “Building the evidence base, and evidence-based reporting”, which was identified as one of our priority areas of work for this year. The purpose was to unpack a little what we mean by “evidence” in UN coordination work, what we are lacking, and what we can do about it.
Perhaps unsurprisingly the biggest gap we identified was evidence of the impact of what we do. Donors have been willing to invest in UN Coordination with the assumption that it will lead to better results, but now under pressure from their own constituencies are starting to ask for the proof.
But what do we mean by results? Some have specifically asked us to show evidence of the impact of UN coordination on development results. Ideally we would love to show this – it is, after all, what motivates us to do what we do – but trying to prove (with hard evidence) how many children’s lives were saved, how many jobs were created, or how much economic growth occurred as a result of joint workplans, pooled funding mechanisms or regular sector meetings is a tough job – rather like looking for the impact of a butterfly’s wings on hurricane patterns.
Why is this so difficult? Well, for starters, there’s a lot of debate about whether or not aid contributes to development at all – but even assuming it does, attribution is difficult. Development is the result of the actions and resources of many players – bilateral donors, multilateral banks, NGOs, the private sector and not least the government – so identifying the UN’s contribution to development versus that of the other actors is very difficult. Now imagine trying to determine how particular process changes in the way we work together affect our ability to contribute to those results. Furthermore, the real development results of an action are only fully apparent many years after it is taken, so the impact of what we do now might only be measurable in 5-10 years’ time or more. We also don’t have a control case against which to compare – you can’t randomly choose to coordinate half of your offices, leave the other half uncoordinated, and compare the difference.
So if it’s next to impossible to answer the development impact question with confidence what can we do? A few thoughts:
1. Lower your expectations – measure what you can actually measure. Look more at outputs or process outcomes rather than development impact. Look at those things where the change is more easily quantified, such as greater efficiency (e.g. a reduced number of person-hours to do something), faster response times, reduced prices through joint procurement, reduced duplication, or greater population reach through joint work. These things can be measured and can demonstrate the value of coordination, assuming they do in fact improve – but we can and must measure them to see whether they do make things better and how big the gains are.
2. Use what we have in terms of linking coordination to development results however limited it may be. All UNDAFs are supposed to be evaluated and although the time frame is too short to show impact they can look at both coordination and delivery and see how they relate to one another. Similarly a number of evaluations of joint programmes have been done on country and global programmes (for example all joint programmes funded by the MDG Achievement Fund were evaluated – a treasure trove of information if someone had the capacity to do a meta-analysis of them all) – again these can help us to determine the relation although they are far from the complete story.
3. Collect individual case studies that illustrate the impact of coordination and explain the chain of events through which it happens. Case studies help illustrate in a real-life situation how coordination takes place and what the potential and actual gains are. They illustrate both the challenges and the gains in a way that is tangible and credible. While they are not “scientific” they can have strong explanatory power. The key here is to present both the successes and the failures – this contributes more to future learning and improved approaches and is also more credible – we need to avoid the temptation of only sharing the positive, thinking that this is what donors want to hear. To overcome the limitations of individual examples, whose success may be contextual, it is important to collect many case studies from different contexts. This improves confidence in the observations, and can also be a basis for meta-analysis to look for broader patterns and lessons (see an example here of case studies on human rights mainstreaming).
4. Make a plausible case to look at how process impact can lead to actual impact e.g. if joint procurement of vaccines reduced prices by 10% then this means we are able to vaccinate 10 % more children. If we reduce reporting burden by 50% then staff have 50% more time for programming (or we need 50% less staff). This is of course theoretical impact but it does clearly demonstrate the opportunity cost of maintaining the status quo, strengthening the case for change (and investing in it).
5. Take a look at perceptions – even though we can’t always generate solid quantitative measures of improved effectiveness and efficiency, it’s worth looking at qualitative measures. Government counterparts and other partners often have a sense whether we are working more effectively with them and reducing the burden on them, delivering better advice etc. based on their regular interactions with us. Ongoing dialogue and periodic feedback surveys or polls can be very informative as to how well our main clients think we are doing and whether we are improving over time. We can also anonymously ask our staff the same questions to see whether or not we feel we are doing better. The good news is we have the tools to do this or can easily set them up.
6. The last, and possibly the most important, point is that we need to get real with donors and the public. We need to have a hard, truthful conversation where we explain what we can and can’t say about coordination and development, particularly what we can’t say. Often we try to please without facing the truth. In fact most of our donors and partners are struggling with the same problems in showing the impact of their own work to a skeptical public. Maybe it would be better to work with them to figure out how to make the most of the information we have, how to educate the public on what we can and can’t know, and to share experience on how to communicate this more effectively. Ultimately we need to reassure donors and the public that their money is in good hands and that the reforms we are undertaking are making a difference, without being misleading about how much we know about the magnitude of this difference or the exact formula which delivers development. A lot of this is not just in how we measure results, but in how we communicate them – something I’ve written about in more detail before.
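The back-of-envelope arithmetic in point 4 can be sanity-checked quickly. Here is a small, purely hypothetical sketch (all figures invented for illustration, not real procurement data) showing that a 10% price reduction actually lets a fixed budget buy slightly more than 10% extra doses, since the same money is divided by a lower price – so the plausible-case framing is, if anything, conservative:

```python
# Hypothetical back-of-envelope check of "process impact -> actual impact".
# All figures are made up for illustration.
budget = 1_000_000            # fixed procurement budget (USD)
old_price = 10.0              # price per vaccine dose before joint procurement (USD)
new_price = old_price * 0.90  # price after a 10% reduction via joint procurement

old_doses = budget / old_price  # doses purchasable at the old price
new_doses = budget / new_price  # doses purchasable at the reduced price
extra_pct = (new_doses / old_doses - 1) * 100

print(f"Extra doses purchasable with the same budget: {extra_pct:.1f}%")  # ~11.1%
```

Swapping in real budget and price figures would turn this illustration into an actual estimate of the opportunity cost of not coordinating.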
Measuring and transparently sharing what evidence we can gather, being honest about what we don’t know and sharing real stories and examples of our work is probably the best we can do with this difficult challenge.