Continuing the tradition from 2013 and 2012 - this guest post is a fabulous overview of 2014 forecasts and predictions from the world of development and aid from former colleagues in UNICEF, who have kindly agreed to allow me to share it with all of you. Thanks to Eva Kaplan, Katell Le Goulven, Nima Hassan Kanyare, Yulia Oleinik and Maggie Ronoh who pulled together and synthesized this great reading list!
Welcome to 2014! As UNICEF enters the first year of its new strategic plan, gears up for celebrating the 25th Anniversary of the Convention on the Rights of the Child, and embarks on the final push towards the MDGs, we step back and take a broad look at predictions for the year ahead.
First, a check-in on how we did last year. The 2013 predictions were largely borne out. The sluggish growth predicted for 2013 was indeed just that, with the global economy growing at just 3% compared to the projected 3.6%. Some conflicts also caught the global community by surprise—the military coup in Egypt, the eruption of violence in CAR, and South Sudan’s move towards conflict. We also had the expected unexpected, with extreme weather events like Typhoon Haiyan continuing to catch us off guard.
And what’s in store for 2014?
The global economy is on the up and up. . .
We’re not back to where we were before the economic crisis, but as the Economist predicts, the global economy should grow at 4%, up from 3% last year.
- This growth will be “powered by America.” Mark Zandi of Moody’s Analytics predicts that the US will “experience its fastest growth in a decade.”
- According to the OECD, the pace of global growth will be moderated by slowed growth in emerging markets. China will undergo economic reforms in 2014, with most growth projections hovering around 7.5%. This is down almost 3 percentage points from 2010, but some economists embrace the lower number, arguing that it could mean necessary reforms are taking hold. Brazil’s economy will only grow at 3%, while India could achieve 6% growth, despite political uncertainty.
- “European policy makers are buoyant,” says the Council on Foreign Relations’ Robert Kahn. The Eurozone has stepped away from the brink of fiscal disaster and is growing. Japan is poised for growth nearing 2%.
- The UN World Economic Situation and Prospects (WESP) predicts that African economies will remain strong, growing an average of 4.7%. LDCs as a whole will grow by an impressive 5.7%, according to WESP. Fragile states also make a strong showing—of the Economist’s 12 fastest-growing economies, five are fragile states.
. . . but no good news for inequality or global unemployment
- The World Economic Forum’s Klaus Schwab predicts a continuing exacerbation of inequality due to “labour’s declining share of national income.” The World Economic Forum’s Outlook on the Global Agenda 2014 also, once again, places widening inequality high on the list of global trends, and cites an increasing mistrust of economic policy, including both economic policy makers and economic institutions, like banks.
- The World Bank’s Global Economic Prospects echoes this, underlining that “policy stasis” in the face of growth could be a particular challenge for developing countries.
- The World Economic Forum points to the emergence of “a ‘lost’ generation of young people coming of age in the 2010s who lack both jobs and, in some cases, adequate skills for work, fuelling pent-up frustration.” Robert Kahn reminds us that youth unemployment averages 24% in the Eurozone and has exceeded 35% in several countries.
- OECD’s Director of Labour, Employment, and Social Affairs Stefano Scarpetta warns that without embracing policies or innovations to redress the imbalance, these trends could manifest in social unrest. The Economist’s index of social unrest ranks 19 countries as “very high risk,” including both low- and middle-income countries such as Bangladesh, Bolivia, Greece and Nigeria. An additional 46 countries are labelled “high risk.”
Changing leadership . . .
Calling 2014 “a huge year for democracy,” the Economist notes that 40 elections will take place this year, representing 42% of the world’s population. With India, Indonesia, Afghanistan, the European Union, and several Latin American countries all holding major elections, there is the possibility of political shifts in Asia, Latin America, and Europe. A great graphic of all 2014 elections is here.
De-escalating, continuing and growing conflicts. . .
This year will mark 100 years since the outbreak of World War I. This could be a time for reflection on how conflict—and our ability to manage conflict—has evolved. Sadly, on the whole, predictions on 2014’s conflicts do not look good.
On the upside, there continues to be cause for a careful optimism in Myanmar and Colombia; a major diplomatic effort is underway around Iran’s nuclear programs; and Pakistan experienced its first ever democratic handover of power. However, overall, predictions are pessimistic. Foreign Policy’s 2014 list of conflicts to watch, by Louise Arbour, compares as follows to the 2013 list: “Five entries are new: Bangladesh, Central African Republic, Honduras, Libya, and North Caucasus. Five remain: Central Asia, Iraq, the Sahel, Sudan, and Syria/Lebanon.”
Changing technology hotspots. . .
In 2014, as Facebook celebrates its 10th anniversary and mobile phone subscriptions outnumber people, old paradigms in technology development will shift:
- The Economist’s Leo Mirani notes the best ideas for new technology will likely emerge from unlikely places as “the ‘developing world’ turns into the ‘developer world.’” The Brookings Institution’s Homi Kharas predicts that developing markets will continue to expand “leapfrog” technologies such as mobile banking, with drones poised to be the next game-changer in development, from delivering humanitarian aid or vaccines to monitoring poaching of endangered animals.
- Hans-Holger Albrecht, president and chief executive of Millicom International Cellular, predicts that major telecoms will focus on emerging markets—manufacturing smart phones that are ever cheaper, developing digital services and apps that are ever more locally appropriate, and generating ever more data.
Be sure also to check out the innovations forecast from UNICEF’s very own innovators, Erica Kochi and Chris Fabian.
Shifting sources of development finance. . .
Expect development financing to be at the centre of lively debate during the post-2015 discussions.
- Predictions on ODA can only be described as confused. In the EU and Australia, foreign aid budgets continue to be marked by a decided uncertainty, while in the US ODA for 2014 is projected to decrease by 8% from last year. While much of this is accounted for by the decreasing budget for Afghanistan, contributions to international agencies are down 9%, with increases in budgets to organizations that focus on multi-stakeholder arrangements such as GAVI and the Millennium Challenge Corporation.
- Development Impact Bonds formally took the stage in 2013. It might not be a fully baked sector quite yet, but Center for Global Development (CGD) is betting that early investments from trusts, foundations, and development finance institutions will help grow the field.
- Domestic resource mobilization to fill development financing shortfalls will be a key topic in post-2015 debates. The World Bank notes, “Sub-Saharan African countries collected nearly US$10 in own-source revenue for every dollar of foreign assistance received.” However, on both the income and expenditure side, there remains much work to be done to strengthen government efficacy.
Action and inaction: A big year for international development
Challenges that have been the centre of development debates for the past decade will remain simmering while some new initiatives will rise in prominence.
- As we enter the final stretch of the world’s first-ever global development targets, the Millennium Development Goals, discussions on the post-2015 goals will hit a high note. The new goals, to be agreed upon in September 2015, will likely expand the targets in the MDGs and enhance the focus on sustainability. More tricky than the what will be the how. Engaging middle-income countries will be critical, says Alex Evans, Senior Fellow at NYU’s Center on International Cooperation. “Governments agreed at [last] year’s UN General Assembly that the post-2015 goals should be universal, targeting not only the 1 billion people living in absolute poverty, but all 7 billion of the world’s inhabitants. The reality, though, is that the new development agenda will be anything but that unless middle-income countries engage with it seriously – and at present, it’s unclear what, if anything, they really want or feel they stand to gain.”
- Climate change action remains on the path of “too little, too late.” Even as the post-2015 agenda emphasizes sustainable development goals, most predictions are pessimistic about real action on climate change. As Christiana Figueres, the Executive Secretary of the United Nations Framework Convention on Climate Change notes, “There’s action nationally, internationally, and on the ground, but it is absolutely not enough.” The global population is projected to grow by 82 million in 2014. This could exacerbate the “superlative” trend of recent years, with extreme weather events on the rise even as population pressures erode our ability to manage them.
- Electricity will be at the fore, with the UN kicking off “the decade of sustainable energy for all.” The US government recently announced the Power Africa Initiative, which would make clean energy a major USAID priority. The focus of these efforts will be clean electricity provision, achieved through smart multi-sectoral partnerships.
- Toilets will also have their day, and here we may also expect to see a leapfrogging technology. In 2013, the Gates Foundation announced the winners of its waterless toilet challenge and in 2014 will begin to roll out these “next generation toilets” in India. Africa is also poised for a toilet revolution, with initiatives like Sanergy’s toilet franchise reinventing the entire sanitation value chain.
At the front lines: critical actors in development
Much has been made of the changing relationship to the private sector. However, the space for development actors (and action) is even broader in practice:
- Cities are on the forefront of development progress. As populations continue to shift to cities, achieving development outcomes will increasingly put cities in the spotlight on everything from food security to climate change. As the Atlantic Council’s Jeff Lightfoot states, “Cities have clout, both in the countries of which they are a part, but also independently and as a network of global cities.”
- They may not get headlines, but the role of last-mile service workers was highlighted in a handful of studies in 2013, including a UNICEF report. As the Institute for Development Studies’ Lawrence Haddad notes, “Policy is what policy does, goes the saying, and policy only does if frontline workers can implement it and have the incentive to do so.” For 2014, he predicts that “frontline workers will increasingly be front and center”.
Final thoughts: keeping children at the centre
The Center for Global Development doesn’t publish predictions, but rather wishes. This year, CGD President Nancy Birdsall crowdsourced hers, inviting readers to contribute. From the suggestions, she compiled her list, ending with this gem:
“My final wish is simple: Count every child because every child counts, starting with a drive for increased birth registration and improved service delivery for children in low-income developing countries. According to UNICEF estimates, some 230 million children under 5, about one-third of the total, are not registered and therefore do not officially exist. New approaches, including SMS, linking registration to the ID of mothers and strengthening this through the use of biometric data, can accelerate registration even in poor countries and roll out basic services far more efficiently. This should be complemented by a global effort to provide retroactive registration to the approximately 750 million unregistered people under 16.”
What are your predictions for 2014? We certainly look forward to staying in touch and collaborating with you in the coming year!
Eva Kaplan, Katell Le Goulven, Nima Hassan Kanyare, Yulia Oleinik, Maggie Ronoh
Multilateral System Analysis Unit, UNICEF
We value your feedback – please take a moment to tell us what you think (firstname.lastname@example.org) or leave a comment on the blog.
Disclaimer: The opinions expressed are those of the authors and editors and do not necessarily reflect the views of UNICEF, nor of any particular Division or Office. References to a non-UNICEF site does not imply endorsement by UNICEF of the accuracy of the information contained therein or of the views expressed.
Right now innovation is one of the hottest topics in development. Even the UN is all over it :-). The pioneering work started at UNICEF has spread: UNDP, UNHCR, OCHA and many others are now creating innovation teams, strategies and labs to help their programmes (and organizations?) become more innovative.
As this great paper from Bruce Jenks and Bruce Jones nicely elaborates, the UN, and the “development system” more broadly, is at a crossroads. The current business models were designed in a different age and need to change, not only to adapt to the emerging post-2015 development agenda, but also to the reality that change is permanent and accelerating, so we need to be more agile to keep up and stay relevant and effective.
In the past the most common way to test out new ideas and programme approaches was the “pilot programme” followed by “rigorous evaluation”. The problem with this was that a lot of pilots were created, but few were scaled up to national or global programmes. The pilots would often run for years, but with a kind of loving attention and specific starting conditions that couldn’t easily be replicated (like Millennium Development Villages on a smaller scale). The other challenge was that these projects needed to be run as designed for a couple of years before they could be evaluated (preferably against a control group, or even in an RCT). This meant there was little scope to adapt them based on feedback, on-the-ground reality and changing circumstances. Many of these pilots were also based on theories brought from the outside rather than on local participatory design.
The new wave of innovation projects has done a lot to address these shortcomings. Projects are developed using participatory approaches such as “human centred design” so they better respond to on-the-ground reality. They are quickly iterated based on feedback, and there is an emerging (if still challenging) culture of being willing to fail quickly and start again. There is now a proliferation of trainings, manuals, guides and checklists for those who want to give this approach a try.
But looking ahead, I see a number of challenges in making the most of this welcome development. How do we make sure that this new approach has a systemic and scalable effect on how we do our work? Here I see three related challenges:
1. How do we make the culture and approaches of innovation more mainstream in our organizations, so that innovation becomes a regular part of how we do our work rather than another pilot project, or a high-profile, high-visibility approach that organizations use to say “look, we’re changing” while keeping most of their operations (and resource allocations) chugging along under the existing (and tired) model?
2. How do we scale the learning from the individual innovations themselves? A number of very successful innovations have been developed, but how do those become adopted or adapted widely enough to generate large-scale impacts on development (rather than a significant but localized effect, as is currently the case)?
3. How do we innovate in the area of policy, or use innovation to inform policy making? In many countries the main game in town for external development actors is increasingly support to upstream policy work. This is inherently national, and thus at scale, but innovating at the policy level and creating a space for experimenting, failing and learning is politically challenging for governments.
I don’t have all the answers to these but here are a few thoughts on each of them:
On the first, a key step would be to “mainstream” innovation within the work of the organization as a part of (but not the whole of) the programming toolkit, rather than as a parallel process or set of discrete projects. This would imply developing programme guidance that allows for innovation but also manages the risks involved and explains when and how the approach should be used. A number of our current programming instruments, such as work planning, monitoring, budget management and procurement/partnership, need to be revised to remove bottlenecks to innovation. But this is equally a “hearts and minds” issue, where there is a need for clear messages and behaviour modeling from leadership to demonstrate that innovation is encouraged (and to avoid situations where senior leaders encourage innovation in their words while their actions send a quite different message). Skill building and knowledge sharing are also key for staff to learn how to do it. In this model a team of innovation experts is still essential, but their role becomes more one of champion, policy setter, capacity builder and advisor than one of project manager.
On scaling up the knowledge from the innovations themselves, there are a couple of challenges. Innovations are often tailor-made to a specific context and may not have a strong body of evidence or a “theory of change” behind them that explains the underlying method through which they work. This makes it hard to extract the generalizable learning that can be applied elsewhere. But there are a few things that can be done to mitigate this problem, such as systematic data collection around innovations, and formative evaluation of the most successful to better understand how they work. Another is to share concrete examples and replicate them as a “first prototype” in new contexts to see how quickly they can be adapted to fit the next context. This can both make it easier to generate ideas and also help determine which parts are common features of similar projects in different locations and which are elements that need context-specific customization. Another approach is to try similar experiments in multiple locations and see which work and which don’t, to see what patterns emerge. But this requires a more coordinated approach to innovation than the “let a thousand flowers bloom” approach. Short of this, enhanced networking among innovators can help spread ideas.
Another challenge with scaling up innovation though is that the skills to manage a programme at scale are different from those required to innovate in the first place. As a project becomes more successful and matures there is less ideation and evolution and more traditional “good management” and standardization. The challenge is to organize a smooth transition between these phases so the project management style evolves as the project matures. Another idea is trying to find ways that when new ideas are tested there is already some consideration of scale from the outset. This could be in the form of constraints on the initial design that only allow designs that can function at larger scale without a high degree of customization or a high degree of technical support.
On the challenge of innovating in the area of policy: while it’s not wise to fail, even fast, in national policy initiatives, small-scale experiments can help identify and test possible policy actions before they are officially adopted. In particular they can help determine not only the likely effects of a policy, but also different approaches to implementing it in practice. This is particularly promising in the complex areas of social policy and behaviour change, for which technical fixes don’t work. But it requires carving out a space for small-scale policy experiments that are designed to influence policy at the macro level (such as the UK behavioural insights team).
A related challenge for public sector innovation is the low tolerance for failure, even for experiments, on the part of the public who funds it. An interesting approach used in the early days of New York City’s innovation work was to tap into private and foundation funding, which is more tolerant of risk and in the right circumstances might be willing to invest for social good rather than just economic gain. This also has potential to be applied in other countries, and particularly in the UN system, where we are looking in any case to diversify funding sources. New partnerships with the private sector could actually be a spur for increased innovation, through both funding and expertise. Even with traditional funding sources it might be wise to take a portfolio approach to public spending, with an explicit “set-aside” for innovative activities that follows a different set of rules for planning, budgeting and evaluation, allowing more risk, but only on a small part of the overall budget.
Another challenge for policy innovation is that what works in practice is not always what is popular politically and so the policy process often takes a different direction from what the evidence from experiments would indicate. This is equally a problem with traditional “evidence based advocacy” and scaling up in general. A potential advantage for innovators is that they may be able to mobilize a constituency of supporters around a successful experiment in a way that a traditional researcher cannot.
Right now innovation is very much “the new thing”, but if we are not able to come up with sound approaches to mainstreaming it and scaling up the results then it may be just another development fad rather than a new way of doing business. If we do, innovation won’t be as “sexy” as it is now, but it will be making a more lasting difference.
This post was inspired by the blog series ‘What kind of ‘data revolution’ do we need for post-2015?’ on post2015.org - and now cross posted there too! - http://post2015.org/2013/12/10/a-bottom-up-data-revolution-for-post-2015/
The High Level Panel’s report on the post-2015 development agenda called for a “data revolution”. It’s already clear from this set of blog posts that there is both a strong enthusiasm about the new possibilities of data analysis to support the implementation and monitoring of a new development agenda, but also a wide range of interpretations of what this means, who will be the principal actors and who will benefit.
Often the benefit is seen in terms of having access to better statistics, real-time monitoring and feedback, big data analysis, and open, transparent data on aid and government spending. These provide a treasure trove of data to support better monitoring and evaluation of development interventions, so that aid agencies can design better programmes, donors can allocate resources more efficiently, and researchers can better test their development theories.
But I’d argue that the most significant and also most challenging part of the data revolution will come from the bottom up. While aid transparency can help with accountability to funders or even to partner governments, the really interesting area where improved accountability is needed is with respect to those who the aid is intended to help.
One promising area is in soliciting feedback and ideas for development projects directly from the communities where they are implemented. Both new technologies (such as SMS or online surveys) and old technologies (public opinion polls, paper questionnaires, interviews) can be used to collect information on both the preferences of project beneficiaries and their levels of satisfaction with the services they are being provided. This is helpful both as a means of improving programme design to make it more effective, and as a way of conferring the important right of giving the poor a voice (“nothing about us without us”). See “listening to the people we work for” for more on this idea.
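As a toy sketch of the kind of aggregation such feedback enables (the service-point names and the 1–5 rating scale here are invented for illustration), a few lines of Python are enough to tally average satisfaction per service point:

```python
from collections import defaultdict

# Hypothetical feedback records: each SMS or survey response carries a
# service point identifier and a satisfaction rating from 1 (poor) to 5 (excellent).
responses = [
    {"site": "clinic_a", "rating": 4},
    {"site": "clinic_a", "rating": 2},
    {"site": "school_b", "rating": 5},
    {"site": "school_b", "rating": 4},
    {"site": "clinic_a", "rating": 3},
]

def average_satisfaction(records):
    """Group ratings by service point and return the mean rating for each."""
    by_site = defaultdict(list)
    for record in records:
        by_site[record["site"]].append(record["rating"])
    return {site: sum(vals) / len(vals) for site, vals in by_site.items()}

print(average_satisfaction(responses))
# clinic_a averages 3.0, school_b averages 4.5
```

In practice the records would come from an SMS gateway or survey tool rather than a hard-coded list, but the point is that the analysis step can be this simple; the hard part is collecting honest, representative responses.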
But people don’t always tell you what they really want or really think. Sometimes you also have to observe them and see how they act, or even try to “walk a mile in their shoes” to better understand the lives they lead, the challenges they face, the choices they make and why they make them. Ethnographic studies have been part of the development world for a long time, and the notion of “human centred design” is also not new, but a data revolution can help expand the use of these techniques and make them easier to do and more cost-effective. The use of “big data” to observe behaviour patterns, such as use of mobile phones, transport or health services, can help us understand much more about how people are really making choices. Similarly, the use of remote sensing devices, hand-held cameras and recorders, and other tools can help scale up ethnographic research and participatory evaluation, including giving individuals and communities the tools and skills to “document themselves” and share their own stories.
But an even stronger step, still in its early stages, is to empower citizens in developing countries not only to express their views or share their lives, but to provide them with the tools and skills to take advantage of the data revolution themselves. At a simple level this can mean helping them gain access to, and the skills to make use of, the data they need to make individual decisions (such as choosing between schools or health centres, or making healthy nutrition choices). But a more ambitious goal would be to help them develop the skills they need to mobilize and advocate for their own interests, making use of the data that is out there (and is often about them), rather than relying on the goodwill and decisions of others with stronger technical skills and better financial resources to invest in using data.
Here the open data revolution is a good starting point for empowering citizens, but in reality most citizens, especially those in developing countries, lack the capacity to make effective use of this data. Instead, open data may create a new “digital divide” between those who have the ability to collect and analyze the new data and those who don’t, with rich-world governments, academia and private enterprise being the main beneficiaries of these new data sources while those we are trying to help are left behind.
In the end, if we really want to realize the promise of the data revolution, and use it to bring about sustainable change, then we need to think from the bottom up rather than the top down. How can we develop the capacities of the communities we seek to serve, including the most disadvantaged, so that they can participate fully in the new data revolution and lead their own development rather than relying on the goodwill and analytical capacities of others?
A few months ago I was appointed “Learning Manager” for my office, responsible for leading an office learning plan and helping foster the creation of a culture of learning in the office as well as helping facilitate staff access to learning opportunities. This is not a full-time position, rather a set of additional responsibilities added on to my existing job.
The “Learning Manager” role is something that UNDP created for every office some years ago as a way of strengthening organizational learning in individual offices. This, plus the considerable wealth of online courses available to staff via the intranet on the Learning Management System, displays a strong commitment by UNDP to fostering learning.
At the same time learning managers are often quite junior staff (I’m in the relatively rare position of being a learning manager and an actual manager too). Typically this role is lumped in with the Human Resources assistant position (or HR associate as they are called in UNDP) which is also typically a local staff recruitment. Last week I participated in an orientation/skills development course for new learning managers which was a great opportunity to speak to other learning managers and find out how they do their job.
What I heard was both daunting and encouraging. Many new learning managers were struggling to get traction on learning in their offices due to resource constraints, mixed levels of support from managers, lack of focus on learning due to workload, and the challenge of getting things done without any formal authority and on top of their “regular” job (and, I might add, unrealistic expectations from the organization about what a learning manager can physically manage to do).
But I also encountered a highly motivated and resourceful group who were finding different ways to achieve results in challenging circumstances. The shared challenge that all were trying to address is how to maximize office learning with limited time and money and no formal authority. I’m sharing here some of the ways, both strategic and tactical that learning managers are getting the job done.
1. One of the key challenges is making the case for learning with the head of office and management team. Different approaches used for this include making the case for learning as an investment in office productivity (appeal to logic); reminding managers that it is part of the “rules” and measured in the office scorecard, and comparing how the office is doing to similar offices, e.g. in the same region (appeal to authority); or emphasizing the effect it will have on staff morale and creating goodwill in the office, as well as helping staff at a personal level to deal with changes in the organization (appeal to emotion).
2. Another challenge is balancing the roles of facilitator and enforcer. Learning managers are expected to ensure that all staff do their mandatory online trainings (ethics, gender, security etc.), that the office has a learning plan, and that individuals have learning goals in their performance appraisals – yet they don’t have the authority to make people do this, especially those who are reluctant. Ultimately, though, the most fulfilling role for the learning manager, and probably the one that achieves the best learning results, is to foster a learning culture by responding to people’s needs, interests and aspirations, and acting as a facilitator and coach to help people learn rather than trying to force them to do the compulsory things they may not be enthusiastic about.
3. At a tactical level, when budgets are tight it is often not cost-effective to send individuals on external training courses out of the country, and local opportunities may be limited. However, you can benefit from the extensive expertise and experience that is already in the office. Example approaches include i) organizing a “skillshare” session where staff members share a skill they have (possibly from a previous job) with the rest of the office, either as a training course or as a coach; ii) having staff who go on external trainings or on work travel debrief the office on their learning as a routine event or requirement; iii) taking advantage of visitors from HQ or regional offices and asking them to carry out a training or briefing as part of their visit; iv) inviting speakers from local partners.
4. Pooling resources – e.g. sharing learning opportunities with other UN agencies or with government and NGOs. This could be by organizing joint trainings or by having a reciprocal arrangement to allow people from other organizations to join trainings organized by the office in exchange for being able to send people to their trainings, and routinely sharing information on learning events with one another.
5. Make use of online resources – this can involve using online courses developed by the organization or licensed through an external provider; for example, UNDP staff have access to a wealth of UNDP and externally developed courses through the Learning Management System. Other options include the use of MOOCs (massive open online courses), webinars or other online and remote learning opportunities.
6. Mentoring and coaching – setting up individual peer-to-peer learning exchanges within the office or between offices in the same region. This can be valuable as it provides ongoing support rather than just an episodic training. One more sophisticated way to do this is to use the self-assessment peer-assist methodology (from Collison and Parcell’s Learning to Fly) where offices (or individuals) self-assess their learning needs against a set of criteria and then are paired up according to needs and strengths.
7. Organizing regular learning events or learning days (e.g. once per month) where staff devote time to learning, to ensure learning is regular and recognized. A similar approach is sending out weekly TED talks, articles, presentations or other short pieces of interest to stimulate learning without consuming much time.
8. Some offices seek to regularly send staff on “detail assignments” or give them “stretch assignments”. These are short-term opportunities to take on a more challenging assignment, filling in for a temporary vacancy or a colleague on extended or maternity leave, either within the same office or in another office. They may also be used in place of hiring external consultants for specific needs, e.g. preparations for a major UN event. They provide on-the-job learning that can be particularly helpful for national staff, who face the catch-22 of needing international experience in order to move to an international posting.
9. Find ways to reward learning by publicly acknowledging those who have completed learning activities (such as having them receive an award from the head of office at a staff meeting) or those who have contributed by sharing their knowledge and skills. There was also some discussion on the pros and cons of “name and shame” for those who don’t complete mandatory trainings, although I’m not personally in favour of this.
10. Network of learning managers – perhaps the most powerful way of sharing good ideas and learning opportunities, or even just getting moral support, is networking between learning managers in different offices. Having access to experience and advice from other offices is an excellent way to improve learning, whether by sharing templates and examples, pooling resources, or providing feedback on potential courses or trainers. Perhaps the most valuable support, though, is sharing advice on how to get management support and how to motivate learners.
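The self-assessment peer-assist idea in point 6 can even be sketched in code. Here is a minimal illustration, assuming offices rate themselves 1–5 against a shared set of criteria; the office names, criteria and scores below are entirely hypothetical:

```python
# Illustrative sketch of pairing offices for peer assists based on
# self-assessed scores (1 = weak, 5 = strong) against shared criteria.
# Office names, criteria and scores are hypothetical examples.

scores = {
    "Office A": {"monitoring": 2, "advocacy": 5, "supply": 3},
    "Office B": {"monitoring": 5, "advocacy": 2, "supply": 4},
    "Office C": {"monitoring": 3, "advocacy": 4, "supply": 1},
}

def suggest_peer_assists(scores):
    """For each office, find its weakest criterion and the office
    strongest on that criterion, and suggest a learning pairing."""
    pairings = []
    for office, ratings in scores.items():
        need = min(ratings, key=ratings.get)           # weakest area
        peers = {o: r[need] for o, r in scores.items() if o != office}
        helper = max(peers, key=peers.get)             # strongest peer
        if peers[helper] > ratings[need]:              # only pair if the peer is stronger
            pairings.append((office, need, helper))
    return pairings

for office, topic, helper in suggest_peer_assists(scores):
    print(f"{office} could learn about '{topic}' from {helper}")
```

In practice the pairing would of course be negotiated rather than computed, but the logic – match each office’s self-declared gap with another office’s self-declared strength – is the essence of the method.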
I mentioned in a previous blog post the “UN Transformation Network”, an informal community of like-minded UN employees and consultants whose aim is to connect people and have them learn from and support one another in transformational change. A major activity of this network this year has been the Developing Transformative Leaders Course, which has been both designed and delivered through the network and in which I’m a participant and part of the organizing team.
We’ve had a lot of interesting sessions on leadership and innovation as part of the course, which has been running over the past six months, some of which I’ve reflected on in this blog. But I wanted to share the outcome of an insightful and very practical knowledge sharing exercise from the last session, where we looked at leading without authority. In the UN we’re often called upon to manage inter-agency task forces or cross-departmental groups that bring together quite different interests, are often voluntary in nature, and in which the leader or coordinator has no formal authority over the group and relies on good will and skills of facilitation, engagement and negotiation to keep things moving.
Below is a write-up of some of the useful tips from the discussion, put together by Patrick McNamara, the lead facilitator of the course, and Sam Martell, Political Affairs Officer in the Department of Political Affairs at the United Nations and a fellow course participant. I hope you find them useful.
In the Transformative UN Leaders course last week, we explored what works when leading a multi-agency task force. The group came up with insights on delegating, influencing others and getting results when one might not have direct authority to demand results.
We had a rich discussion of possible solutions that included carrots and sticks ranging from “name and shame” to ensuring recognition for individual and team achievement: “you are a star – I will let others know.” We also explored the unique cultural aspects of leading in the UN context and how to create support to achieve success. Here is the case:
“You are leading a multi-agency task force with 20 colleagues from 11 agencies. They are, for the most part, there on a voluntary basis. You have a deliverable required in your key results that can only be accomplished by the task force collaborating. What strategies will you use to influence the task force members in order to achieve the objectives on time? “
Using a collaborative problem-solving technique (small-group and large-group dialogue) we came up with these possible solutions and thought you might find them useful.
(UN) CULTURAL CONSIDERATIONS
• In the UN, there are many soft controls and few hard controls, so it can be more effective to use “carrots” rather than “sticks” to motivate
• Understand the organizational culture and drivers of the work
• Think about what works on you and how it might apply to others
• Consider which lever (soft or hard) is the right one to pull, and when
• Do not make it burdensome to participate/ pay attention to workload and share in the successes
• Not everyone will be fully committed or deliver the same level of contribution – just deal with it
• Make sure aspirations are in line with commitments and ability to commit (avoid the scenario where everyone shares bold ideas but no one is ready to take ownership to implement them)
• Understand the cultural background of individual group members.
• Ask participants: Why are you here? What do you want to contribute?
• Create a co-chair position
• Create a smaller core group to drive the deliverables
• Delegate, set targets, create peer pressure
• Hold each other accountable (name and shame, “I hold your office accountable”)
• Ask for a commitment
• Align responsibility with tasks and functions (and workload) to minimize burden
• Ensure accountability for responsibilities
- Transparency (who is involved, who is accountable)
- If tasks are not being accomplished, have the agency choose someone else
- Ensure there are consequences
• Build individual relationships and commitment – then check in publicly at the next meeting to ensure each step is completed, or track progress one-to-one in person
• Create an atmosphere where people want to be there
• Build trust relationships “you are a star” and let others know about people’s contributions
• Share recognition for team achievement
• Play to the strengths (and interests) of individuals in assigning tasks
• Publicly and repeatedly recognize / thank people
• Thank participants’ bosses and organizations
• Get them to do stuff they’re good at and care about
• Working in voluntary teams needs to be more collaborative so you need to listen/ respect their work and inputs/ ensure they are maintaining ownership more than in regular teams
• Search for like-minded people to create support
• Surround yourself with successful people
• Find an external actor to exert pressure on the group so they commit to the deliverable
• Create space for the project (and for it to move forward); a top-level champion can help
In the UN we are used to setting big, audacious goals to change the world, whether it be halving child mortality, eradicating extreme poverty or empowering the poor to have a say in how their government is run.
At the same time, by ourselves we have limited means to achieve these bold goals, so we rely heavily on our power to convene and to persuade others to do what is needed. The problem is compounded by the fact that while for some of these problems we have a fairly clear idea of who needs to do what and how, for many of them, even when we have ideas and some evidence, there is no blueprint for success. (Think, for example, of the now much-discussed idea of reducing inequality – there is a growing consensus around its importance, but we don’t even agree on how to measure it, let alone what approach we should take to achieving it.)
But without challenging goals you have no sense of direction and no way of knowing if you are on the right track; and if your goals are too modest and simple, then you are probably not trying hard enough at what we are ultimately here to do: make the world a better place.
Let’s drill down a bit into how we work in development agencies to try to make these goals a reality. Given the importance of the goals themselves and the amount of money and effort required to achieve them, there is an increasing focus on “managing for results”. This is both understandable and, for the most part, welcome. If we are to make the case for aid, we need to be able to prove whether or not it works (or, more likely, when and where). And if we are to ensure that our projects are being managed well and that all partners are accountable for delivering their part of a complex puzzle, then we need systematic tools to monitor how we are doing – both to report on whether we are on track and spending money wisely, and to flag problems and make course corrections when needed.
There are also a number of critiques of the results focus and of Results-Based Management, some of which I’ve aired before on this blog. But there is a particular challenge I’ve seen time and again in aid work that isn’t a flaw in the approach itself, but rather in how we apply it.
When we develop our results chains or log frames for a project, we invariably end up with a workplan of discrete activities with budgets and responsibilities assigned to them. We usually have some type of monitoring framework with indicators and baselines to accompany it, perhaps including specific research, evaluation or data collection tools to keep it up to date. If we’ve done a good job, our plans will also identify the assumptions that need to hold for the activities to deliver the outcomes we are expecting – or, if we are getting fancy, we might even have an articulated “theory of change” that more clearly explains the link between the activities and the desired outcomes.
So far so good. But then we get to execution.
In many, many projects I’ve seen, the focus of monitoring shifts quickly to implementation – have we carried out our activities as planned? Have we spent our budget? And, we hope, did the activities deliver the outputs we were expecting? But once we are deep in the day-to-day management (and monitoring) of execution we tend to forget about the end goal. We start to care more about whether we delivered our training workshop and spent our budget than whether we actually built capacity, or whether that capacity is performing the role we originally intended.
If we are then asked whether our project is successful, we can confidently assert that it is, because we carried out all our activities, spent all our budget and have something visible to show for it. But in doing so we often fail to cross-check our outputs against the desired outcomes and impact. And if there is a gap between outcomes and where we expected to be, we often don’t focus enough on understanding why – in particular, checking whether our assumptions and theory of change were correct, or whether circumstances have changed so that what seemed right at the beginning no longer holds true.
Looking at why our well-executed activities didn’t lead to our desired outcomes is difficult, which is why we do it less than we should. In particular, it’s easy to hide behind the assumptions – particularly those of the type “This assumes that the [name external partner] will effectively carry out complementary activity [X] and provide additional financing [$Y]”. But far from being able to blame lack of success on others not doing their part, reducing uncertainties around external assumptions in the logframe should be considered a key success factor for a project and something to be regularly monitored. In reality the path to success is rarely linear: we can’t be sure our theory of change is correct or doesn’t need to be adapted to context, and we can’t be sure that circumstances won’t intervene that require us to change tack.
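One concrete way to keep this check in view is to flag, within the monitoring data itself, any link in the results chain where the output was fully delivered but the associated outcome indicator is off-track – these are exactly the cases where the theory of change or its assumptions need revisiting. A minimal sketch of that check, where all the outputs, indicators and figures are invented for illustration:

```python
# Sketch: flag results-chain links where the output was delivered as
# planned but the linked outcome indicator missed its target -- the
# cases where assumptions and the theory of change most need review.
# All outputs, indicators and numbers are invented for illustration.

results_chain = [
    {"output": "500 health workers trained", "delivered": True,
     "outcome": "facility immunization coverage", "target": 0.90, "actual": 0.62},
    {"output": "Cold-chain equipment installed", "delivered": True,
     "outcome": "vaccine availability at facilities", "target": 0.95, "actual": 0.93},
    {"output": "Community outreach sessions held", "delivered": False,
     "outcome": "demand for services", "target": 0.75, "actual": 0.50},
]

def flag_assumption_gaps(chain, tolerance=0.1):
    """Return outcome names where the output was delivered but the
    outcome falls short of target by more than the given fraction."""
    flagged = []
    for link in chain:
        shortfall = (link["target"] - link["actual"]) / link["target"]
        if link["delivered"] and shortfall > tolerance:
            flagged.append(link["outcome"])
    return flagged

print(flag_assumption_gaps(results_chain))
```

The point of the sketch is not the arithmetic, which is trivial, but the routine: delivered-but-off-track links should automatically prompt the “why?” conversation rather than being counted as successes.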
A couple of practices from audit and evaluation that are intended to foster systematic learning and improvement inadvertently contribute to this. In both audits and, increasingly, in evaluations, there is a requirement to develop and implement a “management response” outlining how the project or office under review will act on the recommendations. This seems eminently sensible, as it holds managers accountable for reading, considering and acting on the findings of an external review. But the downside (and I’m basing this on several experiences) is that the response is usually a list of actions to implement, and the measure of success is whether they are adequately implemented – not whether they actually solved the shortcomings that the audit or evaluation identified. In other words, they fall into the precise trap that carrying out an external evaluation is designed to avoid.
So what to do about this? We need to find ways to shift our internal accountability mechanisms away from monitoring and rewarding the implementation of activities, the spending of resources, or even the delivery of outputs, and towards contribution to outcomes and impact. To help achieve this we also need to focus more on developing and challenging our assumptions and theories of change, and on designing projects to minimize the external factors that put the delivery of results at risk. Better still, we can build our programmes to be more adaptive to changes in external influences we have little ability to control – something we can only do if we are not too tied to rewarding unthinking but efficient delivery of our existing workplans.
At a basic level, what is called for is keeping a focus on the end goals we are trying to achieve, even when we are bogged down in the minutiae of delivery – or at least raising our heads above the fray often enough to ask whether our execution still makes sense in the context of where we want to go and where we are right now.
I was meeting with a KM team from another UN agency a couple of days ago when the conversation turned to two interesting and related questions:
1. What is the relationship between knowledge management and monitoring and evaluation?
2. To what extent should the focus of knowledge management be about improving the use of academic or scientific knowledge in development work?
Many organizations, especially in the UN, are linking monitoring and evaluation with knowledge management, both in terms of content and in terms of organizational structure. This is both an opportunity and a challenge. On the one hand, information from monitoring and evaluation processes is a critical input to knowledge management processes, and similarly, knowledge management tools and techniques can help support better monitoring and evaluation.

At the same time, there are subtle differences between these approaches (see my past blog post comparing KM and evaluation for more details). A major difference is that monitoring and evaluation has a greater focus on accountability – is the project on track, were the outputs delivered, was the money spent well, did the project have the desired impact? Knowledge management focuses more on learning and reflection, and on how to share what is learned with other projects. These are in fact very complementary, and they require some overlapping skills, such as the ability to collect, analyze, summarize and interpret data – but knowledge management practitioners put a greater emphasis on “soft” skills such as understanding human psychology and group dynamics, networking, and both interpersonal and mass communication.

Another challenge is that if the KM team is seen by staff across the organization as part of the “results” or performance monitoring team, people may be less likely to trust them with their stories of failures and setbacks, in case these are “used against them” – yet a large part of learning requires candid reflection on positive and negative experiences in a safe environment.
In practical terms, having M&E and KM sit and work together can be very effective in building a more complete picture of organizational knowledge and how to leverage it, but this only works if the KM people are allowed to function like KM people, and their different role and skill set is respected rather than treated as just part of an overall monitoring and accountability system. Similarly, KM people need to work closely with human resources, communications and technology teams across any organization, so the structure and working methods need to allow for that. (I hope to write a future blog post about the pluses and minuses of locating the KM function in different parts of an organizational structure – communications, staff development, IT, executive office, programmes etc. Unsurprisingly, there is no “best” approach, as each has its advantages and disadvantages.)
Turning to the second question – an increasing refrain in aid agency strategic plans, and government plans for that matter, is that they need to be more “evidence-based”. On one level this is a no-brainer – if you have knowledge about a problem, how it is caused and what strategies are effective in tackling it, then why wouldn’t you use it?
A more interesting question is why available evidence isn’t being used, or what limitations, if any, there are on applying what research can determine to problems in the real world.
Research and the scientific method are very powerful, and woefully underused, in identifying what types of technical interventions work well in development and under what conditions. It makes sense to use experimental research techniques to determine which interventions are most effective, and to tweak their design to improve their efficiency – both at a general level and at a local level, to adapt them to context.
But in every project there is a certain amount of “knowledge” needed for successful implementation that isn’t easily measured through a scientific approach and can’t be applied in a standardized fashion across different contexts (see an early blog post of mine, “The truth is out there, or maybe not”, for more discussion on the limits of applicable scientific knowledge). The most important of these are politics, culture and personalities (i.e. the actual people involved in implementing the project or who are critical to its success). Dealing effectively with these issues requires a combination of local knowledge, experience, qualitative research and insights, peer assistance and advice, and flexible adaptation. This is where knowledge management techniques such as communities of practice, peer-assists, after-action reports, and even lessons-learned databases and expert rosters come into their own. Similarly, innovation tools such as human-centred design and rapid prototyping can be put to use on the aspects of project design and implementation that don’t readily lend themselves to rigorous research, or for which standardized approaches can’t easily be designed.
Again, soft KM techniques, personal judgment and expertise, and hard science should not be seen as competing approaches but as complementary ones. The challenge is figuring out how to combine them effectively – especially which approach to use when, and what to do if they generate seemingly different conclusions. For me, this is an area we could do with thinking more about. Yes to doing more and better research on development approaches, and yes to putting the conclusions in the hands of decision makers and persuading them to use them in their decisions. But at the same time we also need to think more about how to tackle the soft side of the “science of delivery”: how we adapt approaches to make them successful in a local context, taking account of politics and power as well as culture and social norms; how we manage the people side of a project effectively; and how we continually adapt our programmes to the changing situation on the ground, while ensuring that we are constantly learning from each new experience and incorporating that learning into our future programmes. This is the “art-meets-science” of delivery, where we still have a lot to learn.