Some Perils of Public Sector Planning
I’ve just mostly finished a major strategic planning exercise, which got me thinking back over the many, many planning processes I’ve been involved in during my career. I thought I’d share a few of the recurring challenges in how we develop and use our plans in large public sector organizations such as the UN (I also saw these in DFID and the EU when I worked there, but that was a loooong time ago so I’m sure they’ve fixed it all by now).
1. Over-ambition – in most areas of our work it’s easy to see lots of things that need to be, or could be, done, especially when you are new and eager to please. In the context of trying to “do more with less”, it’s especially tempting to add to your plan everything you know needs doing, everything you want to do, and everything suggested to you (like my own workplan for my first year in my current job). One way to avoid this is to realistically map out how much work effort each item requires, and to prioritize carefully, picking the things likely to have the biggest impact for the time and money invested.
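To make that kind of prioritization concrete, here is a toy sketch, not a real planning tool – the activities, impact scores and effort estimates are all invented for illustration:

```python
# Toy prioritization: rank candidate activities by estimated impact
# per unit of effort, then keep only what fits the effort budget.
# All names and figures below are invented examples.
activities = [
    # (name, estimated impact score 1-10, estimated effort in person-days)
    ("Revise country guidance note", 8, 20),
    ("Launch new internal newsletter", 3, 15),
    ("Train focal points on reporting", 7, 10),
    ("Build a dashboard nobody asked for", 4, 40),
]

# Sort by impact-per-effort ratio, highest first.
ranked = sorted(activities, key=lambda a: a[1] / a[2], reverse=True)

budget_days = 35  # the total effort you can realistically commit
plan, used = [], 0
for name, impact, effort in ranked:
    if used + effort <= budget_days:
        plan.append(name)
        used += effort

print(plan)  # the short list that actually fits the effort budget
print(used)  # effort committed, leaving the rest of the budget free
```

Even a crude scoring exercise like this forces the useful conversation: what do we think each item is worth, what will it really cost, and what gets cut when everything doesn’t fit.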
2. Underestimating the time it takes to get stuff done – related to the above, things just take longer than you think they will. Beyond the actual working time something requires, a major reason things overrun is “waiting time”, i.e. the time you spend waiting for other people or processes. The main culprits are: i) the time taken to get the feedback and comments your work needs at various stages; ii) procurement and hiring processes, which always take far longer than you think they should and often hit unforeseen complications; iii) personal calendars, when you have to wait because a key person is on leave, off sick or travelling; and iv) approvals and sign-offs, where things get stuck on some busy higher-up’s desk.
3. Undervaluing the contributions of others – your work often requires inputs from others. If these are financial or material inputs they might already be in someone else’s plan, but time contributions – commenting and technical inputs – are often not budgeted, or even communicated in advance to the people you need them from. If you expect others to spend significant time providing inputs or support to your workplan, it’s important to let them know so they can plan for it, and to include this in your assumptions and risks, since your priority might not be their priority. (It’s also good to ask others what they are assuming you will do for them that you are not aware of.)
4. Planning yourself into a straitjacket – complicated projects might require detailed timelines and budgets, together with flowcharts of activities and dependencies, but it’s worth being careful about what exactly you will be held accountable for, and how far you want to commit to specifics with others, including managers, donors, executive boards etc. It’s fine to have a detailed plan to keep yourself on track, but it’s good to avoid being micromanaged by others. Especially if your detailed workplan is built on some guesswork and subject to extensive change based on experience, it’s good to have enough leeway that you don’t have to go back for approval for every small change to the plan. This also applies to budgets – it’s useful to avoid needing approval for every small change within an overall envelope, especially where actual costs depend heavily on items with a lot of potential variation, such as specialized consultants and expertise. Getting budget changes approved, especially if they need to go back to donors, can cause large delays.
5. Poorly thought-out indicators – it’s a good thing that we are now required to include indicators in our plans, but I’ve seen a lot of poorly thought-out ones. Everyone knows indicators should be “SMART”, but there is a tendency to choose: i) indicators that are really just measures of inputs or activities (e.g. report produced, money spent) and don’t really capture what you are trying to achieve; ii) indicators at too high a level of impact, which can’t be measured easily or won’t show results within the timeframe of the project; or iii) indicators which sound like a good assessment of progress (like “number of national plans that fully follow gender mainstreaming standards”) but for which there is no existing or easily created method of regular monitoring – so either they won’t get reported, or measuring them becomes a one-off exercise with additional costs of its own.
6. Focus on the activities rather than the purpose – a workplan with specific activities is good for ensuring we know what to do and whether we are on track, but we also need to review our activities against our intended outputs and outcomes, to see whether we are still likely to achieve them or whether the activities need to change to reach our desired goals. Once we have developed a plan, we often focus our monitoring on whether we followed it and are on schedule, rather than on whether we are making progress towards where we want to be in terms of outputs and outcomes, and whether the activities are still actually appropriate.
7. Developing a plan to look nice on the shelf – sometimes plans are developed in response to demands from others rather than as something we actually intend to use ourselves to help us do our work, e.g. because a donor or our board has asked for one, or to secure a funding allocation. If you are going to take the time to develop a plan, make sure you can and will actually use it yourself to manage better, rather than running parallel processes to monitor yourself and to report to others.
8. Not leaving space for the unexpected – we often plan 100% of our time, not only failing to account for delays but also leaving no room to deal with unexpected developments, new opportunities, demands or emergencies. In reality there are always unexpected developments that need an urgent response or that offer opportunities to advance our work. Not allowing space for these means either delays to our plans as we divert time to emerging issues, or lost opportunities and poor responses when we fail to address new developments because they weren’t in our plan.
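The arithmetic behind this is trivial but worth making explicit. A toy sketch, with all figures invented for illustration:

```python
# Toy buffer calculation: if you hold back a share of your time for the
# unexpected, how much can you actually commit to planned work?
# All figures below are invented examples.
working_days = 220    # rough working days in a year
buffer_share = 0.20   # reserve ~20% for emergencies and new demands

plannable = round(working_days * (1 - buffer_share))
committed = 205       # days already promised in the draft plan

print(plannable)              # days you can safely commit in advance
print(committed - plannable)  # overcommitment that needs cutting back
```

Seeing the overcommitment as a number makes the trade-off harder to ignore than a vague intention to “leave some slack”.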
9. Not planning for failure (or success) – we often plan assuming that our pilot projects or new initiatives will succeed, but they will not always work. We need to plan in time for reflection and revision of our plans, and even ways to drop or scale back unsuccessful initiatives. Similarly, if an approach is very successful there might be an opportunity, and even demand, to scale it up – but if our resources are already 100% committed we won’t have the capacity to scale up or hand over. And if we don’t think about what might happen if something succeeds, we might design a pilot in a way that is not easily scaled, e.g. one that relies on non-scalable factors such as key personalities.
10. Or maybe we are just using the wrong planning tools in general – much of our work is about trying to influence complex adaptive systems, and maybe some of our current linearly-focused tools just aren’t up to the job. That could be the subject of a whole other blog post, but a couple of references on this are i) a previous blog I wrote, “Who’s afraid of complexity in aid?”, and ii) Duncan Green’s latest blog, “What to do when you don’t know what’s going to happen?”
I’m sure there are many other pitfalls that I’ve missed. What are your lessons learned?