It ain’t what you do (it’s the way that you do it)
A KM colleague from another organization recently asked for my advice on setting up a “strategic information system” to help better monitor the organization’s work, as this was a priority for the head of the organization.
In the aid world, where there is increasing emphasis on measuring and demonstrating results, this kind of system is increasingly popular. And indeed, who wouldn’t want some kind of dashboard that lets you look across the organization and see how you are doing and whether or not you are on track?
The typical dashboard will let you track key external statistics (such as poverty levels, child mortality, etc.) on maps and then overlay these with information on projects, project spending, and project results (often outputs such as the number of wells dug, trainings done, or supplies delivered, but sometimes, if you are lucky, outcomes).
BUT these dashboards often miss something important: it isn’t just what you do that gets results.
A couple of years ago, a very interesting study of UNICEF programme performance looked at information and knowledge management as one of the strategic capabilities the organization needed in order to manage its programmes well. It identified three basic types of information/knowledge as critical:
1. Knowledge about the situation (in this case the situation of children and women). This can be data about the current situation, ideally disaggregated by sex, region, age, and other key characteristics. But it can also be information about underlying causes, for example how attitudes and values affect women’s empowerment, or the latest knowledge on the epidemiology of a disease.
2. “Know-how” on how to address the situation. This can be technical knowledge, such as how to manage the logistics of a cold chain, which targeting schemes for cash transfers are most efficient at reaching the poorest, or which kinds of incentives work best for keeping children in school. But it can also be more tacit know-how, such as how to persuade skeptical governments or politicians to try out a new approach, how to win over local community leaders, how to deal with unexpected security problems, or how to seize unexpected opportunities to advance your programmes. It also includes those little things that experienced people know how to do to get things done, but which don’t usually appear in the scientific literature or in the programme guidance.
3. Knowledge about organizational performance, i.e. how efficiently (and, if measurable, how effectively) the programmes are being implemented. This can include whether the planned outputs have been delivered, whether the budget has been spent as planned, or indicators of how well the office is managed (e.g. when the work plan was signed, how long it takes to fill vacancies, how many outstanding audit observations there are).
And as you can now see, the big gap in many strategic information systems is that they leave out the middle type – the “know-how”. They assume, in one way or another, that if you monitor the situation and monitor the plan, you will know how well you are doing and be able to correct things when they are off track.
One might argue that if a programme plan is developed from a sound, evidence-based problem analysis and clearly articulated in some form, such as a logframe or a theory of change, then that largely takes care of the “know-how”. Some programme approaches are even built on very elaborate standard systems models that look in detail at barriers, bottlenecks, or causal chains which can be closely monitored and which contain an implied understanding of the whole system in which the programme operates.
But this assumption has several problems: i) the articulated theory of change might be incorrect, so you might be efficiently implementing an ineffective programme; ii) the situation is changing, so an analysis that is valid now might no longer hold partway through implementation; and, related to both, iii) the project is usually part of a larger complex system in which the relationships between the interconnecting parts are not fully predictable, so while the likely immediate impact of an intervention might be known, its knock-on effects, positive or negative, are not. For example, increased media communication about the benefits of vaccination might lead more parents to bring their children to health clinics, but it might also provoke a broader backlash against foreigners telling people what to do, which could affect the programme in the long term.
A larger objection is that all projects, no matter how well modelled, are run by people with different technical skills but also different personalities, and the interactions between the various actors can be as important as the technical roles they play. Few models take account of this; most assume that success rests largely on technical competence in implementing a particular approach.
The challenge with adding the critical know-how component to strategic information systems is that “know-how” is much harder to map and monitor. But a few things can be done:
1. Introduce some element of tracking and reviewing the effectiveness of interventions into programme monitoring, i.e. try to spot when a programme that is running efficiently and following the plan is nevertheless not achieving the desired impact. This can be done episodically, such as through evaluations, but finding ways to track it through regularly monitored indicators is also important.
2. Ensure that “know-how” systems such as communities of practice, the capturing and sharing of lessons learned, or tools such as peer assists are in place so that programmes have access to the kind of know-how they need.
3. Make programme implementation and monitoring sufficiently flexible that when programmes go “off-track” it is possible not only to fix efficiency problems but also to modify or change the approach, or even the overall programme objectives themselves – and on an ongoing basis, not only after three years at a mid-term review.