The truth is out there (or maybe not)
In the organization where I work, like in many other development organizations, there has been a lot of push over the past few years on “evidence-based” policies and programmes. So when I tell people I work on Knowledge Management, they often imagine that I’m working on strengthening academic research, or on building massive all-encompassing databases full of peer-reviewed scientific knowledge.
Although I do work on some databases, this isn't what I actually do most of the time, nor, despite what some of my professional colleagues think, is it what I think we should be doing.
Development is a complex business; if it weren't, we would have gotten further along in solving the world's problems by now. One common reason cited for why we haven't done better is that we don't have enough data, and we don't have enough evidence.
A number of remedies are commonly proposed to help address this:
1. Collect more statistical data – more surveys, more administrative data collection. More recently we have started to say that we need more real-time data collection.
2. More research – more academic studies, more randomized controlled trials, more papers published, papers published more quickly.
3. More evaluation – we need to more systematically evaluate more of our programmes to understand what worked, what didn't, and what lessons we can learn. We need to use better evaluation techniques.
4. More, bigger and more open databases – we often acknowledge that a lot of research has already been done or data collected, but that it is not easily available as it is stuck behind paywalls, fragmented and not well disseminated or easily searchable. To address this we strive to make big well-organized mega-databases that are the preeminent knowledge sources on their particular topic, and advocate for more free access to data and research.
Guess what – I actually agree that all these things are worthwhile. I mean, how could I not? BUT – too many people seem to believe that if we keep collecting more and more data, doing more and more research and evaluations, and making more and more comprehensive databases, then we will have everything we need to do evidence-based development work. Basically, if we look hard enough, the truth is out there…
There are a couple of reasons why I don’t agree with this:
1. There are limits to how much evidence you can collect
2. There are other important dimensions to knowledge that are actionable, yet tend to get overlooked when we take too strong a focus on “evidence”
Firstly, the limits of what knowledge you can collect. In developing country contexts in particular, high-quality, timely and relevant data can be difficult and expensive to come by. Existing data collection systems are often weak, and while they can be strengthened, there are still limits: marginalized populations can be hard to reach, and the cost of extending surveys to the point where they can answer many of our development policy questions is considerable.
Similarly for research: there are a large number of questions we would like to address, but the availability of data, costs, time and the limitations of research methods themselves mean that many of them cannot be answered quickly enough to feed into the development of policies and programmes.
Evaluation is also limited: it can be very costly, yet may only tell you part of what you need to know about whether a programme was effective and why.
One particular challenge for any knowledge-related work is generalizability. To what extent can the results of a study or evaluation be generalized to other contexts and other timeframes, and how much do they tell you about what you should do and what will work in the future?
Another important limitation of “evidence” is that even when it exists and is fairly clear (which, for the reasons stated above, frequently isn't the case), it often isn't sufficient to motivate policy makers, politicians, families etc. to take action. Any findings or recommendations also need to be contextualized to the local culture and to the power relations of the situation where you are trying to use the evidence. People often choose to interpret evidence in ways that support their current beliefs, won't necessarily use peer review in a reputable journal as their benchmark for whether to trust a source, and may not accept advice they don't like if they perceive it could weaken their current influence or power.
None of this means that data, research and evaluation aren’t needed. But it does mean that they are not enough. So what’s missing?
An important aspect of knowledge transfer and change is personal relationships. Most people don't have the time or the skills to examine all the available evidence first hand, so they rely on the opinions of others whom they trust. Similarly, standard methods for collecting, storing and disseminating research often have little impact: people are too busy to seek out the evidence they need, or even to develop the skills to do so. Again, people frequently ask others rather than access the evidence directly themselves.
Also, there is a whole range of knowledge that isn't captured by research: that of personal experience. Often you can understand a situation, and describe it to share it with others, but you can't back it up with scientific research (a trivial example is that I'm pretty sure I know the quickest way to walk to the station in the morning, but I have neither measured it nor timed it). It might be that it would be too expensive and difficult to prove through research, or that by the time you knew the answer, it would already be too late. Some knowledge is in the form of skills or even instinct, which doesn't easily lend itself to being formally captured at all. This type of knowledge is known in the business as tacit knowledge, as opposed to explicit knowledge. Here is a handy diagram that explains the difference between the two (link to original).
So, in order to take advantage of the part of knowledge that lies below the surface (the part which isn't “evidence” in the formal sense), you need to take other approaches. These can involve using tools to capture some of what is currently hidden and make it more shareable (tools such as after-action reports, end-of-assignment reports, self-reflection exercises, lessons learned, storytelling etc.), and approaches that make it easier for people with shared knowledge interests to find each other, trust each other, share with each other and collaborate (such as knowledge fairs, communities of practice, social networking and co-creation).
In fact, I find that the most interesting and promising work I do in the area of knowledge management is not about evidence at all, but about the social dimension of knowledge. What I need to do is make a better case for this with my colleagues – but then I'm sure they are going to ask me to show them the evidence!