KM on a dollar a day

Musing on knowledge management, aid and development with limited resources

Not all knowledge is evidence, not all good advocacy is evidence based


I’ve got a few blog posts in the hopper right now – but while I finish them, I thought I’d share something I wrote for an internal blog about a year ago, relating to our fixation on making our advocacy “evidence-based”. I’d still stand by this today. The theme will be familiar to regular readers of this blog… let me know what you think.


I’ve been in a number of global meetings lately where the issue of being more “evidence-based” has come up. I’m greatly encouraged by the strong emphasis on making better use of evidence in our programmes and our advocacy, and by the recognition that in order to do that we also need to get better at generating evidence, acquiring it and sharing it, both internally and externally.

I’ve also heard quite a few misconceptions about what evidence is, how and when it should be used, and how far we can rely on it to inform our work.

Evidence is not the same thing as knowledge – Evidence is usually taken to mean “hard”, demonstrable, measurable things. It comes from direct observation, surveys, experiments, evaluations and the like. Evidence is crucial to advancing scientific learning, and also, on an everyday level, to knowing how things are going, such as through programme monitoring. Knowledge (i.e. what we know) is internalized learning – in this sense we only know something demonstrated by evidence once we have internalized it, i.e. we “believe it”. Similarly, there are things we know (and act on) for which we don’t have strong evidence – often this knowledge comes from learning and direct experience, even if it is not documented or measured. Much important learning is never documented as evidence – that’s why we often ask for someone else’s advice: someone who “knows”, someone who has done it before.

Not all good advocacy is “evidence-based” – Evidence-based advocacy has been interpreted by some to mean advocacy that uses data, charts, report citations etc. to show the strength of the evidence on which a particular argument is based. However, it’s probably fair to say we all know people who are unimpressed by numbers, so even if the argument is made more concrete for some audiences by using them, for others this will be a poor method of persuasion. A weaker definition of evidence-based advocacy would be that the argument we are using to persuade is informed and supported by available evidence, and is not contradicted by it – but that the evidence itself is only deployed if that helps make the case with the particular audience. I sometimes jokingly refer to this as “evidence-supported” advocacy. It’s also worth mentioning that part of effective advocacy is understanding and taking into account the interests, needs and prejudices of the person you are trying to persuade – issues such as the political situation in country, a person’s background etc. In this case you might well stress certain pieces of evidence that appeal to the audience and downplay or even omit others. Possibly your whole appeal might be made at an emotional level, or about values and ideals, rather than about evidence at all (e.g. all children ought to have a right to free education – because it’s the “right” thing to do). This isn’t evidence-based advocacy – but it might be good advocacy. What I think we should not do is advocate for things which are contradicted by available evidence – or where we don’t have some grounding either in evidence or in principle (e.g. in human rights principles).

Evidence does not equal truth – An obvious point, but evidence is based on fixed observations that are often partial, and new evidence emerges all the time, often contradicting or muddying the conclusions we drew from past evidence. Just because we have evidence for a particular model or theory doesn’t make it true. We also need to be aware of personal biases in interpreting evidence – in particular, people tend to interpret evidence in ways that support their existing way of thinking.

So what does this mean for our work?

1. When we talk about strengthening knowledge management we shouldn’t only be talking about strengthening our evidence base – we also need to talk about learning and the tacit knowledge that rests in people’s experience.

2. When we talk about strengthening advocacy we shouldn’t only be talking about evidence-based advocacy – but rather about strengthening advocacy as a whole, including more appropriate and effective use of evidence within it.

3. We need to keep an open mind – what we know as an “evidence-based” approach today may well be contra-indicated in the future. We therefore need to keep learning and be prepared to change our minds if the situation warrants it.

Any comments?


Written by Ian Thorpe

March 1, 2011 at 10:05 am

6 Responses


  1. Amen. Psychology tells us that when making decisions, people use “objective” information secondarily to build a rationale for what is felt at an instinctual, emotional level. If we ignore this, much will continue to be lost in the abstraction and over-intellectualization of aid work.

    Kudos to all those who are challenging the dominance of quantitative statistical information as the sole, authoritative source of knowledge in our sector. Indeed, let’s embrace much richer ways of thinking about this paradox of development.

    Jennifer Lentfer

    March 1, 2011 at 12:22 pm



  4. Hi Ian, My role is to work alongside southern partners and program staff to advise on strategy and, where useful, bring best practice and training on advocacy. While I agree with you generally, I think in many contexts the move from “shouting to counting” has been useful. That said, what you count and how you present it will always depend on the context. Also, the way you collect evidence can increase or decrease its impact.

    I also notice that we can fall into the trap of promoting a kind of inductive approach to evidence gathering – i.e. “something is wrong here, let’s go get proof”. When in the end, maybe there is a solution already out there that needs to be studied, understood better and promoted effectively. I am not sure how open “development programming” is to more emergent, searching processes.

    I am interested in sharing the tacit knowledge and wealth of experience between southern partners. We need to really celebrate, and not take for granted, the way people organize – the “social technology” behind influencing. Storytelling across contexts can spark innovation in ways that formal approaches simply cannot. Also, warnings about potential emerging risks and traps are most effective not when they come from “us” but when shared between people on the frontline.

    Our real task is thinking through what our role is in facilitating this, and how best to allocate scarce resources. That is the hard part, as your blog title suggests.

    J Gunter

    March 2, 2011 at 9:34 am

  5. Excellent take on the matter


    May 23, 2011 at 1:38 am

  6. Any other articles similar to this one?


    May 23, 2011 at 1:39 am
