Too much learning by doing?
“Action without study is fatal. Study without action is futile.” – Mary Beard
There was an interesting Twitter discussion earlier this week on #NPcons (the not-for-profit consultants tweet chat) about the relative merits of thinking and doing, and the sense that not-for-profits have a tendency to take action without taking the requisite time to think things through first.
This echoes a discussion that has been taking place in-house about whether we as an organization depend too much on “learning by doing” and don’t reflect enough on the latest thinking and developments in social science and development studies from outside the organization.
I think it’s a fair criticism that too much programme design and implementation doesn’t take into account the latest available knowledge. But it’s important to recognize that the knowledge needed in programme design takes many forms. In addition to checking whether plans are informed by the latest scientific research, we also need to look for relevant prior experience, both within the country and in similar contexts. And we need to look at the “political economy” – essentially, how far we can reconcile what we believe is technically correct with what we can convince others to do.
Perhaps the real problem is not “learning by doing” so much as doing without any learning at all. Symptoms of this would be basing what we do only on our own ideological views, past experience, and instincts, while ignoring evidence and learning from others. It would be failing to incorporate ongoing learning into what we are doing now if it doesn’t fit our views (because we already know the right thing to do and how it will turn out). It would be the idea that doing something, anything (and being seen to do it), is better than doing nothing at all.
In a sense, though, I think there is a false dichotomy between thinking and doing. In development work you can’t really separate the two, and they shouldn’t be in competition. Instead you need to find ways to integrate them so that doing is informed by thinking and thinking is informed by doing.
You should think before you do something, of course. But you also need to think about what you are doing while you are doing it, and afterwards (what worked and why, and why what worked in theory might not have turned out exactly as you expected in practice). And if you are involved in development programming, it’s not enough to study and research just because you want to know – the idea that study can be neatly separated from experience is a misleading one. To be useful, learning needs to be focussed on how to improve policy and take action. Both the design of research and the way it is written up and communicated need to take into account how it can be used not only to inform but also to influence decision making on the ground.
Once we have arrived at an evidence-informed plan, we can get into the real business of learning by doing. This means building suitable mechanisms to monitor and evaluate our programmes, and to reflect on and learn from them, into programme design and implementation. To do this we might need to design the programme, and then evaluate it, around an explicitly identified theory (or theories) of change. It also means collecting suitable baselines and using appropriate methods to determine the outcome and impact of the programme vis-à-vis other interventions or changes in the environment. It means seeking feedback from partners and beneficiaries, and it means self-reflection on the experience (including using tools such as after-action reviews). Finally, it means feeding back whatever insights were gained from the project into the broader body of knowledge of the organization you work in – or, better, the relevant field of development – so that others can use them.
This last step of feeding our experience back to others is crucial, but often overlooked. Not all programmes are rigorously evaluated or generate bulletproof data on impact, but with some effort on design and monitoring, and some reflection afterwards, they should all generate learning that can help inform others. If we did this, then we would really be learning by doing – and that would be a very good thing.