Archive for November 2010
While this blog is mainly about knowledge management, those of you who follow me on twitter will know that I’m also very passionate about improving aid effectiveness or #smartaid.
Over on Good Intents (an excellent blog for those interested in smart aid and donor education) Saundra recently asked “What is Smartaid?”
In my mind at least, smart aid has quite a lot to do with knowledge management. While this doesn’t cover everything that goes into smart aid, there are a few areas where good knowledge management can make a significant contribution to smarter aid.
If I were to summarize the link between knowledge and good aid in a single word it would be “learning”. Improving aid is as much a journey as it is a destination.
Here are a few areas where knowledge and learning are critical to improving aid:
1. Collecting evidence – before starting a programme you need to collect evidence in order to understand the problem you are trying to address and the possible approaches to tackling it.
This includes collecting data about the situation you are addressing, but also about the local context, including cultural and political factors. It means looking at existing research to help explain a situation and its underlying causes, and at evidence about the effectiveness of different approaches to tackling it. This should include not only “scientific” research, but also inputs from key beneficiaries and actors.
2. Reflection and feedback – learning lessons from our own experience. Even with all the evidence in the world, nothing judges your actions better than what happens in reality. It is important to look at your programmes while you are implementing them to see whether everything is working according to plan, whether anything could be going better, whether there are any unintended consequences, and even whether there are unexpected benefits. If so, what can you learn from this – can you make changes to improve the project, and are there general lessons you might apply in other circumstances? To be useful, this feedback process needs to be continual (not just at the end), and needs to include not only “objective data” on the situation, but also feedback from beneficiaries and partners, and self-reflection.
3. Capturing and sharing what you know with others – part of good aid is sharing what you have learned – both the successes and the failures – so others don’t have to repeat your mistakes, along with an honest account of how much or how little we know about what we did that was successful and why (touting our successful projects as models that should be immediately adopted by others isn’t helpful if we really want results). Ideally we should also synthesize and compare experiences in order to help identify those things which are more easily transferable and those which are not.
4. Something that underlies all of this is networking and collaboration. Although data, research papers and formal documents can be helpful, a lot of learning and sharing happens through formal and informal networks and through person-to-person contact, so it is important to put effort into making the people side of knowledge capture and sharing work if we really want improved aid. All aid projects are run by individuals, after all, not by reports. Similarly, making sure that knowledge is conveyed in a way that is meaningful and useful for others is as important as capturing the knowledge itself. Knowledge is only useful if it can be applied in practice, which means it needs to be relevant, contextualized, understandable and actionable by those who can use it to inform and improve their work. It’s not enough to collect knowledge “because we want to know” – we need to take the steps necessary so it is actually used.
5. We need to listen – to our data, to ourselves, to the experiences of others and most of all to our “beneficiaries”. If we collect relevant knowledge through the above steps but decide, out of our own hubris, that we don’t need to listen to what we hear (whether the “not invented here” syndrome of dismissing the good ideas of others, or the “head in the sand” syndrome of not accepting bad news), then the knowledge we have serves no purpose.
In conclusion – knowledge management and smart aid share the fundamental assumptions that we don’t know everything, that it is always possible to learn and improve on what we are currently doing, and that through better sharing, co-ordination and collaboration we can build on what has gone before. So to “do” smart aid we need to recognize our fallibility, and be continually open to looking for, trying out and adopting improvements, new ideas and new ways of working if and when they are better than the old.
My colleague Mark recently started a personal blog (which incidentally helped finally get me off the fence to start blogging too). He recently posted a piece “guidance and when to ignore it” that really resonated with me, and got me thinking about both the value and the pitfalls of regulation and guidelines and how we develop and use them. I’m referring here mainly to the world of aid and development – but I think the same ideas might have a broader application too.
Of course rules and guidelines do perform useful functions: they are good for ensuring standardization and consistency when we know the right or best way to do something – they are especially good for assembling and operating things – and for dealing with compliance issues such as legal and administrative requirements. They can also be a good way to capture and share good practice and to try to ensure it is being followed on the ground.
Rules are less good when we don’t know the best way to do something, or how the rules will be interpreted and implemented by the people they are intended for. They are also frequently unhelpful when written quickly in the aftermath of an unforeseen crisis in order to “make sure this never happens again”.
Here are some of the problems that we face with rule writing:
- The rules can’t cover every eventuality. Trying to do so is fruitless, since there will always be some exceptional circumstance we weren’t able to predict, and the more we try to map out every eventuality, the more complicated and potentially confusing the rules become.
- Do we actually know the best way to do things (even if we think we do)? Although guidance can be a good way to ensure people follow well-founded good practice, there is a risk in putting things into guidance when we don’t really have strong evidence for the best approach: at best we close off possibilities to improve procedures by not allowing variation; at worst we institute sub-standard procedures as institutional practice.
- They can stifle innovation and improvement through compliance. Related to the above point – if procedures are monitored and reported on for compliance, and reputations are influenced as much by whether people follow the rules as by whether they improve programmes, then people will not risk stepping out of the box to try something new, for fear of sanction.
- Having rules can put us on autopilot. In other words, having everything written down step by step makes it easy for us just to follow procedure, without thinking about whether what we are doing makes sense in the situation we face. Also if we follow the rules then surely we are not responsible if something goes wrong – after all we did what we were supposed to do.
- The cost of compliance can be worse than the risk we are trying to avoid. If regulations are overly onerous, or are in place to address risks that are in reality extremely unlikely, then the cost of implementing a rule (in terms of money, people’s time and lost opportunities) could far exceed the benefits. This often happens in reaction to a crisis (accounting scandals, financial crises, terrorist threats?) and, in the development world, frequently as a result of a bad audit. What can be worse is that the rules put in place in these circumstances may have more to do with being seen to do something than with actually being effective, even against the risks they are supposed to address.
- Unintended consequences. Guidelines – like anything theoretical put into practice – can have consequences not foreseen at the time they were written. People misinterpret the rules, or find ways to work around them to get their work done. The more complicated the rule, the greater the chance it will not be implemented as intended. Sometimes the response when this is discovered is to make the rule yet more detailed and complex to try to close any loopholes and misinterpretations – often this has the opposite effect from that intended.
So what can be done about this? I don’t have all the answers – but here are a few suggestions:
- Get input and feedback on new rules and guidelines from those who will be implementing them to see if they are useful, feasible and that they are understood correctly.
- Build ongoing mechanisms to provide feedback and discuss the adequacy of guidelines from the perspective of those using them.
- Don’t write manuals and guidelines if we don’t really know that what we are recommending works or is the best way – it may be better to present them as good practices, tools or guidance to help inform decision making, while being clear that they are not “the law”. Differentiate in guidance between “must dos” and “good practices to consider”.
- Whenever a new guideline is being considered, look at whether the cost imposed is merited given the risk being addressed or the potential benefit gained. Verify this in practice once the rule is in place.
- Once in a while, do a thorough review of guidance to check that it is all consistent and really necessary. Make a conscious effort to simplify, streamline and delete on a regular basis, so that the system of rules doesn’t become ever longer and more complex.
- Have a helpdesk or some other kind of ongoing means of communication to keep rule makers and rule followers in touch with each other to get feedback on how well rules are understood, how they are interpreted and how well they work in practice.
- When using manuals and guidelines we have to make sure we don’t go onto autopilot, but keep thinking about what we are doing and whether we should follow the guidance or make a reasoned decision not to. The organizations where we work need to support managers in taking this kind of accountability on themselves, and not punish them for doing the right thing in practice over the right thing in law.
What do you think?
I frequently hear about research or pundit opinions on how the younger generation (the “net generation“, “digital natives“, “millennials” etc.) think differently from people in my own generation (“Generation X”) and that this change in thinking will radically change the workplace, especially how people communicate and collaborate.
I’m happy to hear about the imminent transformation of the workplace to be more wired and more social, less hierarchical, more innovative and more knowledge/expertise based. This is of course highly desirable in my line of work in order to get people to adopt new technology tools such as social media, and in particular to change their behaviours, and to make the organizational culture more collaborative and supportive towards knowledge sharing. But I also have some qualms about how the generational argument is being used in many workplaces by senior leaders and human resources managers.
While it’s true to say that each successive generation is more familiar and comfortable with new technologies and new ways of working, on an individual basis this isn’t entirely determined by age. I’m sure you can all think of “older” people who are very collaborative and comfortable with new technologies and new ways of working – and also younger people who are not. This is in part because learning to use new tools, or working differently, is about both familiarity and attitude. It’s true that, on average, people can learn faster and are more open-minded when they are younger – but it’s also true that a large part of this is personal disposition, and that some people are naturally more conservative than others about change, whatever their age.
Grouping people by generation leads to some lazy arguments in the workplace on how to manage change, some of which although common are potentially harmful. These include:
- If you want to do innovative work you should get a young person to do it. A real quote I heard: “We should get one of those web people under-40 with ripped jeans to do this”. While “on average” younger people might be more innovative, assuming that they are just because they are young, or that older people are not capable of innovating, is potentially harmful. Turning ideas into successful innovation also requires a combination of talent, luck and skill, and at least the last of these develops with experience.
- Change will come when the current younger generation become managers: if you want to change the workplace, it’s just too hard to change the attitudes of the current crop of managers, but over time, as the younger generation reaches management positions, change will be possible, even inevitable. This is a doubly harmful conclusion: it provides an excuse for not making change now (since it’s clearly just too difficult with the current dinosaurs!), but also, who is to say that by the time the younger generation gets older they won’t have lost some of their interest in and willingness to change the status quo? As Roger Daltrey might have put it: “I hope I die before I get old – or else I might start making American Express commercials!”. After all, at some point people learn to adapt to the current system on its terms rather than trying to change the system to suit them – at least if they want to be successful in their current environment.
So, instead of making sweeping generalizations about people based on their generation, maybe it’s better to look at people as individuals, each with different aptitudes and propensities to handle and lead change, and to nurture and reward those who have the ability to make a difference, whatever their age.
If we want change now (“and if not now… when?” – as an old sage once said), what we need to do is find ways to foster innovation and collaborative behaviours in all employees, so we don’t have to wait for a new generation of leaders to start working differently.
This also means recognizing and nurturing the current innovators in our midst, not as outliers or special cases, but as exemplifying ways of working that are open and available for everyone to use and that are recognized and rewarded by organizational leaders. If many of these innovators are younger and in more junior positions then that’s great – and we have to take care to nurture rather than crush their entrepreneurial spirit, so it is honed rather than blunted by our existing bureaucracies.
Incidentally the video above was created fairly close to my birth – yes it’s that long ago ;-)
A large part of my work is on introducing communities of practice and social networking into the organization. While we have a number of enthusiastic supporters and already have a few very active communities, like many other organizations we face some internal skepticism, especially when it comes to getting the resources we need to do this well.
An important part of trying to do this well, and also to convince internal constituents is to learn from the experiences of others. Thanks to connections on Twitter with Giulio Quaggiotto (@gquaggiotto) I was able to get in touch with the World Bank and get them to share their experience with UNICEF.
A couple of weeks ago we were lucky enough to have Maggie Tunning present “Scoop”, their social networking platform, via webinar, and explain how it had been rolled out in the Bank. Giulio, who had also worked on this project, joined the webinar, although he has since left the Bank to join UNDP’s KM team. The talk was followed by a lively question and answer session with the audience.
- The platform was developed in Elgg, an open-source social networking engine, chosen as a powerful tool with a strong developer base that allowed flexible development. The development team worked closely with a group of “super-users” and made frequent small enhancements to the site based on feedback, rather than less frequent major releases. This way they could iteratively improve the tool and make sure they were responding to user needs as they evolved (noting that it can be hard to pin down exact requirements for collaboration systems, since users often don’t know what works best until they try things out).
- Staff were not forced to join or use the platform, and it was not advertised widely through the Bank’s formal communication channels; instead there was a deliberate strategy to let it spread virally, promoting it through informal means including social events. Even so, over 7,000 staff have signed up to join the platform, around 70% of them in headquarters and 30% in the field, but with more and more field members joining over time (update: over 10,000 staff have now visited Scoop, as a result of the platform being linked from the Bank’s intranet).
- The team made efforts to reach out to each new member individually (all 7,000 of them!) to get to know them and support them to use the platform.
- Users can create their own groups, and have created over 400 interest groups on both work-related topics and shared social interests. The team meets with potential user groups to advise on how Scoop might be used, and on some occasions has recommended that groups use other tools if Scoop isn’t suitable for the needs expressed. Allowing use for social as well as business groups, and advising people when not to use the tool, were important ways of improving the credibility and attractiveness of Scoop.
- The Scoop team uses awards to recognize those with particularly noteworthy contributions, and encourages and recognizes leadership from staff in the field as well as in headquarters.
- An important aim of Scoop is to speed up and broaden participation in information and knowledge sharing. It will be important to document examples of how this has worked in practice (to help demonstrate value), and the Scoop team have already begun to do this. A number of questions from our audience were on this point – in particular on concrete evidence of the improvement of programmes (although I imagine it is a bit too soon to know this – but, as always in development, there is a demand for immediate results).
- The platform does not directly link to other corporate IT platforms such as the ERP or the organization’s document management system but works as a standalone system. It was commented that this could be a limitation in terms of integrating with existing business processes, but also a strength in that it wasn’t bound by them. Such integration is now being considered.
- The Bank is currently working on a mobile version of the site for people to access via mobile devices or in low bandwidth environments.
- The Bank plans to develop a version of Scoop that will allow external partners to participate. This will also grow organically driven by needs for existing bank users to network and collaborate with people outside the Bank rather than being explicitly tied to the Bank’s partnership strategy, or being imposed upon partners.
- The timing of the initiative was important to its success coming as it did with a major commitment by top leadership in the Bank to greater openness and collaboration, and the Bank’s open data initiative, as well as a new KM strategy. That said, they did need to overcome a number of challenges including: 1) fear about security and people wasting their time online 2) legacy tools – lots of turf wars 3) the organization not being familiar with agile development 4) some doubts as a result of previous KM initiatives (quite a few of these seem familiar to us too!).
Overall there are a number of relevant lessons from the Bank’s experience that we can learn from in taking forward our own work on communities and social networking – and luckily we are already applying some of them!
I just wanted to quickly share a couple of great resources on knowledge management tools and techniques relating to development.
Firstly the Knowledge Sharing toolkit developed by FAO, CGIAR and the KM4Dev community is a terrific resource on different knowledge sharing tools and methods in wiki format. I’m pleased to share the news that UNICEF has now joined the toolkit as a sponsoring organization.
This means that we have agreed to help maintain the site, including helping to write and edit content, and supporting the great work of the developers and participants in developing and improving it. It also means that we plan to use it as one of our main tools to help support knowledge sharing in UNICEF. (Why reinvent the wheel when someone else has already done it? Instead we can help in our own small way to make a better wheel.)
The toolkit has been developed as a public wiki which means that it is open to anyone to view, and can be edited by anyone who has joined the site as a member.
As a bonus, I also just found an excellent resource recently published by IFAD and IDRC, “Knowledge Sharing Methods and Tools: A Facilitator’s Guide”, which gives detailed tips on how to facilitate different types of knowledge sharing meetings and events.
In the organization where I work, like in many other development organizations, there has been a lot of push over the past few years on “evidence-based” policies and programmes. So when I tell people I work on Knowledge Management, they often imagine that I’m working on strengthening academic research, or on building massive all-encompassing databases full of peer-reviewed scientific knowledge.
Although I am working on some databases, this isn’t what I actually do most of my time – nor, despite what some of my professional colleagues think, is it what I believe we should be doing.
Development is a complex business; if it weren’t, we would have gotten further along in solving the world’s problems by now. One common reason cited for why we haven’t done better is that we don’t have enough data and we don’t have enough evidence.
A number of remedies are commonly proposed to help address this:
1. Collect more statistical data – more surveys, more administrative data collection. More recently we have started to say that we need more real time data collection.
2. More research – more academic studies, more randomized controlled tests, more papers published, papers published more quickly.
3. More evaluation – we need to more systematically evaluate more of our programmes to understand what worked and what didn’t, and what lessons we can learn. We need to use better evaluation techniques.
4. More, bigger and more open databases – we often acknowledge that a lot of research has already been done, or data collected, but that it is not easily available: it is stuck behind paywalls, fragmented, and not well disseminated or easily searchable. To address this we strive to build big, well-organized mega-databases that are the preeminent knowledge sources on their particular topic, and advocate for freer access to data and research.
Guess what – I actually agree that all these things are worthwhile. I mean, how couldn’t I? BUT – too many people seem to believe that if we keep collecting more and more data, doing more and more research and evaluations, and making more and more comprehensive databases, then we will have everything we need to do evidence-based development work. Basically, if we look hard enough, the truth is out there…
There are a couple of reasons why I don’t agree with this:
1. There are limits to how much evidence you can collect
2. There are other important dimensions to knowledge that are actionable, yet tend to get overlooked when we take too strong a focus on “evidence”
Firstly, the limits of what knowledge you can collect. In developing country contexts in particular, it can be difficult and expensive to come by high-quality, timely and relevant data. Existing data collection systems are often weak, and while they can be strengthened, there are still limits – in terms of the accessibility of marginalized populations and the cost of extending surveys to them – to the extent to which they can provide the data needed to answer many of the development policy questions we have.
Similarly for research: there are a large number of questions we would like to address, but the availability of data, costs, time and limitations in the research methods themselves mean that many of them can’t be answered quickly enough to inform the development of policies and programmes.
Evaluation, too, is limited: it can be very costly, yet only tell you part of what you need to know about whether a programme was effective and why.
One particular challenge for any knowledge-related work is that of generalizability. To what extent can the results of a study or evaluation be generalized to other contexts and other timeframes, and how much do they tell you about what you should do and what will work in the future?
Another important limitation of “evidence” is that even when it exists and is fairly clear (which, for the reasons stated above, frequently isn’t the case), it often isn’t sufficient to motivate policy makers, politicians, families etc. to take action. Any findings or recommendations also need to be contextualized to the local culture and to the power relations of the situation where you are trying to use the evidence. People often choose to interpret evidence in a way which supports their current beliefs, are not necessarily going to use peer review in a reputable journal as their benchmark on whether to trust the source, and may not accept advice they don’t like, or that they perceive may weaken their current influence or power.
None of this means that data, research and evaluation aren’t needed. But it does mean that they are not enough. So what’s missing?
An important aspect of knowledge transfer and change is personal relationships. Most people don’t have time or the skills to examine all the available evidence first hand. This means they rely on the opinion of others whom they trust. Similarly standard methods for collecting, storing and disseminating research often have little impact with people being too busy to seek out the evidence they need, or to even develop the skills to do so. Again people frequently ask others rather than access the evidence directly themselves.
Also, there is a whole range of knowledge that isn’t captured by research: that of personal experience. Often you can understand a situation, and describe it to share it with others, but you can’t back it up with scientific research (a trivial example: I’m pretty sure I know the quickest way to walk to the station in the morning – but I have neither measured nor timed it). It might be that it would be too expensive and difficult to prove through research, or that by the time you knew the answer it would already be too late. Some knowledge is in the form of skills, or even instinct, which doesn’t easily lend itself to being formally captured at all. This type of knowledge is known in the business as tacit knowledge. Here is a handy diagramme that explains the difference between the two (link to original).
So, in order to take advantage of the part of knowledge that lies below the surface (the part which isn’t “evidence” in the formal sense), you need to take other approaches. These can involve using tools to capture some of what is currently hidden and make it more shareable (such as after action reports, end-of-assignment reports, self-reflection exercises, lessons learned and storytelling), and approaches that make it easier for people with shared knowledge interests to find each other, trust each other, share with each other and collaborate (such as knowledge fairs, communities of practice, social networking and co-creation).
In fact I find that the most interesting, and promising work I do in the area of knowledge management is not about evidence at all – but is about the social dimension to knowledge. What I need to do is make a better case for this with my colleagues – but then I’m sure they are going to ask me to show them the evidence!