Two ways to get quality knowledge
Whenever I was working on communities of practice, lessons learned, or anything relating to external knowledge sharing, I would inevitably get the same question from senior managers: "How do we ensure that what is being shared is high quality? Who is vetting it?" Even in internal communities of practice I would be asked whether we ought to screen questions to make sure they were relevant and not trivial, and responses to make sure they were sufficiently evidence-based.
I understand this concern – you don't want people to waste their time, or worse, be misled and take inappropriate action based on faulty knowledge. And at the back of many minds was also the potential risk to our reputation of putting out something that might draw criticism.
So how do you ensure that the knowledge you are sharing is of a high quality? I’m going to look at two approaches:
1. The "old-fashioned way" – i.e. all content is reviewed and edited/cleared by a small group of "experts". Their authority to review comes either from their position (e.g. review by headquarters, because it's their job to set global standards and it's assumed they know better than you, or by a department head, because he or she is "accountable") or from being recognized experts in the field of study.
This approach is widely used, but it has its limitations: most contributions will not live up to the reviewers' standards; people may be intimidated from contributing if they feel their contributions could be judged unworthy, and it's already hard enough to get people to share; material produced and cleared is likely to conform to the biases and preferences of the reviewers; the reviewers' time becomes a bottleneck; and what the reviewers are looking for might not be what the potential users actually need.
2. The way of the "Amazon" – i.e. quality is reviewed by a network of peers, either conservatively (and time-consumingly) before something is shared widely, or more quickly but more riskily after something is shared, or after a lighter quality review. The advantages here are that the reviewers are also the consumers, so they are best placed to identify what is most useful; they can add to the conversation too, making the whole process more dynamic and allowing the best-quality material to "float to the top". The challenge is that, to get there, some of the exchanges may be of poor quality or relevance yet will still have taken up people's time. This approach also requires the central power structure to give up control to the "workers", which means the exchanges will sometimes not follow the official party line.
Of course, it's not that one method is intrinsically right and the other wrong. Historically the first predominated; now the balance of opinion seems to favour the latter. But which approach is most useful depends to some extent on the type of knowledge you are dealing with (see this previous post on the difference between a community of practice and a help desk, which are two specific examples of the two approaches above, with some guidance on when each one works best). It might also depend on the type of organization you are in.
But it's also good not to think in terms of one or the other, but of how to combine both and allow them to complement each other. It's possible to have both "expert" reviews and peer/consumer reviews of lessons learned or other knowledge products – just as at moviefone.com you can read both the critics' feedback and the viewers' feedback (guess what I took the kids to see this weekend). It would be good if we could find better ways to incorporate both of these approaches in our knowledge management systems too.