Knowledge management lessons from job interviews
I’ve just completed a training course in “Competency Based Interviewing”. I took an earlier version of the training some years ago, but signed up for the latest one since the course will soon be mandatory for anyone who sits on a job panel, so it was time to brush up my skills in this area.
For those not familiar with the approach, the basic premise behind competency based interviewing is that traditional, technically focused job interviews often fail to elicit information that gives you a realistic indication of how well someone will do in a new job. This is because doing a job well can be as much about behaviour as about technical knowledge. The personal or behavioural characteristics needed to do a job well are called competencies, and for competencies it’s not enough to know them in theory: the best predictor of whether you have them is whether you can describe how you used them in the past. Examples of competencies might be interpersonal communication, negotiation skills, organizing and planning work, and so on.
So, to take the example of negotiation skills, a typical interview might ask how you negotiate or how you would go about a hypothetical negotiation, whereas a competency based interview would ask you to describe a real negotiation you were involved in, what you actually did, and what actually happened.
Another important feature of competencies is that while people naturally have different levels of them, they can change over time: they can be learned through experience and a conscious effort to improve, which usually requires self-reflection, feedback and a commitment to action. The interview therefore also looks for evidence of reflecting on what didn’t work, extracting the learning, and then applying it in later situations.
The use of competency based interviewing and testing is backed up by a fair bit of research and experience, and is the norm in many large organizations, including UNICEF (in fact it was adopted by many private sector organizations back in the 1980s).
What is a little more surprising to me is that we often struggle to apply a similar approach when looking at aid organizations, or even more specifically at programming approaches or practices. There seem to be a few key lessons we could use to assess the capabilities of teams and organizations, as well as particular approaches or practices in development. In short:
1. One of the best predictors of whether something will work is whether it worked in the past (not whether it works in theory).
2. Why it worked in practice (or didn’t) might be best analyzed not through the theory of the approach that was followed but through what actually happened and how this contributed to the success or failure. This requires self-reflection AND feedback from others.
3. An important element of improvement is looking at what didn’t work and extracting lessons from it. BUT extracting the learning isn’t enough: for a lesson to be really learned it then needs to be applied successfully in practice (otherwise it is still hypothetical learning).
4. Another element of the interview process that applies well to assessing an approach or team is observe first, assess later. That is, collect all the observations you can before you make an assessment. In job interviews first impressions can count for a lot, but they can also be misleading, so in competency based interviews you are trained to observe and record only, and then assess once all the data has been collected. This is good advice for programme assessments too.
5. Many eyes are better than one. Interviews have a panel of interviewers because each panelist may see different things and come to different conclusions, so having several people helps build a more complete picture of a candidate. Panelists should also share their data first before comparing assessments, so as not to unduly influence each other. This too is great advice for programme assessments: use multiple viewpoints, compare observations first, then draw conclusions. Of course many assessment methodologies embody this, BUT often the assessment is already creeping into the researcher’s mind before the evidence is formally analyzed, and this inevitably leads to (unconscious) bias in the assessment.
I’m sure there are many other lessons too, but these were the main take-aways for me that are useful for work on identifying and applying lessons learned.
If you are interested in learning more about competencies and competency based interviewing, here is an old guide from UNICEF (undated, but I’d probably place it around 2005). The current guide, with our updated framework and example questions, is not available online, probably for the obvious reason of not giving candidates too much of a jump on possible questions! But the old guide gives you a good sense of what the approach is about.