This post was inspired by the blog series ‘What kind of ‘data revolution’ do we need for post-2015?’ on post2015.org, and is now cross-posted there too: http://post2015.org/2013/12/10/a-bottom-up-data-revolution-for-post-2015/
The High Level Panel’s report on the post-2015 development agenda called for a “data revolution”. It’s already clear from this set of blog posts that there is both strong enthusiasm about the new possibilities of data analysis to support the implementation and monitoring of a new development agenda, and a wide range of interpretations of what this means, who the principal actors will be, and who will benefit.
Often the benefit is seen in terms of having access to better statistics, real-time monitoring and feedback, big data analysis, and open, transparent data on aid and government spending. These provide a treasure trove of data to support better monitoring and evaluation of development interventions, so that aid agencies can design better programmes, donors can allocate resources more efficiently, and researchers can better test their development theories.
But I’d argue that the most significant and also most challenging part of the data revolution will come from the bottom up. While aid transparency can help with accountability to funders or even to partner governments, the really interesting area where improved accountability is needed is with respect to those whom the aid is intended to help.
One promising area is in soliciting feedback and ideas for development projects directly from the communities where they are implemented. Both new technologies (such as SMS or online surveys) and old technologies (public opinion polls, paper questionnaires, interviews) can be used to help collect information on both the preferences of project beneficiaries and their levels of satisfaction with the services they are being provided. This is helpful both as a means of improving programme design to make it more effective, and as a way of conferring the important right of giving the poor a voice (“nothing about us without us”). See “listening to the people we work for” for more on this idea.
But people don’t always tell you what they really want or really think. Sometimes you also have to observe them and see how they act, or even try to “walk a mile in their shoes” to better understand the lives they lead, the challenges they face, the choices they make and why they make them. Ethnographic studies have been part of the development world for a long time, and the notion of “human centred design” is also not new, but a data revolution can help expand the use of these techniques and make them easier to do and more cost-effective. Use of “big data” to observe behaviour patterns, such as use of mobile phones, transport or health services, can help us understand much more about how people are really making choices. Similarly, the use of remote sensing devices, hand-held cameras and recorders, and other tools can help scale up ethnographic research and participatory evaluation, including giving individuals and communities the tools and skills to “document themselves” and share their own stories.
But an even stronger step that is still in its early stages is to empower citizens in developing countries not only to express their views or share their lives, but to provide them with the tools and skills to take advantage of the data revolution themselves. At a simple level this can mean helping them gain access to, and the skills to make use of, the data they need to make individual decisions (such as choosing between schools or health centres, or making healthy nutrition choices). But a more ambitious goal would be to help them develop the skills they need to mobilize and advocate for their own interests, making use of the data that is out there (and is often about them), rather than relying on the goodwill and decisions of others with stronger technical skills and better financial resources to invest in using data.
Here the open data revolution is a good starting point to empower citizens, but in reality most citizens, especially those in developing countries, lack the capacity to make effective use of this data. Instead, open data may create a new “digital divide” between those who have the ability to collect and analyze the new data and those who don’t, with rich-world governments, academia and private enterprise being the main beneficiaries of these new data sources while those we are trying to help are left behind.
In the end, if we really want to realize the promise of the data revolution, and use it to bring about sustainable change, then we need to think from the bottom up rather than the top down. How can we develop the capacities of the communities we seek to serve, including the most disadvantaged, so that they can participate fully in the new data revolution and lead their own development rather than relying on the goodwill and analytical capacities of others?
A few months ago I was appointed “Learning Manager” for my office, responsible for leading an office learning plan and helping foster the creation of a culture of learning in the office as well as helping facilitate staff access to learning opportunities. This is not a full-time position, rather a set of additional responsibilities added on to my existing job.
The “Learning Manager” role is something that UNDP created for every office some years ago as a way of strengthening organizational learning in individual offices, and this, plus the considerable wealth of online courses available to staff via the intranet’s Learning Management System, reflects a strong commitment by UNDP to fostering learning.
At the same time, learning managers are often quite junior staff (I’m in the relatively rare position of being a learning manager and an actual manager too). Typically this role is lumped in with the Human Resources assistant position (or HR associate, as they are called in UNDP), which is also typically a locally recruited post. Last week I participated in an orientation/skills development course for new learning managers, which was a great opportunity to speak to other learning managers and find out how they do their job.
What I heard was both daunting and encouraging. Many new learning managers were struggling to get traction on learning in their offices due to resource constraints, mixed levels of support from managers, lack of focus on learning due to workload, and the challenge of getting things done without any formal authority and on top of their “regular” job (and, I might add, unrealistic expectations from the organization on what a learning manager can physically manage to do).
But I also encountered a highly motivated and resourceful group who were finding different ways to achieve results in challenging circumstances. The shared challenge that all were trying to address is how to maximize office learning with limited time and money and no formal authority. I’m sharing here some of the ways, both strategic and tactical, that learning managers are getting the job done.
1. One of the key challenges is making the case for learning with the head of office and management team. Different approaches used for this include making the case for learning as an investment in office productivity (appeal to logic), reminding them that it is part of the “rules”, is measured in the office scorecard, and can be compared with similar offices, e.g. in the same region (appeal to authority), or emphasizing the effect it will have on staff morale and goodwill in the office, as well as helping staff at a personal level to deal with changes in the organization (appeal to emotion).
2. Another challenge is balancing the roles of facilitator and enforcer. Learning managers are expected to ensure that all staff do their mandatory online trainings (ethics, gender, security etc.) and that the office has a learning plan and that individuals have learning goals in their performance appraisals – yet don’t have the authority to make people do this, especially those who are reluctant. Ultimately though the most fulfilling role for the learning manager, and probably the one that achieves the best learning results is to foster a learning culture by responding to people’s needs, interests and aspirations and acting as a facilitator and coach to help people learn rather than trying to force them to do the compulsory things they may not be enthusiastic to do.
3. At a tactical level, when budgets are tight it is often not cost-effective to send individuals on external training courses out of the country, and local opportunities may be limited. However, you can benefit from the extensive expertise and experience that is already in the office. Example approaches include: i) organizing a “skillshare” session where staff members share a skill they have (possibly from a previous job) with the rest of the office, either as a training course or as a coach; ii) having staff who attend external trainings or travel for work debrief the office on what they learned, as a routine event or requirement; iii) taking advantage of visitors from HQ or regional offices and asking them to carry out a training or briefing as part of their visit; iv) inviting speakers from local partners.
4. Pooling resources – e.g. sharing learning opportunities with other UN agencies or with government and NGOs. This could be by organizing joint trainings or by having a reciprocal arrangement to allow people from other organizations to join trainings organized by the office in exchange for being able to send people to their trainings, and routinely sharing information on learning events with one another.
5. Make use of online resources – this can involve using online courses developed by the organization or licensed through an external provider; for example, UNDP staff have access to a wealth of UNDP and externally developed courses through the Learning Management System. Other options include MOOCs (massive open online courses), webinars, and other online and remote learning opportunities.
6. Mentoring and coaching – setting up individual peer-to-peer learning exchanges within the office or between offices in the same region. This can be valuable as it provides ongoing support rather than just an episodic training. One more sophisticated way to do this is to use the self-assessment peer-assist methodology (from Collison and Parcell’s Learning to Fly) where offices (or individuals) self-assess their learning needs against a set of criteria and then are paired up according to needs and strengths.
7. Organizing regular learning events or learning days (e.g. once per month) where staff devote time to learning, to ensure learning is regular and recognized. Similar approaches include sending out weekly TED talks, articles, presentations or other short pieces of interest to stimulate learning without consuming much time.
8. Some offices regularly send staff on “detail assignments” or give them “stretch assignments”. These are short-term opportunities to take on a more challenging role, filling in for a temporary vacancy or for a colleague on extended or maternity leave, either within the same office or in another office. They may also be used in place of hiring external consultants for specific needs, e.g. preparations for a major UN event. They provide on-the-job learning that can be particularly helpful for national staff, who face the catch-22 of needing international experience in order to move to an international posting.
9. Find ways to reward learning, by publicly acknowledging those who have completed learning activities (such as having them receive an award from the head of office at a staff meeting) or those who have contributed by sharing their knowledge and skills. There was also some discussion of the pros and cons of “name and shame” for those who don’t complete mandatory trainings, although I’m not personally in favour of this.
10. Network of learning managers – perhaps the most powerful way of sharing good ideas, learning opportunities or even just to get moral support is through networking between learning managers in different offices. Having access to experience and advice from other offices is an excellent way to improve learning whether by sharing templates and examples, or helping share resources or by providing feedback on potential courses or trainers. Perhaps the most valuable support though is in sharing advice on how to get management support and how to motivate learners.
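The pairing step in the self-assessment peer-assist methodology mentioned in point 6 can be sketched in code. This is a minimal, hypothetical illustration only: the office names, topics, scores and thresholds are all made up, and Collison and Parcell’s actual methodology involves much more than the matching step.

```python
# Hypothetical sketch of self-assessment peer-assist pairing: each office
# rates itself 1-5 on a set of learning topics, and offices strong on a
# topic are matched with offices that rated themselves weak on it.

SCORES = {  # illustrative self-assessment data, not real offices
    "Office A": {"evaluation": 5, "partnerships": 2},
    "Office B": {"evaluation": 1, "partnerships": 4},
    "Office C": {"evaluation": 3, "partnerships": 1},
}

def pair_offices(scores, strong=4, weak=2):
    """Return (mentor, learner, topic) triples where the mentor scored
    at least `strong` and the learner at most `weak` on the same topic."""
    pairs = []
    topics = {t for office_scores in scores.values() for t in office_scores}
    for topic in topics:
        mentors = [o for o, s in scores.items() if s.get(topic, 0) >= strong]
        learners = [o for o, s in scores.items() if s.get(topic, 0) <= weak]
        for mentor in mentors:
            for learner in learners:
                if mentor != learner:
                    pairs.append((mentor, learner, topic))
    return sorted(pairs)

for mentor, learner, topic in pair_offices(SCORES):
    print(f"{mentor} can peer-assist {learner} on {topic}")
```

In practice the matching would also weigh region, language and workload, but the principle is the same: pair needs against strengths rather than sending everyone on the same generic course.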
In a previous blog post I mentioned the “UN Transformation Network”, an informal community of like-minded UN employees and consultants whose aim is to connect people and have them learn from and support one another in transformational change. A major activity of this network this year has been the Developing Transformative Leaders Course, which has been both designed and delivered through the network and in which I’m a participant and part of the organizing team.
We’ve had a lot of interesting sessions on leadership and innovation as part of the course, which has been running over the past six months, some of which I’ve reflected on in this blog. But here I wanted to share the outcome of an insightful and very practical knowledge-sharing exercise from the last session, where we looked at leading without authority. In the UN we’re often called upon to manage inter-agency task forces or cross-departmental groups that bring together quite different interests and are often voluntary in nature, and where the leader or coordinator has no formal authority over the group and relies on goodwill and skills of facilitation, engagement and negotiation to keep things moving.
Below is a write-up of some of the useful tips from the discussion, put together by Patrick McNamara, the lead facilitator of the course, and Sam Martell, Political Affairs Officer in the Department of Political Affairs at the United Nations and a fellow course participant. I hope you find them useful.
In the Transformative UN Leaders course last week, we explored what works when leading a multi-agency task force. The group came up with insights on delegating, influencing others and getting results when one might not have direct authority to demand results.
We had a rich discussion of possible solutions that included carrots and sticks ranging from “name and shame” to ensuring recognition for individual and team achievement: “you are a star – I will let others know.” We also explored the unique cultural aspects of leading in the UN context and how to create support to achieve success. Here is the case:
“You are leading a multi-agency task force with 20 colleagues from 11 agencies. They are, for the most part, there on a voluntary basis. You have a deliverable required in your key results that can only be accomplished by the task force collaborating. What strategies will you use to influence the task force members in order to achieve the objectives on time?”
Using a collaborative problem-solving technique (small-group and large-group dialogue) we came up with these possible solutions and thought you might find them useful.
(UN) CULTURAL CONSIDERATIONS
• In the UN, there are many soft controls and few hard controls, so it can be more effective to use “carrots” rather than “sticks” to motivate
• Understand the organizational culture and drivers of the work
• Think about what works on you and how it might apply to others
• Consider which lever (soft or hard) is right to use, and when
• Do not make it burdensome to participate/ pay attention to workload and share in the successes
• Not everyone will be fully committed or deliver the same level of contribution – just deal with it
• Make sure aspirations are in line with commitments and ability to commit (avoid the scenario where everyone shares bold ideas but no one is ready to take ownership to implement them)
• Understand the cultural background of individual group members.
• Ask participants: Why are you here? What do you want to contribute?
• Create a co-chair position
• Create a smaller core group to drive the deliverables
• Delegate, set targets, create peer pressure
• Hold each other accountable (name and shame, “I hold your office accountable”)
• Ask for a commitment
• Align responsibility with tasks and functions (and workload) to minimize burden
• Ensure accountability for responsibilities
- Transparency (who involved, who accountable)
- If tasks are not accomplished, someone else from the agency can be chosen
- Ensure there are consequences
• Individual relationships and commitment – then check-in publicly at next meeting to ensure each step is completed. Or, one-to-one in person tracking of progress
• Create an atmosphere where people want to be there
• Build trust relationships “you are a star” and let others know about people’s contributions
• Share recognition for team achievement
• Play to the strengths (and interests) of individuals in assigning tasks
• Publicly and repeatedly recognize / thank people
• Thank participants’ bosses and organizations
• Get them to do stuff they’re good at and care about
• Working in voluntary teams needs to be more collaborative so you need to listen/ respect their work and inputs/ ensure they are maintaining ownership more than in regular teams
• Search for like-minded people to create support
• Surround yourself with successful people
• Find an external actor to exert pressure on the group so they commit to the deliverable
• Create space for the project (and for it to move forward); a top-level champion can help
In the UN we are used to setting big, audacious goals to change the world, whether it be halving child mortality, eradicating extreme poverty or empowering the poor to have a say in how their government is run.
At the same time, by ourselves we have limited means to achieve these bold goals, so we rely a lot on our power to convene and persuade others to do what is needed. The problem is compounded by the fact that for some of these problems we have a fairly clear idea of who needs to do what and how, but for many of them, even when we have ideas and some evidence, there is no blueprint for success (for example, think about the now much-discussed idea of reducing inequality: there is a growing consensus around its importance, but we don’t even agree on how to measure it, let alone what approach we should take to achieving it).
But if you don’t have challenging goals then you have no sense of direction and no way of knowing if you are on the right track; and if your goals are too modest and simple, then you are probably not trying hard enough at what we are ultimately trying to do, which is make the world a better place.
Let’s drill down a bit into how we work in development agencies to try to make these goals a reality. Given the importance of the goals themselves and the amount of money and effort required to achieve them, there is an increasing focus on “managing for results”. This is both understandable and, for the most part, welcome. If we are to make the case for aid, then we need to be able to prove whether or not it works (or, more likely, when and where). And if we are to ensure that our projects are being managed well and that all partners are accountable for delivering their part of a complex puzzle, then we need systematic tools to monitor how we are doing, both to report on whether we are on track and spending money wisely, and to flag problems and make course corrections when needed.
There are also a number of critiques of the results focus and Results-Based Management, some of which I’ve aired before on this blog but there is a particular challenge I’ve seen time and again in aid work that isn’t a flaw in the approach itself, but rather in terms of how we apply it.
When we develop our results chains or log frames for a project we invariably end up with a workplan of discrete activities with budgets and responsibilities assigned to them. We usually have some type of monitoring framework with indicators and baselines to accompany it and this perhaps includes some specific research, evaluation or data collection tools to keep it up to date. If we’ve done a good job our plans will also identify some assumptions that we consider need to be met in order for the activities to deliver the outcomes we are expecting, or if we are getting fancy we might even have an articulated “theory of change” that more clearly explains the link between the activities and the desired outcomes.
So far so good. But then we get to execution.
In many, many projects I’ve seen, the focus of monitoring shifts quickly to implementation: have we carried out our activities as planned? Have we spent our budget? And, we hope, did the activities deliver the outputs we were expecting? But once we are deep in the day-to-day management (and monitoring) of execution, we tend to forget about the end goal. We start to care more about whether we delivered our training workshop and spent our budget than about whether we actually built capacity, or whether that capacity is performing the role we originally intended.
If we are then asked whether our project is successful, we can confidently assert that it is, because we were able to carry out all our activities, spend all our budget, and have something visible to show for it. But in doing so we often fail to cross-check our outputs against the desired outcomes and impact. And if there is a gap between the outcomes and where we expected to be, we often don’t focus enough on understanding why, in particular by checking whether our assumptions and theory of change were correct, or whether circumstances have changed so that what seemed right at the beginning no longer holds true.
Looking at why our well-executed activities didn’t lead to our desired outcomes is difficult, which is why we do it less than we should. In particular, it’s easy to hide behind the assumptions, particularly those of the type “This assumes that [external partner] will effectively carry out complementary activity [X] and provide additional financing [$Y]”. But rather than serving as a way to blame lack of success on others not doing their part, reducing uncertainties around external assumptions in the logframe should be considered a key success factor for a project and something to be regularly monitored. In reality the path to success is rarely linear: we can’t be sure our theory of change is correct or doesn’t need to be adapted to context, and we can’t be sure that circumstances won’t intervene that require us to change tack.
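One way to keep the end goal visible during execution is to make the outcome check part of routine monitoring, alongside the usual activity and budget checks. Here is a minimal, hypothetical sketch of that idea; the data structure, field names, indicator and threshold are all illustrative, not a real M&E system.

```python
# Hypothetical sketch of outcome-aware monitoring: in addition to checking
# whether activities were delivered, it flags the situation the text warns
# about - outputs on track while an outcome indicator drifts off target,
# which is the signal to revisit assumptions and the theory of change.

def monitoring_flags(project):
    flags = []
    outputs_ok = all(a["delivered"] for a in project["activities"])
    if not outputs_ok:
        flags.append("outputs behind schedule")
    for ind in project["outcome_indicators"]:
        off_track = ind["actual"] < ind["target"] * 0.8  # illustrative threshold
        if off_track and outputs_ok:
            flags.append(f"{ind['name']}: outputs delivered but outcome off track"
                         " - revisit assumptions / theory of change")
    return flags

project = {  # illustrative figures only
    "activities": [{"name": "training workshop", "delivered": True}],
    "outcome_indicators": [
        {"name": "institutional capacity score", "target": 100, "actual": 55},
    ],
}
print(monitoring_flags(project))
```

The point of the sketch is the distinction it encodes: “all activities delivered” and “outcome on track” are separate questions, and a monitoring routine that only asks the first will happily report success while the project drifts.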
A couple of practices from audit and evaluation that are intended to foster systematic learning and improvement inadvertently contribute to this. In both audits and increasingly in evaluations there is a requirement to develop and implement a “management response” which outlines how the project or office that is being reviewed will take action to implement the recommendations of the review. This seems eminently sensible as it holds managers accountable for ensuring they read, consider and act on findings from an external review. But the negative side of this (and I’m basing this on several experiences) is that the response is usually a list of actions to implement, and the measure of success is whether they are adequately implemented, not whether they actually solved the shortcomings that the audit/evaluation identified. In other words they fall into the precise trap that carrying out an external evaluation is designed to avoid.
So what to do about this? We need to find ways to shift our internal accountability mechanisms away from monitoring and rewarding the implementation of activities, the spending of resources, or even the delivery of outputs, and towards the contribution to outcomes and impact. To help achieve this we also need to focus more on developing and challenging our assumptions and theories of change, and on designing projects to minimize the external factors that put delivery of results at risk. Better still, we can build our programmes to be more adaptive to changes in external influences that we have little ability to control, something we can only do if we are not too tied to rewarding unthinking but efficient delivery of our existing workplans.
At a basic level what is called for is to keep a focus on the end goals we are trying to achieve, even when we are bogged down in the minutiae of delivery, or at least to keep raising our heads up above the fray to keep asking ourselves whether or not our execution still makes sense in the context of where we want to go and where we are right now.
I was meeting with a KM team from another UN agency a couple of days ago when the conversation turned to two interesting and related questions:
1. What is the relationship between knowledge management and monitoring and evaluation?
2. To what extent should the focus of knowledge management be about improving the use of academic or scientific knowledge in development work?
Many organizations, especially in the UN, are linking monitoring and evaluation with knowledge management, both in terms of content and in terms of organizational structure. This is both an opportunity and a challenge. On the one hand, information from monitoring and evaluation processes is a critical input to knowledge management processes, and similarly knowledge management tools and techniques can help support better monitoring and evaluation. At the same time there are subtle differences between these approaches (see my past blog post comparing KM and evaluation for more details). A major difference is that monitoring and evaluation has a greater focus on accountability: is the project on track, were the outputs delivered, was the money spent well, did the project have the desired impact? Knowledge management focuses more on learning and reflection, and on how to share what is learned with other projects. These are in fact very complementary, and they require some overlapping skills such as the ability to collect, analyze, summarize and interpret data. But knowledge management practitioners put a greater emphasis on “soft” skills such as understanding human psychology and group dynamics, networking, and both interpersonal and mass communication. Another challenge is that if the KM team is seen by staff across the organization as part of the “results” or performance monitoring team, then people may be less likely to trust them with their stories of failures and setbacks in case these are “used against them”; yet a large part of learning requires candid reflection on positive and negative experiences in a safe environment.
In practical terms, having M&E and KM sit and work together can be very effective in building a more complete picture of organizational knowledge and how to leverage it, but this only works if the KM people are allowed to function like KM people and their different role and skill set is respected, rather than being seen as part of an overall monitoring and accountability system. Similarly, KM people need to work closely with human resources, communications and technology teams across any organization, so the structure and working methods need to allow for that. (I hope to write a future blog post about the pluses and minuses of locating the KM function in different parts of an organizational structure, including communication, staff development, IT, the executive office, programmes, etc. Unsurprisingly, there is no “best” approach, as each has its advantages and disadvantages.)
Taking the second question – an increasing refrain heard in aid agency strategic plans, or government plans for that matter, is that they need to be more “evidence-based”. On one level this is a no-brainer: if you have knowledge about a problem, how it is caused and what strategies are effective in tackling it, then why wouldn’t you use it?
A more interesting question might be to look at why available evidence isn’t being used, or what, if any, limitations there are on what can be determined from research and applied to problems in the real world.
Research and the scientific method are very powerful, and woefully underused, in identifying what types of technical interventions work well in development and under what conditions. It makes sense to use experimental research techniques to determine which interventions are most effective, and also to tweak their design to improve their efficiency, both at a general level and at a local level to adapt them to context.
But in every project there is a certain amount of “knowledge” needed for successful implementation that isn’t easily measured through a scientific approach and can’t be applied in a standardized fashion across different contexts. (See an early blog post of mine, “The truth is out there, or maybe not”, for more discussion on the limits of applicable scientific knowledge.) The most important of these are politics, culture and personalities (i.e. the actual people involved in implementing the project or who are critical to its success). Dealing effectively with these issues requires a combination of local knowledge, experience, qualitative research and insights, peer assistance and advice, and flexible adaptation. This is where knowledge management techniques such as communities of practice, peer-assists, after-action reviews, and even lessons-learned databases and expert rosters come into their own. Similarly, innovation tools such as human-centred design and rapid prototyping can be put to use on the aspects of project design and implementation that don’t readily lend themselves to rigorous research, or for which standardized approaches can’t easily be designed.
Again, soft KM techniques, personal judgment and expertise, and hard science should not be seen as competing approaches but as complementary ones. The challenge is figuring out how to combine them effectively, especially which approach to use when, and what to do if they generate seemingly different conclusions. For me, this is an area we could do with thinking more about. Yes to doing more and better research on development approaches, and yes to putting the conclusions in the hands of decision makers and persuading them to use them in their decisions. But at the same time we also need to think more about how to tackle the soft side of the “science of delivery”: how do we adapt approaches to make them successful in a local context, taking account of politics and power as well as culture and social norms? How do we manage the people side of a project effectively? And how do we continually adapt our programmes to deal with the changing situation on the ground, while ensuring that we are constantly learning from each new experience and incorporating that learning into our future programmes? This is the “art-meets-science” of delivery, where we still have a lot to learn.
“Every great cause begins as a movement, becomes a business, and eventually degenerates into a racket.” – Eric Hoffer
If you are a die-hard fan of a band you are quite likely to say – yes, they are great, but their best work came before they were famous, before they “sold out”.
If you are an early adopter or promoter of a new idea (knowledge management, social media, cash transfers, mobile phones for development, innovation, etc.) then you probably feel the same once everyone has jumped on the bandwagon.
The thing is, most new things of potential merit, whether ideas, technologies, bands, fashions or political ideologies, go through a similar cycle:
Generation of something new. Promotion by a small dedicated following who really “get it” and like it because of its uniqueness, while everyone else is critical, sceptical or totally unaware of its existence. Growing buzz shared by a wider early-adopter group (what Gladwell would call the “mavens”). Adoption and promotion of the idea to a more mainstream audience that uses its originality as a marketing tool but also smooths off the edges. Co-option of the idea by the mainstream, often dumbing it down and diluting it to make it more acceptable and to ensure that a profit can be made from it. Low-quality clones are created. Finally, it either becomes so mainstream it is no longer noticed, or it jumps the shark and disappears from view.
So what happens when something moves from being an emerging idea to an everyday occurrence? One aspect is that the first movers and early adopters all complain about how dumbed down and commercialized their original idea has been, and how it isn’t any good any more, or how the late adopters don’t really understand it and are just following like sheep. Like a club that isn’t cool any more now that everyone goes there.
But here’s the thing. Unlike with a favourite hidden restaurant or eclectic band, if you do have a really great idea – for technology, for development, for the promotion of human rights or for participation – then your goal should be for word to spread as widely as possible, not just to the cool kids. And if you want your idea to spread, then you also need to be prepared for it to be adapted and owned by others, for it to be "dumbed down" so it can be more readily accepted, and for it to be commercialized so that it can be financially sustainable. Your idea is no longer your idea – it is no longer pure – and it’s probably "less good" than it was – but the difference is that it has now become accepted and widely used.
A recent article in Salon that has been doing the rounds, about the culture of TED talks and the oversimplification and marketing of "creativity", is a fine example. While it’s true that the notion of creativity has been oversimplified, packaged and sold, it seems strange to me to be dismissive of the fact that recognition of the importance of creativity has never been stronger, even if the content has been watered down to make it accessible to a wider audience.
Another related critique is that ideas that are initially revolutionary become appropriated by the existing hierarchy and thus become tools of the status quo rather than tools for change (or as Billy Bragg famously put it, "The revolution is just a t-shirt away"). This is no doubt true, but at the same time, in taking on parts of a revolutionary idea, the status quo and balance of power also subtly change – the shift may be more evolutionary than revolutionary, but it is real nevertheless. And often simple ideas and technologies have quite revolutionary impacts, though not necessarily the expected ones, nor do they occur in a short or predictable timeframe.
So what can you do, as a change activist, when you see your great idea taken up and messed up by others?
Maybe you should let it go, and accept that for an idea to be successful it will need to be taken up and adapted by others, for better and for worse (and what is worse for you might be better for someone else), recognizing that adoption by others is in fact one of the best measures of the quality of an idea. Maybe you can help adapt the idea – and yes, even dumb it down or commercialize it yourself – to help it spread (as well as possibly to make a living out of it). That way you can also do your best to ensure that the parts of the idea most critical to you are preserved.
You can keep pushing forward to refine and develop your idea so it stays ahead of the curve – so it remains revolutionary or leading edge while everyone else is moving to where you were 5 years ago. Just remember, though, that in 5 years’ time you want them still to be following behind you.
Or, if you need to, you can do something else entirely: create or promote a different idea and help develop it and make it more practical and popular.
In conclusion: Don’t be upset if everyone starts using and adapting your idea, and if they figure out how to make a buck out of it. But that doesn’t mean you have to give up striving for your ideas. It’s good to keep looking for the revolutionary idea that might change the world, but to change the world you might have to be prepared to give up ownership of your idea as well.
I’m currently participating in a "UN Transformational Leadership course" which I’ve mentioned in past blogs. One of the interesting self-discoveries I’ve made from this course is that sometimes we are the ones who create the barriers to change or to pursuing our big goals.
Through some introspection I realized that one of the biggest challenges I face is that I find it hard to say no when people ask me to help them. And the more I help people, the more people ask me to help, and the less focused I am on pursuing my own goals.
This desire to help others isn’t entirely altruistic. Like most people I want to feel valued, and like many I want recognition for what I do and for my expertise. I’m probably also a bit of a procrastinator. This has translated into me looking to help and advise others, which has then translated into more and more external asks. Having a public blog and social media accounts, and working in the field of knowledge management, also means that I get a lot of requests for help.
So what would be a good, healthy way of managing this, without stopping being helpful at all? Here is a little framework I’m prototyping for myself. This time I’m asking for YOUR help: give me feedback and share your tips for dealing with "too many questions".
To start, I realized that my time is broadly divided into three major blocks: 1. work, 2. family time (as a husband and father of three), and 3. sleep. Those are all pretty important, and anything that doesn’t fit within those categories takes valuable time away from them, so I need a few good measures for how to think about requests for my time. If I were a consultant, or worked for an employer that allowed outside activities, then an important category might be "do I get paid?" – but I can’t be remunerated for what I do beyond my salary, and that doesn’t depend much on how helpful I am.
So here are the questions I plan to use to screen requests for help:
1. Is it directly related to my job? Is it explicitly part of my workplan or my job description, or is it at least related to the broad purpose of my job? If not, does it at least contribute to the priorities and mission of the UN?
2. Do I know you? How do I know you – are we friends, colleagues or past collaborators? Is our current or future professional relationship likely to be mutually beneficial (even if not equal)? Have you helped me in the past?
3. Is your question interesting? Are you asking me something I’m curious about myself, or something that I’m passionate about either professionally or personally?
4. Is it something that I can easily answer? Is it related to my expertise? Easier questions are more likely to get answered than complex, involved ones, but at the same time, very broad generic questions ("tell me everything about KM") are less likely to be answered. It’s also not good form to ask a question about something you could easily have researched yourself.
5. Will my input be useful and used, and will I get any feedback on it? I can’t tell you the number of times I’ve been asked for inputs, never to hear what happened to them, or even what happened to the project they were requested for.
6. I hate to finish with this but — Is there something in it for me? Not money since I can’t take it. But will it help me develop my skills and knowledge? Will it help me in my current work? Will it help me make new connections? Will it help me develop my career or find my next job? Or at least do I get more than a thank you e-mail?
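The six questions above can even be sketched as a simple scoring filter. This is just a toy illustration of the idea: the criterion names, weights and threshold are all my own invented assumptions, not anything I actually run.

```python
# A toy scoring filter for incoming requests, mirroring the six
# screening questions above. All weights and the threshold are
# illustrative assumptions, not a real system.

CRITERIA = {
    "related_to_job": 3,   # 1. part of my workplan, or at least the UN's mission
    "know_the_asker": 2,   # 2. existing relationship, or you've helped me before
    "interesting": 2,      # 3. something I'm curious or passionate about
    "easy_to_answer": 1,   # 4. within my expertise and reasonably scoped
    "will_be_used": 2,     # 5. the input will actually be used, with feedback
    "something_in_it": 1,  # 6. skills, connections, or career benefit for me
}

def score_request(answers):
    """Sum the weights of the criteria a request satisfies.

    `answers` maps a criterion name to True/False.
    """
    return sum(weight for name, weight in CRITERIA.items() if answers.get(name))

def should_take_on(answers, threshold=6):
    """Accept a request only if it clears the (arbitrary) threshold."""
    return score_request(answers) >= threshold

# Example: a job-related question from a colleague whose project
# will actually use the answer scores 3 + 2 + 2 = 7, so it gets taken on.
request = {"related_to_job": True, "know_the_asker": True, "will_be_used": True}
print(should_take_on(request))  # True
```

In practice, of course, the judgment is fuzzier than booleans and weights, but writing it down this way forces me to be explicit about which criteria matter most.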
I’m going to try to run future queries through this filter and only take on those that score well on the above criteria. I’ll let you know how it goes. If I don’t respond to your e-mail, don’t be offended – I want to help, I really do, but I also have to get stuff done.
P.S. Here is a sample of the types of questions I regularly get:
Can you help me get a job? Can you review my CV? Can you comment on my KM strategy? Can you comment on my publication? Can you tell me who in the UN (or elsewhere) is working on X? Can you tell me what software tool I should use for Y? Can you answer my (10-page) questionnaire? Can you tell me how to do Z knowledge-management-related task? Could you give me feedback/input on my project? Can you tell me where to find all the research on A? Could you help publicize my publication/project? Can you speak at our conference (and pay for your own travel)? [Addendum since I wrote this blog: Can you meet me to check out/help me market my new must-have KM-related software?]
In a future blog I’ll give my generic answers to some of these questions, so you can read those before asking me.