KM on a dollar a day

Musing on knowledge management, aid and development with limited resources

Archive for October 2011

The #failure of development debate


This is a late supplementary entry in J’s Second Aid Blog Forum on “Admitting Aid Failure” (my first entry was about some of my own failures).

In the past I’ve blogged about the development hype lifecycle, in which a new idea slowly starts to generate interest, then becomes hugely popular and oversold as the solution to everything, then rapidly disappoints and becomes discredited, and is only truly appreciated for what it has to offer (and not more) long after the initial interest has waned.

I also wrote in “Black and White” about how views on development issues often become polarized into for and against positions on a particular issue (the recent discussion of the Millennium Villages Project, anyone?).

I consider both of these to be important failures in how aid is discussed, priorities are set, alliances are built and funding is allocated. In both cases, the volume and nature of the attention are not in line with the merits or demerits of the approach or idea being discussed. Unfortunately these seem to typify not only a lot of public aid communication but also aid debate among aid workers and development experts.

I also think we’ve witnessed a bit of both of these in this week’s discussions on failure. David Week’s detailed critique of the current interest in failure (fad surfing in the development boardroom) dismisses it both as a fad and as simply wrong, in one fell swoop.

In the end I think we need to see discussion of failure (or any other hot aid topic, whether it be MVPs, microcredit, cookstoves, RCTs or whatever) not as an unequivocal good, nor as a waste of time, nor as an aid fad – but as a potentially useful approach to improving discussion and learning around aid. It is not a single approach that is preferable to any other, nor something that stands alone, but rather an interesting approach that can help us examine our work differently and that can usefully be combined with, or complemented by, other approaches. This doesn’t mean there is no useful role for critique – but we might progress further if promotion and critique of ideas were more nuanced and less polarized into for and against, and if we admitted that neither “side” is fully right, and that the realizable potential of something is not settled by a heated intellectual deconstruction of the idea in theory.

In terms of the week’s discussion on failure – I see it neither as a magic bullet that will transform organizational learning and therefore aid, nor as a purely superficial fad with no value. Nor do we know yet whether the current efforts will be sustainable and have an impact. Documentation and analysis of failure is an input to an organizational learning process, not the whole thing.

I see discussing failure not as an end in itself, but as a push or step towards a more rigorous approach to learning and quality in aid work. In many development organizations there is insufficient interest in and incentive for quality improvement and learning from mistakes, since the incentives lead people to paint as positive a picture as possible. While various improvement techniques exist (such as Kaizen, Six Sigma and other tools described in David’s post), it’s very difficult to introduce them into the aid world when the culture and incentives do not provide the fertile ground needed for them to work.

To me it’s not a grave error that some of the failure reports produced so far concentrate mostly on peripheral or design issues rather than fundamental failures, if the alternative is not to acknowledge failures at all. Starting with the simpler, perhaps less politically challenging issues can be a good way to open up a dialogue and make a small change in the culture which creates the space for a more fundamental appraisal later. It might also provoke an interest in some more rigorous methods. Eventually.

I for one am interested to see where these experiments in learning from failure will lead us. I don’t think we know yet, so let’s keep an open mind.

Written by Ian Thorpe

October 27, 2011 at 9:20 am

Posted in rants

Preoccupied


(Warning: this is an off-topic blog post about something on which I have no real expertise, only opinion)

I’m fascinated by the rapid spread of, and widespread support for, the #Occupywallstreet and similar protests worldwide. I’m fascinated because widespread inequality and unfairness are nothing new – so why is this movement taking hold now? I don’t know the answer, but I have a theory:

Inequality has existed throughout human history, waxing and waning over time, punctuated at times by revolution, but relentlessly persistent. Even when societies have been remade to be more equal, an elite inevitably emerges that is more privileged financially and more influential politically.

Inequality is not only natural but is probably central to human progress. The possibility of an improved life, greater safety and prosperity is a great motivator of human progress, but it also rewards those who are more talented, hard-working, lucky or already privileged. Technological and social advances have come mainly out of intellectual elites financed by capital from the most wealthy. Most people deep down accept that there are those who have power, influence and material wealth and those who don’t – or at least they are resigned to it enough not to actively seek change.

This acceptance of inequality is predicated on three things:

1. Success should be rewarded as an incentive to encourage competition and excellence. Those who earn more in general are also thought to deserve more.

2. If we work hard enough and smart enough, we can also join the elite, or at least our children will be able to.

3. Elites “take care of us”, i.e. the net impact of the actions of elites is generally positive, even if they disproportionately benefit from it. I’m thinking here of things like job creation, economic growth, technological advancement, political stability, institutions – all things in which the elite play a significant role.

All of these have been challenged in recent times. The level of inequality, in particular the amount paid to CEOs in comparison with average workers, has massively increased, and the excesses of the wealthy have become more transparent and visible to ordinary mortals – due to better data, but also to media reporting (and possibly reality shows that peer into the lives of the wealthy and celebrate excess). Through this we have become aware that the levels of inequality are not commensurate with the differences in talent and contribution, and also that excessive wealth is – well – excessive.

In reality social mobility was always harder than commonly believed, despite the visibility of a few exceptional individuals. But given the increased cost of education, healthcare etc. in the industrialized world, and social systems that help perpetuate inequality, mobility is becoming harder still, and people’s belief that they can succeed through talent and hard work is declining. This is particularly acute in the US, where the myth of “the land of opportunity” is strongest and where the “American Dream” sustains greater tolerance for inequality. If people stop believing that they and their children can become progressively better off – well, the results could be revolutionary.

Confidence in the wisdom, good intentions and competence of the elite has also come into question. Elites have helped bring about the financial crisis and environmental destruction, and have failed to address global political insecurity and terrorism, making the rest of us feel less secure and less confident in the future, while still doing (mostly) quite nicely themselves.

This puts the delicate balance of what is acceptable inequality to the test – and people are feeling frustrated that the system isn’t fair, isn’t delivering for them, and that they have little opportunity to change things short of protesting. However, agreeing that change is needed is quite a lot easier than agreeing what that change should look like and what can be done to bring it about, including how to persuade or force the elites to go along with it. It will be interesting to see how the protests progress and whether they can coalesce around some common and actionable goals (and IMHO, while these will need to redress some of the current imbalances, realistically speaking they will not undo inequality altogether, but rather make it more tolerable).

These discussions may well seem a bit of a luxury to those in some developing countries where similar levels of inequality and lack of opportunity are commonplace and long-standing, and where leadership has not brought the kinds of benefits seen in industrialized countries. But there will be lessons to be shared from the success or failure of #occupywallstreet, as well as from successful and unsuccessful attempts to address these same problems in the developing world.

Written by Ian Thorpe

October 19, 2011 at 12:06 pm

Posted in rants

KM triple fail. No Faire!


[This post is a contribution to “The 2nd Aid Blog Forum: Admitting Aid Failure?” being curated by J of Tales from the Hood – although as it happens I was already planning to write this anyway. It’s more personal than my previous post on failure which is here.]

Last week the World Bank hosted the second Washington DC Failfaire: “A celebration of failure as a mark of innovation and risk-taking”.

I attended the first ever Failfaire in New York, organized and hosted by MobileActive.org, which featured failures in ICT4Development projects. I was so intrigued by this idea that at the time I wrote this blog post and committed myself to trying to organize a failfaire in UNICEF.

Of course that never happened… So, inspired by reading about last Thursday’s event on Linda Raftree’s blog and on Slate, I’ve decided to share three of my own failures from my knowledge management work with UNICEF.

1. Making community sites overcomplicated in Drupal

Some years ago we first decided to set up a social networking platform to support our budding work on communities of practice. We got a very small sum of money to support this, and we approached our IT department to get their help. At that stage they were not ready to help us develop a web 2.0 platform, so we instead decided to get recommendations from some “young people” who were doing cool things with technology on what we should do. We ended up developing a site with what at the time was a relatively new tool – Drupal – which in the end never saw the light of day. A few of our mistakes:

  • We did not understand the tool, how it worked or what it could do, which put us at the mercy of our consultants to advise us.
  • We didn’t have a clear idea of what we wanted on our community site (so couldn’t easily communicate this to the consultants or our pilot users).
  • We worked with a consultant who wanted to do cool stuff and show what they could do to push the boundaries of Drupal. What we really needed was a simple site that would be easy to learn and use for people unfamiliar with web technology and on low bandwidth, many of whom were not of the web 2.0 generation. This meant we didn’t get the type of simple, no-frills platform we really needed.
  • We didn’t focus enough on maintenance, reliability and technical support for the site. This is critical when trying to get people to use something unfamiliar.
  • The tool didn’t integrate with other things people were used to using such as e-mail or the intranet and didn’t look like it was a “UNICEF” product. We thought this would make it easier for people – in fact it made it harder.
So in the end we abandoned this platform – but we did learn some important lessons, which we applied when we developed our in-house platform through our IT department as an integrated part of the intranet.

 

2. The end of end of assignment reporting

When key staff leave the organization, or even move on to another assignment, there is always a risk of losing key institutional memory. Although people often do some sort of handover note, there was no formal guideline or practice for this across the organization. Some rather labour-intensive and relatively costly approaches have been used on a selective basis, such as debriefing retreats, where key staff would go off somewhere for a week to write up their lessons before retiring, or interviews, where a key staff member would be interviewed and the conversation transcribed and written up.

We wanted a less labour-intensive methodology that could be applied more widely without extra resources. The UN Department of Peacekeeping Operations (DPKO) has such a system, and it is well established – so we decided to adopt and adapt the DPKO system. Unfortunately, although the tools were developed and piloted, the system never took root.

One reason is that DPKO culture is very different from UNICEF culture – people are used to producing procedural notes such as handover reports, after action reports etc., possibly because such approaches are widely used by military organizations. People understand the need for detailed procedures and reporting, and management supports it and demands it from the highest level.

In UNICEF, end of assignment reporting was done as a pilot project, with management approval in those offices where it took place, but without “enforcement” or top-level pressure on departing staff to comply. This meant a massive amount of time was spent following up with people who promised to produce reports but never delivered them, with no real management sanction or peer pressure to make people complete the reports. Given the other challenges people face when they move or leave, this was understandably low down on their priorities. Another challenge was that there was no real commitment to widely circulate or follow up on the recommendations in the reports – which meant that the incentive to complete them was limited.

The main lesson I took from this is that something like End of Assignment Reporting needs to be top down rather than bottom up, i.e. management needs to decide that it is something they want to inform their decision-making and insist on it taking place. Otherwise, while completing such reports may be seen by everyone as valuable, there is little incentive if there is neither a sense of it being mandatory nor any sign that it will be made use of.

3. Where is the UNICEF Failfaire?

Despite my initial excitement about the idea of the Failfaire, and the possibility of replicating it in UNICEF, I was never able to make it happen. Here are some thoughts as to why.

  • ICT4Development as a sector is more experimental and less conservative than other, longer-established areas of development work. People are more willing to do new things (perhaps because comparatively less is known about what works), more tolerant of failure, more open to admitting their mistakes, and willing to learn and move on more quickly (not to say it’s easy – just that the people who work in this area are more apt to do this). Of course, I tried to propose it for other areas of development!
  • Finding funding is always tricky – we saw an opportunity, but ended up pitching the idea to a donor that is probably among the most conservative and was hard to convince (although they did not dismiss the idea entirely).
  • We tried to formalize it into a full or multi-day event with official high-level sanction within the organization (because that’s how we usually do things), yet even in my first blog post I’d surmised that the event’s informal nature was one of the critical factors in its success. Our management was (probably rightly) cautious about trying out something totally new and potentially risky without seeing how it worked on a smaller scale first, especially when this type of thing isn’t (yet) part of our organizational culture.

But all is not lost. Erica Kochi and Chris Fabian from the UNICEF innovation team presented at the first Failfaire and have been brave and good natured enough to keep the torch burning on this one. Hopefully they will make it happen in UNICEF some time soon.

And before I left, the first documented case study of a failure was published widely on the intranet, kindly offered up by one of our country representatives. This created a bit of buzz and people have started talking about how we should do more to learn from our failures. A big lesson here is that cultural change can take a long time, longer than you think – but that doesn’t mean it’s not happening – you sometimes need to be patient.

Written by Ian Thorpe

October 17, 2011 at 1:15 pm

Do our projects need really simple reporting?


I was lucky enough yesterday to attend a presentation at UNDP by Peter van der Linde (@petervdl) and Frodo van Ostveen (@frodo1977) of Akvo.org.

They were explaining their “Really Simple Reporting” tool/platform for grassroots, real-time reporting on aid projects to an audience from UNDP’s Knowledge Management and Web communication teams, to which I managed to get myself an invite (yes, you can see the side of my head in this picture!).

Akvo have developed an online platform that allows local partners at project sites to directly share updates on their projects, which are then aggregated and can be visualized on a common website, and which can be linked with financial data.

They talked in more detail about a pilot project they are running with the Netherlands Government, using their platform to collect information on projects it is funding in the water sector. Here the government funds water projects with a number of Dutch/international NGOs, who in turn fund projects with a number of local partners in a wide range of locations. The platform is a way of collecting together reporting from across all of these – to show what is happening on the ground. This complements the financial and project information which the Dutch Government now makes public as a signatory of the International Aid Transparency Initiative (IATI), and which is very valuable, but also somewhat dry.

Some of the characteristics of the Akvo system are:

  • Reporting is spontaneous, done by project workers across all of the work rather than by communication professionals cherry-picking select examples.
  • Updating can be done simply using a web interface, by e-mail or by cellphone, and can include images and other media as well as text.
  • The reporting can be aggregated, visualized, linked to funding data, and geotagged.
  • The platform is open source, so code can be freely shared. Akvo finance their work on the site by operating it as a service and providing support and training. (A rough sketch of what this kind of update record might look like follows below.)
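To make the idea more concrete, here is a minimal sketch of what such an update record and its aggregation might look like. This is purely illustrative – it is not Akvo’s actual schema or code, and all names and fields are hypothetical:

```python
from collections import defaultdict
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

# Hypothetical "really simple reporting" record. Akvo RSR's real data
# model wasn't shown in the presentation, so every field here is a guess
# at the kind of information described above.
@dataclass
class ProjectUpdate:
    project_id: str
    author: str                  # a local project worker, not a comms professional
    text: str                    # the "authentic" story from the field
    media_urls: List[str] = field(default_factory=list)  # photos, audio, video
    lat: Optional[float] = None  # optional geotag
    lon: Optional[float] = None
    posted_at: datetime = field(default_factory=datetime.utcnow)

def aggregate_by_project(updates, funding_by_project):
    """Group updates per project and attach funding data, mimicking the
    'aggregated, visualized and linked to funding' idea described above."""
    grouped = defaultdict(list)
    for update in updates:
        grouped[update.project_id].append(update)
    return {
        pid: {
            "funding_eur": funding_by_project.get(pid, 0.0),
            "updates": sorted(ups, key=lambda u: u.posted_at),
        }
        for pid, ups in grouped.items()
    }
```

The point of the sketch is how little structure is needed: a free-text story, optional media and a geotag are enough to aggregate grassroots reporting and link it to the money.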

One of the nice features of this approach is that there is a constant stream of updates coming from the projects without a very heavy bureaucratic reporting structure, which makes it easier to get timely updates as well as reducing some of the burden on local partners of producing complex reports. The updates provided are also in the form of “authentic” stories that demonstrate real progress and challenges, which makes them more interesting and readable (and potentially more appealing to donors, especially taxpayers or individual contributors). The fact that project sites provide updates directly onto the site, without being edited or filtered through headquarters, also adds an element of transparency – especially when linked to transparent financial reporting.

During the pilot phase the Dutch Government are still requesting traditional reports, however – so for the moment it is still an additional burden – but the longer-term aim is for the platform to replace periodic project reports, thus saving the often quite extensive burden of writing and analyzing these. It’s also worth stressing that this system can’t and doesn’t replace formal monitoring and evaluation mechanisms.

Beyond transparency and grassroots reporting, there are other potential benefits one could imagine. For example, if multiple donors and intermediary NGOs were to adopt the same platform, it would reduce the burden on local partners of having to complete reports in multiple different formats for each funder. Another local (and global) benefit would be the ability for project sites to learn from each other by seeing how other sites were progressing, learning from any innovations they were developing and from how they were overcoming shared challenges. Similarly, at a central level it could be a good way to identify and scale up new approaches or to help flag common problems.

The presentation and the Q&A got me thinking about how such a platform might be used within the UN. On the one hand it could be a good way to better demonstrate real and tangible progress from the UN’s work in a human way, by showing impacts on real lives.

At the same time, while some of the UN’s work does go through NGO-to-local-partner channels, a lot of the UN’s resources also go into technical assistance, policy advice and advocacy, which can have a large impact but whose direct human impact is less easily captured and linked to the funding provided. Some thought would be needed on how to adapt this approach to easily capture these impacts in a way that still tells a story.

I could also imagine taking this system a step further in order to collect feedback from beneficiaries (or at least from country counterparts such as the national and local governments that the UN is helping). It could be used to get them to talk about how UN assistance has helped them, in their own words.

I’d say there is certainly strong potential for carrying out a pilot to see how this could work within the context of UN development assistance. I’m sure there would be many practical and political challenges, but it could be a great way of opening up the “black box” of how UN assistance works and what its impact is on the ground.

(Photo “borrowed” from Frodo van Ostveen – see original on his blog post about this presentation and other meetings they had in New York)

Written by Ian Thorpe

October 13, 2011 at 11:43 am

Posted in Uncategorized

Innovations and Lessons Learned on Social Policy work – Volume 2


Here’s a quick blog post to push one of the last things my team was working on before I left UNICEF, which was just released last week (congratulations to the team!).

They’ve just released a second “compendium” of innovations and lessons learned, featuring 18 of the more interesting case studies reported by UNICEF country offices on their experiences in social and economic policy work. See the compendium, together with other recent publications on innovations and lessons learned, on UNICEF’s website here.

These innovations and lessons learned are particularly interesting since they show how investing comparatively small amounts of money in analytic policy work, combined with strategic, targeted advocacy, can yield large benefits for children, especially the most marginalized.

They feature what is known as “upstream work”, where the aim is to provide strategic technical inputs together with careful advocacy with government, parliamentarians, media and other groups to help influence government, and sometimes donor, policy and spending. Here, small inputs can have large impacts, influencing the actions of those who have the most resources and ability to improve the lives of children, but who did not have the knowledge, capacity or priority to do so.

Note: as regular readers of this blog will know, I’m not a fan of best practices. Rather, we prefer to document and share real-life practices and experiences that can be used to help inform or inspire other programmes. This publication is an example of some of the experiences the team has documented. And it’s worth noting that while the team researches, edits, questions, reviews etc. from headquarters, all these experiences are initially proposed and written up by country offices based on their reflections and experience, so congratulations and thanks to them for taking the effort to share their work.

Written by Ian Thorpe

October 10, 2011 at 9:48 am

Posted in Uncategorized

Open data – experience needed


This is kind of a follow-up to my previous blog post “Confessions of a recovering statistician”, with some thoughts about the ongoing move towards open data.

The open data movement is proceeding apace, with more and more development data being made available publicly and better tools to manipulate and visualize it. The World Bank now makes all its development data available for free and allows datasets to be easily accessed through its API. More and more development agencies are now joining IATI and making their aid spending – and soon their project documents – available in a standard format. The Center for Global Development now publish the datasets and methods used for all papers they produce so that the results can be independently verified. Lots of NGOs, social enterprises and groups are crowdsourcing data directly from communities and individuals and making it publicly available, whether it be Ushahidi deployments or community mapping. Soon we will have volumes and types of data not previously seen, available for anyone to use and analyze.
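As a small illustration of how low the barrier to access has become, here is a minimal Python sketch that pulls a dataset from the World Bank’s public data API (the endpoint shape and indicator code shown are one example from the Bank’s current public API, and may of course change):

```python
import json
import urllib.request

# Fetch infant mortality (indicator SP.DYN.IMRT.IN) for Brazil, 2000-2010,
# from the World Bank's public data API. The response is a two-element
# JSON array: [page metadata, list of data points].
URL = ("https://api.worldbank.org/v2/country/BR/indicator/"
       "SP.DYN.IMRT.IN?format=json&date=2000:2010&per_page=100")

with urllib.request.urlopen(URL) as response:
    metadata, rows = json.load(response)

for row in rows:
    if row["value"] is not None:
        print(row["date"], row["value"])  # deaths per 1,000 live births
```

That getting the raw numbers now takes a dozen lines is exactly what makes the analytical skills discussed below the scarce resource.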

This is a very good thing.

 

But an interesting aspect of this is that while you might be tempted to conclude that data is now a more valuable resource than individual knowledge, I think it is actually the reverse.

Although anyone can download a dataset, manipulate it and create visualizations from it, not everyone has the skills to do it properly. Analyzing data, making sense of it and knowing how to use it to inform decision-making is a specialized skill – and not one that everyone masters. As I mentioned in my previous blog post, some kinds of analysis require modelling and other techniques which, while they can be automated, need to be properly understood to be used properly. Similarly, knowledge of the data sources, their reliability and the context is important to be able to interpret the data correctly.

As more data becomes available, this specialized skill will be in increasing demand, and the work of those individuals and organizations who can do this will be at a premium. At present there are just not enough people with these skills around, and it will take time for people to be trained in them.

Similarly, as data becomes more readily available, this also puts a premium on the types of knowledge that cannot easily be boiled down into data points – areas such as experience, social networks and interpersonal skills (or, as the saying often attributed to Einstein goes, “Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted.”). Even interpreting data and turning it into politically feasible policy recommendations requires not only technical knowledge but also experience and judgement.

In a way, the benefit of open data is that it frees up the time and effort spent just trying to collect or get access to data, and allows us to spend more time analyzing, interpreting, thinking and ultimately doing – and those people and organizations who are better equipped to do these tasks will be the ones that prosper.

One potential negative side effect of opening up data to all is that there will be a boom in poorly done, misleading secondary analyses and attractive but inaccurate data visualizations, and conclusions will be drawn and decisions taken based on faulty analysis. On the other hand, these analyses can be reproduced, checked and corrected or countered by others. In the shorter term, instead of getting no data and informed analysis on a topic, we might instead get multiple analyses with different conclusions competing with each other. But the benefits of this debate and these self-correction mechanisms mean that in the long run those who analyze data will also be more accountable for what they do, and the reputations of analysts will be built in ways which help the better analyses rise above the poorer ones.

And it’s important to remember that even experts make mistakes – but these can now be corrected by “the crowd”. And in this case the crowd isn’t the general public, but rather those who have the required experience and technical skills but who are not sitting in the organization which collected or produced the data in the first place. This way, more expert eyes on a dataset can both produce new analyses and validate those that have already been produced.

Written by Ian Thorpe

October 5, 2011 at 1:14 pm

Posted in Uncategorized

Confessions of a recovering statistician


Since I’ve just started my new job, I’m not ready yet to blog about the KM aspects of it, so instead I’m writing about what I’ve learned from my past experience.

I have a confession to make. I am, by academic training at least, a statistician. That said, it’s a very long time since I worked as one. But even though I don’t work in the area of statistics any more, I gained a few useful perspectives from my academic training and my early career in the UK government statistical service which I wanted to share. Here are a few thoughts:

  1. Data and the ability to analyze and understand it are very powerful. They give you the possibility of gaining amazing insight into the world, and sometimes into the world’s problems, how they work and occasionally how to address them.
  2. Despite this, numeracy is often underrated in many educational systems (it certainly was in the UK when I was growing up), even among the political elite. Too many opinion leaders and policy makers shamelessly admit that they don’t understand official statistics or graphs and charts – yet the same people would be much less willing to admit that they didn’t really master reading or writing.
  3. Basic statistical literacy is not actually that hard. It’s not difficult to learn how to understand fractions and ratios, how to read data tables and understand graphs and charts, or how to use them effectively (and honestly) in communicating statistical data – if more effort were placed on teaching these skills and they were more valued. And just this basic understanding could help avoid many incorrect interpretations of data and the faulty decisions which emanate from them.
  4. BUT – some aspects of statistics ARE highly specialized and require experts to do them. Examples include designing sampling schemes for surveys, developing experimental designs that allow you to test hypotheses, and econometric modelling. Even then, it’s not uncommon to see errors and disagreements either in the design or in the interpretation of the results – so it’s good to use experts for expert work and to have some mechanism to peer-review the work (even better if you can publish both your methods and datasets so that anyone who has the right skill set can check them). See this notable illustration of the need to check your methods, on the cost-effectiveness of deworming.
  5. Users of statistics, such as politicians and journalists, often forget that statistics are usually an estimation of the situation in the real world – not the literal truth – whether due to the statistic being based on a sample or being measured indirectly or incompletely. This means it is our best guess of the real situation – but not reality itself, and as such it is subject to error and only as good as the approach taken and the quality of the data used. Ideally any estimate should be accompanied by a standard error that gives you an idea of how accurate that estimation really is – but this is rarely given or used (see the small worked example after this list).
  6. Some things are even worse – they are based on models. Maternal mortality figures are an example. Bill Easterly recently commented on the use of “inception statistics” – a model within a model within a model – when looking at stillbirths. Often this might be the only way to estimate something, but we need to be wary in interpreting and explaining the results, and aware of the implications of the (sometimes heroic) assumptions made and the sensitivity of the indicators to them.
  7. You treasure what you measure – we often seek to identify measurable, quantifiable indicators to help monitor progress, whether for development goals or as process indicators for our projects. But it’s important to remember that when we put these numbers into our frameworks and set up means of collecting them, we risk focusing on improving the numbers themselves rather than on the underlying issues we are seeking to address. This becomes all the more the case when rewards, personal or institutional, are based on hitting the numbers.
  8. Understanding statistical data requires not only statistical expertise but also contextual knowledge to interpret it. It’s often tempting to start going beyond what the numbers themselves say to suggest explanations for what they mean – but unless you have a good understanding of the specific context (culture, politics, biology etc.) then “common sense” assumptions and explanations might well be inaccurate or just plain wrong.
  9. Interpretation is not free of conscious or unconscious biases. People often look to data to find confirmation of their existing beliefs, rather than impartially considering all possible explanations – the famous “confirmation bias”.
  10. Data can be great for persuading people – but you have to be good at writing and talking about it too, both to explain it accurately and in plain English, and to use it persuasively. Good data with a poor explanation – especially for audiences who are not data literate – is a poor persuader.
  11. Sometimes you have to take a position if you want to take action even when you don’t have all the facts: Those who understand statistics and produce them are often rightfully cautious about how they are explained and interpreted for some of the reasons above. But taken at face value this can lead to paralysis. To use data to take action you need to strike a balance between seeking the most complete and reliable information, and taking timely and politically pragmatic action with incomplete data.
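To make the point in item 5 concrete, here is a tiny worked example (with made-up numbers) of reporting an estimate together with its standard error:

```python
import math

# Made-up survey responses: household sizes from a simple random sample.
sample = [3, 5, 4, 6, 2, 4, 5, 3, 4, 7, 2, 5]

n = len(sample)
mean = sum(sample) / n
# Sample variance (n - 1 in the denominator) and the standard error of the mean.
variance = sum((x - mean) ** 2 for x in sample) / (n - 1)
std_error = math.sqrt(variance / n)

print(f"estimate: {mean:.2f} persons per household")
print(f"standard error: {std_error:.2f}")
# A rough 95% confidence interval: estimate +/- 1.96 standard errors.
print(f"~95% interval: {mean - 1.96 * std_error:.2f} to {mean + 1.96 * std_error:.2f}")
```

The single point estimate sounds authoritative; the standard error is what tells you how seriously to take it.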
You will notice that some of these, especially the last three, might be seen to be in contradiction with each other. Statisticians and policy makers often find it difficult to see eye to eye on the appropriate use of data in decision-making. There is a trade-off between knowing enough and taking action – and also a reality that data is not the only thing that can and should factor into decision-making, both due to the limitations of what data can currently do and the need to factor in other, less tangible knowledge such as experience, culture and politics. But if policy makers can become more attuned to data, understand and respect it more, and know how to use it more effectively, and if statisticians can better explain what they do and what their research means, and acknowledge and work with the needs of policy makers, then there is a lot of potential for decision-making to be neither data driven nor data ignorant – but data informed.

Written by Ian Thorpe

October 3, 2011 at 2:04 pm

Posted in Uncategorized
