Complexity and Knowledge Management Navigators…
Sorry for the slow blog posting over the last week…just back from meetings in Los Angeles; one quick note, Venice Beach…surreal! Back to the blog, which is a follow-on to ‘Why can’t you sell KM?‘
I love conversations on how to calculate the value of KM projects. I've lost count of the number of times someone has told me their story of how they measured KM, or sold KM on the back of that measurement.
I'm going to be honest: the vast majority of these conversations leave me feeling unsatisfied, even worried. The stories are told by good people with honourable intentions, but they are fooling themselves and the people around them. Some conversations have even left me pondering the immortal question posed by Ben Kenobi: "Who is the greater fool, the fool or the fool who follows him?"
A good example to highlight the points I am about to make was a discussion concerning the measurement of an online community of practice (CoP). This CoP had been built as 'a good idea' (their words, not mine), and its benefits were being sold to me via measurement tools that calculated the value of community interaction in retrospect: think of a problem being presented and a solution being offered by someone in the community, with the measurement being the energy and resource savings arising from the knowledge surfaced, shared and applied via the community. This data was then being used to 'sell' the roll-out of three more Communities of Practice across the organisation.
This, for me, had the potential to be a beautiful mind trick, one potentially exposed by a simple question: "How confident are you in this data?" The silence, followed by the words "What do you mean?", spoke volumes and, ruling out issues with discourse, should in my opinion have thrown investment in the expansion plans into question. This is 'post hoc ergo propter hoc' (after this, therefore because of this), the classic problem with inductive reasoning: "I have data that conforms to my belief, because it shows benefits after I implemented a CoP."
This should serve as a warning to Knowledge Managers looking to bring credibility to the field. It is bad science, bad management and just plain wrong to fit the cause to the data in retrospect and believe it to be true without considering alternatives. What is the probability that the meeting between two people in a CoP was actually down to a cold snap during the month of July (the time my imagined exchange takes place), which stopped the person who asked the question from taking 'sunshine liberty' that day? Perhaps there is enough evidence in the CoP output data to suggest that the CoP is more productive on cold days, and therefore that CoPs are of most benefit in cold climates. I know, I'll roll it out across our Scandinavian operations! Or was it all just a moment of randomness…the mind boggles.
Too often managers predict that a system will work based on personal bias. They design and build said system, based on that personal bias. They then collect the data to measure the success of said system, again based on that personal bias, and use the data to validate their prediction.
The CoP prediction, as an example, is based on interactions in a multivariate, non-linear, complex system. This means that the CoP manager will need to conduct an analysis that examines their 'observed' outcome against all other possible causalities, both in isolation and in conjunction with one another. Consideration should be given to the number of possible causes of the observed outcome, and to the number of times the observed outcome occurred against the number of possible alternatives: how confident (in percentage terms) are you that your prediction, your observation, is what is causing change within the data? We see confidence levels a lot in the data presented during political polling at election time. A sample is taken and compared against the general population. The size of the sample determines the margin of error (the number in small print, typically around +/- 3% for a poll of a thousand people) at a chosen confidence level (usually 95%, which can be taken as high confidence). As a rule of thumb, be wary of anything reported at under 90% confidence, or with a margin of error wider than +/- 10%.
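The polling arithmetic above can be sketched in a few lines. A minimal illustration, using the standard normal approximation for a proportion (the sample sizes below are invented for the example, not drawn from any real poll):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a proportion p estimated from a
    simple random sample of size n, at ~95% confidence (z = 1.96).
    Uses the normal approximation to the binomial; p = 0.5 gives the
    worst case (widest interval)."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical political poll of ~1,000 respondents:
print(round(margin_of_error(1000) * 100, 1))  # ~3.1 percentage points

# Shrinking the sample widens the interval:
print(round(margin_of_error(100) * 100, 1))   # ~9.8 percentage points
```

Note that it is the margin of error, not the confidence level, that shrinks as the sample grows; the confidence level is chosen in advance.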
Here is some food for thought:
…What is the probability that the people who connected via the CoP would have found each other anyway (e.g. what is the size of their respective personal networks, were they already in each other's networks, were they visible to each other outside of the CoP)?
…Did they find each other quicker as a result of the CoP, and how do you know?
…Would someone outside of the CoP have been able to provide the same answer quicker (just consider the 1-9-90 rule)?
…Was the answer actually the 'best-fit' answer, and how do we know?
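The first of these questions can at least be given a crude baseline with simple arithmetic. A hedged sketch (all numbers invented for illustration): if two people's personal networks are modelled as independent random samples of the organisation, the expected overlap of those networks gives a baseline chance that they were already connected, CoP or no CoP:

```python
def expected_overlap(network_a, network_b, population):
    """Expected number of shared contacts between two people whose
    personal networks are modelled as independent random samples of
    the organisation's population (a deliberately naive baseline)."""
    return network_a * network_b / population

# Invented example: two staff in a 10,000-person organisation,
# each with a personal network of ~150 (Dunbar-sized) contacts.
print(expected_overlap(150, 150, 10_000))  # 2.25 shared contacts
```

Even on this naive model the pair sit, on average, one introduction apart, so attributing their connection wholly to the CoP is a stronger claim than it first appears.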
The challenge of induction, inferring causality from existing data, is a particular problem, and one plaguing 'big data' at the moment. Managers might build arguments for pragmatic or social validity (acceptance by others across the general KM population that points towards transferability), but the argument is only as good as the underlying methods and supporting data/information.
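One practical way to stress-test an inductive claim of this kind is a permutation test: shuffle the 'before CoP' / 'after CoP' labels many times and ask how often chance alone produces an improvement as large as the one observed. A minimal one-sided sketch, with entirely invented problem-resolution times:

```python
import random

def permutation_p_value(before, after, trials=10_000, seed=42):
    """Fraction of random label shufflings producing a mean reduction
    at least as large as the observed one (one-sided test)."""
    observed = sum(before) / len(before) - sum(after) / len(after)
    pooled = before + after
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        a, b = pooled[:len(before)], pooled[len(before):]
        if sum(a) / len(a) - sum(b) / len(b) >= observed:
            hits += 1
    return hits / trials

# Invented resolution times (days) before and after launching a CoP:
before = [12, 9, 15, 11, 14, 10, 13, 16]
after = [10, 8, 14, 12, 9, 11, 13, 7]
print(permutation_p_value(before, after))
```

A large p-value here would mean the apparent 'improvement' sits comfortably within what random relabelling produces, i.e. the data cannot distinguish the CoP's effect from noise.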
The bottom line: getting away with poor measurement practice is all well and good in the short term, but Knowledge Managers and consultants alike are all too often accused of selling 'buzzwords'. We have to move away from poor science, and from poor decision-making based on poor science, if we are to have long-term impact.
In the world of resilience, Jedi mind tricks such as these are a call to the Dark Side, and we all know how that turns out…at least until Episode VII in 2016.
Check out our next KM Course (Resilient Knowledge Management Practice) in Slough, August 12th – 16th 2013 (Stage 3, Advanced)
Check out ‘Operation Punctuated Equilibrium’ (Resilient Knowledge Management Practice) in Edinburgh, October 24th – 25th
www.punctuatedlearning.com (a real time simulation environment)