Complexity and Knowledge Management Navigators…
The following images and text are excerpts taken from Griffiths et al. (2010) 'The knowledge core: A new model to challenge the Knowledge Management field', International Journal of Knowledge and Systems Science, Vol. 1, No. 2, pp. 1-16 [published Aug/Sept 2010]
This blog is designed to provide a visualisation and overview of the model used in the case study discussed in my earlier blog.
Model construction overview
Carter et al. describe key principles for the construction of system boundaries: include only those elements or relationships that have an impact upon the process; include elements that are inherently controlled by the system or its user; and, equally, remove those elements that cannot be controlled by the system or user. Yolles suggests that this approach dissolves uncertainty, in that system boundaries should avoid cutting across processes, either including or excluding them from the system's whole. Carter et al. develop this position, stating that the approach removes uncertainty when examining the effect of elements upon the system. They also suggest that a clear description is needed, so that open, closed, or partially open/closed processes are apparent to the user (an open process being one that interacts with the environment, a closed process being one that is insulated from it). This would seem to be supported by Senge, who discusses the need for systems that are generative in nature. He suggests that these convey 'what causes the patterns of behaviour' (p. 53), which in turn allows the user to understand how changes to these patterns can produce different behaviours within the system. Senge promotes this approach over 'responsive processes' (those which examine patterns of behaviour) and 'reactive processes' (those which examine events). The model therefore sets out to demonstrate the 16 Critical Success Factors, broken down into 4 functions and 12 enablers as discussed in our first paper (Griffiths & Morse, 2009), along with an element of environmental interaction.
Meadows suggests that Systems Thinking weights attention towards the whole rather than towards myths or perceived major factors, since the latter can inhibit success through a failure to identify the limiting factor that has true influence over the process. Accordingly, the model does not take into account the frequency of findings discussed in our meta-analysis, as limiting factors would seem to be situationally embedded and cannot be represented within the blended Theory of Change Model (an approach applied in the main paper cited at the start of this blog).
The main paper suggests KM to be a system of processes that interacts with the environment to produce its whole. This interaction would seem to suggest that the system informs and is informed by the situated environment, and would appear to require representation within the flows of the model process. Leonard posits that knowledge needs to be maintained in order to be of value, and Markus suggests that knowledge reuse is important to the viability of knowledge as a value-creating resource. This suggests the need for a KM tool that is designed to create a loop as opposed to a linear chain. McElroy reinforces this, stating that KM is a complex open system, influenced by complexity and systems theory, which constantly interacts with its environment. Chowdhury links Bandura's Social Learning Theory to demonstrate that human behaviour develops in a 'continuous reciprocal interaction between cognitive, behavioural and environmental determinants' (p. 5). This seems to underpin the need for a loop, where the system both influences and is influenced by the environment through its actions, and is demonstrated in diagram 1 as the flow through and around the model in a cyclic relationship.
Handzic et al. conducted narrative research into current KM models and found many to be deficient in their use of double-loop feedback. Handzic et al. support the link between knowledge and learning (discussed in the main paper) and consequently view this omission as a critical flaw in the field. The need for a feedback loop is also discussed by Meadows, who suggests that where systems experience situated failure it can often be directly attributed to structural behavioural issues, and that a feedback loop is required if the model is to flex and overcome such failure. We also identified this, observing reflection or testing as one of our 16 CSFs (Griffiths & Morse, 2009). This also satisfies the need for a double-loop approach to modelling, as suggested by Argyris & Schon, where the governing variables and applied strategy are constantly challenged.
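The single-loop/double-loop distinction can be illustrated with a minimal sketch, not part of the model itself: single-loop learning adjusts the action against a fixed target, while double-loop learning also challenges the governing variable (the target) when outcomes persistently fall outside expectations. All function names, gains and thresholds below are hypothetical, chosen only to make the distinction concrete.

```python
# Illustrative sketch of single- vs double-loop feedback.
# Names and numbers are hypothetical, for illustration only.

def single_loop(action, outcome, target):
    """Adjust the action to close the gap with a fixed target."""
    return action + 0.5 * (target - outcome)

def double_loop(action, outcome, target, expected_range):
    """Also challenge the governing variable: if the outcome falls
    outside the expected range, revise the target itself before
    adjusting the action."""
    low, high = expected_range
    if not (low <= outcome <= high):
        target = (target + outcome) / 2  # revise the governing variable
    return single_loop(action, outcome, target), target
```

The point of the sketch is that a single-loop system can only ever converge on the target it was given, whereas the double-loop version can revise that target when the situated environment contradicts it.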
Feedback loops have been criticised for not providing an ongoing testing process, in which proposed solutions are fed back into the process and continuously tested for effectiveness against alternative solutions (Blackman et al.). Blackman et al. link their theory back to the work of Popper to suggest that double-loop thinking fails the falsifiability test, in that it identifies when a system works but fails to identify when it does not. However, the Theory of Variety Attenuation suggests that variety overload can break down the system (Schwaninger). It could also be said that solutions are effective until a flaw is identified through application, at which time an optimised solution should be implemented. This can be linked to value and context, as discussed in Griffiths & Morse (2009), where we cite the work of Hori et al. in overcoming issues such as variety overload through the following formula: Representational Context [Artefacts] + Conceptual Context [Existing in the mind] + Real-world Context [Situated Application] = Value.
Checkland suggests that defined arrows and boxes convey a certainty in the process which Soft Systems research, at the stage reached in this paper, is not able to offer. Checkland believes that visual representations of the proposed solution should reflect the volatility of the Action Research process. However, this research is attempting to move towards a paradigm that can be viewed as 'what really exists', in an attempt to overcome uncertainty in the field. The model is therefore represented at the point of the research conducted to date. This divergence from Checkland's approach to SSM would appear to be supported in the Logic Modelling space, where Theory of Change Models are represented with defined flows that reflect the certainty of the creator at that time (Knowlton & Phillips).
The Knowledge Core has been designed to demonstrate the interaction between the system and the environment. It has also been structured to demonstrate the interrelated support of the four main functions, which provide the parameters of the bounded whole. The enablers are shown to be interlinked but volatile, in that they are not stationary and will move according to the need of the function and the demand of the situated environment.
It is proposed that, in order for an organisation to create value, it must look at the whole: the bounded functions of 'Capturing & Storing', 'Sharing', 'Creating' and 'Applying'. From this position it would seem possible to enquire into the efficiency and effectiveness of each function through the engagement of the enablers.
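The bounded whole described above can be read as a cycle: knowledge passes through the four functions in turn, and what is applied re-enters the environment to inform the next pass. The sketch below is only a schematic of that reading; the four function names come from the model, but the pipeline mechanics are hypothetical.

```python
# Hypothetical sketch of the four bounded functions as a cyclic pipeline.
# Function names are from the model; the mechanics are illustrative only.

FUNCTIONS = ["Capturing & Storing", "Sharing", "Creating", "Applying"]

def km_cycle(knowledge, environment, rounds=1):
    """Pass knowledge through each function in turn; what is 'Applied'
    enters the environment, which in turn informs the next cycle."""
    for _ in range(rounds):
        for function in FUNCTIONS:
            knowledge = f"{function}({knowledge})"
        environment.append(knowledge)  # the system informs the environment
        knowledge = environment[-1]    # and the environment informs the system
    return knowledge, environment
```

The loop rather than a linear chain is the point: the output of 'Applying' is not an endpoint but the input to the next cycle, consistent with the cyclic flow shown in diagram 1.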
The K-Core tool has been developed from this model and includes a KM maturity model, a feedback tool and an enquiry tool, with interview questions and document analysis guidelines. The tool has been successfully used in consulting processes in Asia and Europe. A case study at Quintiles, a Clinical Research Organisation, will be made available in July. An anonymous case study in a chemical research MNC can be accessed through my previous blog.