I am currently working on two very different advisory projects: one with a government, looking at societal knowledge capacity building, and one with a membership-driven organisation, looking at processes that are ultimately driven by a need to sustain and increase membership over the coming years. We are using the same basic model with both and, though they are very different in almost every way possible, they share a common golden thread: they will start and be sustained by conversations. So, for my 100th blog post, I want to reflect on this golden thread and my thinking from the past year…
For better or worse, I am a systems thinker. I believe that we rely on good models to navigate the complex internal and external environments that organisations find themselves in; environments created by our actions and the actions of others. I believe that even within complex environments there are disturbances (conditions) that exist, which, if not accounted for, will cause failure within the system – read my blogs and articles on the importance of HR for an example of what I mean here.
I believe that the system has to respond to variety in the environment with variety in the system design; that means we also have to weigh the cost of not acknowledging distortions within the system that require our response. This can be analysed through Failure Mode and Effects Analysis (FMEA); dissatisfaction with KM, anyone?
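To make the FMEA idea concrete, here is a minimal sketch (not from the original post; the failure modes and scores below are hypothetical illustrations). FMEA scores each failure mode for severity, occurrence and detectability, and the product, the Risk Priority Number (RPN), ranks what to address first:

```python
# Minimal FMEA sketch: rank failure modes by Risk Priority Number (RPN).
# RPN = severity x occurrence x detection, each scored 1 (low) to 10 (high).
# The failure modes and scores below are hypothetical illustrations.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number: the higher, the sooner it needs attention."""
    return severity * occurrence * detection

failure_modes = [
    ("No HR strategy aligned to knowledge needs", 9, 6, 7),
    ("Staff do not share what they know",         7, 8, 8),
    ("KM tool surfaces the wrong information",    6, 5, 4),
]

# Rank the failure modes, highest RPN first.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):4d}  {name}")
```

Even a back-of-an-envelope version like this forces the question the post raises: what does it cost us to leave a known distortion in the system unaddressed?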
I’ll use HR as an example again: we know that organisations transact in a Knowledge Economy; we know that this is driven by innovation (products or services); we know that people are key to that process. Therefore, if we do not acknowledge HR strategy, policy and processes within the system, we have to accept that the system is open to failure, and we should ask how severe that failure will be.
One of the most interesting ongoing debates I have engaged in over the last two years has been my opposition to the techno-centric view of KM; the view that KM should be exclusively about the collection, collation and presentation of knowledge resources to inform the decision-making process. We are sold on the idea that KM can enable the decision-making process, governed by algorithms that determine the media presented to us (see variety amplification and also the ‘availability heuristic‘). Software decides what we see, making us reliant on the skills of the designers and programmers to ensure that we receive the right information; it could be argued that we are being tightly coupled to a centralised view of the norm. There are lessons from history here: centralised decision-making was attempted in Chile (Project Cybersyn) by Stafford Beer in the 1970s; he failed and retired to Wales. Some would say that he couldn’t finish the project because the government was overthrown, but in reality his grand design for centralised decision-making was flawed. The problem with centralised decision-making is that it attempts to control deviation from the norm and all too often misses the local intelligence (variation) that is crucial to the decision-making process. Some will say that it is about distributed decision-making and getting the right information to the person at the coal face to enable the best decision based on the best available intelligence; I would love to know how many organisations actually enable that level of distributed decision-making. In my experience it is usually more about centralised control. The information has to be ‘sorted’ or prioritised in order to suppress information overload (variety attenuation), which takes us back to our dependence on the software that serves us and the amplification of the right information.
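As a hedged sketch of what variety attenuation looks like inside software (the scoring rule and weights here are my own made-up assumptions, not anything from a real system), a designer-chosen ranking function decides which items a person actually sees; the weights, not the reader, encode what counts as relevant:

```python
# Variety attenuation sketch: a designer-chosen scoring rule filters a
# stream of items down to the few a user sees. Weights are hypothetical;
# the point is that they, not the reader, decide what is "relevant".

from typing import NamedTuple

class Item(NamedTuple):
    title: str
    recency: float     # 0..1, newer is higher
    popularity: float  # 0..1, more clicks is higher

def score(item: Item, w_recency: float = 0.7, w_popularity: float = 0.3) -> float:
    # The designer's weights encode a centralised view of the norm.
    return w_recency * item.recency + w_popularity * item.popularity

def attenuate(items: list[Item], top_n: int = 2) -> list[Item]:
    """Suppress information overload by showing only the top-scored items."""
    return sorted(items, key=score, reverse=True)[:top_n]

feed = [
    Item("Local field report", recency=0.2, popularity=0.1),
    Item("HQ newsletter",      recency=0.9, popularity=0.8),
    Item("Trending memo",      recency=0.8, popularity=0.9),
]

visible = attenuate(feed)
```

Run this and the quiet, unpopular “Local field report” is exactly what gets cut; the local intelligence the post warns about is what a recency-and-popularity filter is blind to.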
Then the real problem emerges; the system is only as good as the information available within the system…and surprise, surprise, we are back to people!
People, complexity and decision-making… I’ve heard the argument that 9/11 could have been prevented; this was widely discussed in Taleb’s book, ‘The Black Swan’. If only we had planned for the potential of the outlier; in one example, we might have removed cutlery from aircraft that could potentially have been used as a weapon in a hijacking. Now, in hindsight, what do we do? Remove cutlery from the system and, in doing so, lower the risk of a similar event. The problem is that we don’t seem to respond to all the anomalies within the system, even when we know the potential consequences. For example, I recently flew out of Orlando airport, where Outback Steakhouse (a restaurant in the departure area) provides you with an ineffective plastic knife and fork to eat your steak with (I broke the knife twice!). No problem, we’re lowering the risk in the system; lessons learned and all that. Except, if you fly business class, which I’m sure no respectable terrorist would ever think to do, you are provided with metal cutlery on board the aircraft. Obviously, people who pay for a business class ticket are far too civilised to think about hijacking a plane!
Rant over and back to the point of the blog…
Software solutions require people to input what they know into the system and, as such, people exist as a barrier to this ‘ideal’ when it comes to decision-making processes; pure and simple, people decide when and if they will share what they know. Ultimately, decisions are all too often influenced by the ‘affect heuristic’: how do I feel, do I like it or not? Evidence goes out of the window and the decision is taken on a feeling.
Where am I going, I hear you ask… People ARE the system; they don’t exist outside the system, they are the lifeblood of the system, and connecting people is core to the KM process. In both the projects I mentioned at the outset of this blog, we are working to get the various parts of the system talking to each other; the operative word being talking. The solutions both organisations seek already exist; they just don’t know it because the system is fogged. Simple conversations defog the environment, existing component solutions that can then be adapted for the whole become apparent, and organisations save money. Am I oversimplifying a very complex process? Perhaps, but ultimately, in my experience, every KM project we have been involved in has relied on the power of conversation. And you know something, I would wager that the initial conversation, the crucial decision-making point of the project, was driven by the affect heuristic; evidence is often secondary at this point. Do you like what is being proposed to you and, even more so sometimes, do you like the person who is ‘selling’ the proposition?
Can it be that the success or failure of what we do is actually driven by a single feeling in a single moment of time? Do you think we scenario plan the impact of that decision at that time? We make fundamental decisions every day based on a gut feeling of like or dislike; we are human, we are fallible, and we have to ask whether that will ever change.
Reflecting on the past eighteen months, the keystone of what we do is conversation, whether ‘selling’ solutions or passing on our insight into the complexity of this phenomenon we call KM. The heart of what we do, how we think, is captured in the KM M-Model, but, when all is said and done, the keystone of any solution we have offered has been built on defogging the system and enabling conversation.
Concluding this 100th post, I want to return to a question that often comes up; ‘how do I sell KM?’ Well, it starts by understanding the needs of the organisation, the needs of the individual and the processes that bind the two together – we built the M-Model to help stimulate that thinking. The rest, well that comes down to your ability to sell the challenge and the solution; and for that you need powerful conversations.
They don’t solve everything, but they certainly help!