Be a loser

Last night I held an Olympic gold medal in my hands. Not mine, alas – it was one (Sydney, 2000) of the five won by Sir Steven Redgrave. I was attending an event where he was the guest speaker, and the medal was passed around the audience. It was a great speech, and it prompted the usual discussion and comparisons between business and sporting success.

There is one tip I wish would come up more often at these events: It’s good to lose.

Successful sports men and women are fiercely competitive – winning is the name of the game after all. But even if you are the champion of champions, the reality is that, over the course of your career, you will lose more often than you win. Like it or not (not, mostly) you have to become a good loser. A good loser sees every loss as a learning opportunity – it exposes a weakness that can be fixed or eliminated. Being a good loser does not mean that you have to like losing – quite the opposite is preferable – but it does remove the fear. If you fear losing (as most bad losers do) you will never take the risks needed to develop your strengths and get to the top of your game.

Whenever I see an organization, team or project that is risk-averse, it is usually because the people involved fear failure. I have seen examples where more money has been spent on risk analysis than was needed to just go ahead, install a piece of technology and see what happens. This is particularly true for knowledge and collaboration systems, and is why the most successful implementations tend to start small and spread organically. Planning a massive centralized knowledge system will be expensive and the value difficult to predict, making it high-risk in the eyes of the investors. Whilst the central team is still arguing over a standard taxonomy plan, business units can get simple collaborative tools up and running with minimal cost and effort.

Being risk-averse prevents success. It is taking risks, and sometimes losing, that generates a fresh brew of knowledge. And having a system (up and running, not on the whiteboard) to capture those insights will help turn them into successful actions.

I have only experienced one company that really behaves like a sports person and understands the benefits of losing – Microsoft. Love or hate the company, once a path has been chosen Microsoft is relentless at doing whatever it takes to be successful, and is not afraid to fail publicly en route.

Knowledge is personal

We interpret and act on the same information differently, depending on the context. This is the challenge we face when trying to record knowledge in written format for re-use – writing it down makes it static and knowledge is never static. Take the following simple and effective example, provided by Nick Stodin:

Take a bunch of vegetables, chop them up and throw them in a pile in the back yard. What do you have? Compost.

Take a bunch of vegetables, chop them up and drop them on your kitchen floor. What do you have? A mess. What do you do with it? Clean it up.

Take a bunch of vegetables, chop them up and put them into a wooden bowl. What do you have? A salad. What do you do with it? Eat it.

Data = ‘a bunch of vegetables’. Information is created when we make some judgements: vegetables on the floor should not be eaten; vegetables in a bowl can be eaten; vegetables that are not eaten can be used as compost (but first discard the packaging). Does that make it knowledge? No. Information-based decisions are consistent; knowledge-based decisions are variable – they change when presented with different contexts. For example, if you are starving you are probably less fussy about food that drops on the floor. If the vegetables are in a bowl with maggots crawling over them you might not want to eat them… then again, you might. Those who don’t want to eat maggots will each have a personal threshold at which hunger decides maggots don’t look so bad after all. The minute you try to record knowledge you make it static and impersonal. Hence it stops being knowledge and becomes information.
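
The contrast can be sketched in a few lines of code – a minimal illustration, with function names and thresholds entirely of my own invention, not from the original example. An information-based rule always returns the same answer for the same input; a knowledge-based decision also depends on personal context, such as how hungry you are:

```python
def information_decision(location: str) -> str:
    """Fixed rule: same input, same output, every time."""
    return {"bowl": "eat", "floor": "clean up", "yard": "compost"}[location]

def knowledge_decision(location: str, hunger: int, maggot_tolerance: int,
                       maggots: bool = False) -> str:
    """Context-dependent: the same vegetables get a different verdict
    depending on hunger and a personal maggot threshold (both made up here)."""
    if location == "yard":
        return "compost"
    if maggots and hunger < maggot_tolerance:
        return "don't eat"
    if location == "floor" and hunger < 8:  # the starving are less fussy
        return "clean up"
    return "eat"

# The information rule never changes its mind...
assert information_decision("floor") == information_decision("floor")
# ...but the knowledge-based decision shifts with context:
print(knowledge_decision("bowl", hunger=2, maggot_tolerance=5, maggots=True))  # don't eat
print(knowledge_decision("bowl", hunger=9, maggot_tolerance=5, maggots=True))  # eat
```

The moment the context parameters are fixed and written down, of course, the second function becomes just another static rule – which is exactly the point about recorded knowledge turning into information.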

Last week I was reading an article in NewScientist magazine* that highlighted this point with a great example:

A trolley train comes hurtling down the line, out of control. It is heading towards five people who are stuck on the track. If you do nothing they face certain death. But you have a choice: with the flick of a switch, you can divert the trolley down another line – a line on which only one person is stuck. What do you do?

The choice (judgement) is to save five people but lose one. In this context, ugly as the decision is, most people would agree with the information presented – they would flick the switch. (We are assuming all people involved are complete strangers unknown to you, of similar age, no bias etc.) Now let’s change the context:

This time you are standing on a footbridge overlooking the track. The trolley is coming. The five people are still stuck, but there’s no switch, no alternative route. All you’ve got is a hefty guy standing in front of you. If you push him onto the line, his bulk will be enough to stop the runaway trolley. You could sacrifice his life to save the others – one for five, the same as before. What do you do now?

Blimey (was my reaction when I first read it). The information is the same – five people are saved, one dies. Surely this is a straightforward rational decision, just like before? But it isn’t – the context is more personal. There is a world of difference between flicking a switch and physically shoving a human being to their death. How would you react? I can imagine all sorts of reasons entering my head to justify not pushing – it’s not my fault the trolley is coming, it’s not my fault the people are stuck on the track, it WILL be my fault that this person dies if I push him… Some people may find it easy to shove the guy; others may decide they couldn’t push him no matter what; others may initially recoil at the idea but then rationalise the options, say a few prayers, and do the deed… The decision is no longer so easy to make. The reaction is personal. Let’s throw another spanner in the works – one of the five on the track is your child. For those whose first reaction was ‘never, I couldn’t push someone to their death’, maybe that decision would be revisited in this new context…

Computers would not struggle with this scenario – they process information and the information is identical in both examples. But we don’t always want the rational choice, we want an appropriate solution that fits the circumstances as we interpret them. That’s why we rely on knowledge more than information. And knowledge is personal: easy to share, difficult to record.
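
A toy sketch makes the point concrete (the function and its inputs are my own framing, not from the article): reduce each scenario to the information it contains – lives saved versus lives lost – and a purely rational procedure literally cannot tell the two apart:

```python
def rational_choice(saved: int, lost: int) -> str:
    """Pure information processing: act whenever it saves more lives than it costs."""
    return "act" if saved > lost else "do nothing"

# Scenario 1: flick the switch – five saved, one dies.
# Scenario 2: push the man – five saved, one dies.
# Identical information, so the computer's answer is identical:
print(rational_choice(saved=5, lost=1))  # act
print(rational_choice(saved=5, lost=1))  # act
```

Everything that makes the second scenario agonising – the physical shove, the personal responsibility – never enters the function’s inputs, which is precisely why information alone is not knowledge.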

*Example taken from NewScientist magazine article: ‘A moral maze’ (subscription required to view full article)


Disaster in the Making

Good editorial in NewScientist (10th Sept 2005 print edition; online requires subscription), highlighting the dilemma posed by natural disasters:

…Terrorist attacks may or may not take place, but some natural disasters are inevitable. We don’t know when they will happen, but happen they will… There is a clear mismatch between forecasting natural disasters at some indeterminate time in the future and the short lifetime of local and national governments in modern democracies… The Asian tsunami and the disaster in New Orleans show clearly that the political processes for handling disaster prevention are failing badly.

One option is to measure the success of disaster prevention. How many times has the Netherlands’ system of dykes protected it from disastrous flooding? How many lives have they saved, and how much money? In November 2002, when a magnitude-7.9 earthquake struck Alaska close to the trans-Alaska oil pipeline, there was no leak and no environmental disaster, because the pipeline was built to withstand quakes. What value should we put on such a design?

The article goes on to mention that the Thames flood barrier has been raised 80 times in 23 years, thanks in part to the estimate that a serious flood in London could cost taxpayers £30 billion. So it is possible to measure success by what does not happen… but it’s not easy. Look at the Y2K issue – the fact that nothing happened led people to criticise the amount of money spent on it. Maybe too much money was spent, but the whole point of Y2K was to prevent anything from happening. Sometimes, we can be a difficult species to please.

That fear factor is a real challenge though, and one that leads to bad (or manipulative) decisions. Witnessing terrorist acts evokes more fear than being told that climate change is estimated to cause sea levels to rise by 0.8cm per year…