Wikis in moderation

Had a great debate during a workshop last week. I was discussing how Web 2.0 technologies can be applied internally to improve knowledge sharing. Opinions (and expectations) about the use of tools such as blogs and wikis vary – from being rose-tintedly optimistic about their potential benefits through to being thoroughly pessimistic and convinced their use will be a disaster.

In this instance, the customer raised a valid and common concern – that inaccurate information could be posted on a wiki page or blog and become relied upon, leading to misinformation being distributed internally and (worse) externally. It’s a fair comment. Information tends to be sticky – once we learn something, we hold onto it until we learn the hard way that it is no longer applicable or true. See previous blog post – Sticky Information.

But it is a mistake (and hence the debate) to assume that, without wikis and blogs, misinformation doesn’t occur. It does, through email and conversation. The difference is in the discoverability (or lack of). Wikis and blogs are transparent – published on a web site for all to see. If somebody publishes inaccurate information, it can be quickly corrected for all to see. The same is not true for email and conversation. But such corrections rely on people to moderate the digital conversation.

Wikis are a great way to democratise the sharing of information and knowledge, but do not consider them a labour-saving device. The reverse is usually the case. The most successful wikis balance open editing with moderators who keep a check on the quality and accuracy of information being published.

When 2 Wiki

During this year I have been running a series of workshops for customers wanting to explore what SharePoint Server 2007 can and can’t do. As people continue to debate what Web 2.0 means and whether or not it matters, it has been interesting to see its arrival within business. Questions about wikis and blogs have cropped up on a regular basis and there has been plenty of confusion about their uses (let alone what RSS means). To try and help, I have put together a short article outlining the similarities and differences between four web-based methods for publishing content: web pages, portals, wikis and blogs. The common factor uniting all four methods is that nothing more than a web browser is required to create and publish content.

Click Here to download the full article (2MB PDF). A short description is provided below:

Please note that this post (and article) is not about defining precisely what is or isn’t a wiki or debating what software you should use. Microsoft Office SharePoint Server 2007 has been used to provide screenshots demonstrating each method. The article was written for customers who are in the process of evaluating or implementing SharePoint and, hopefully, its contents will be useful for others too.

Web Page

Traditional web content management systems were the first tools to make publishing content easy by separating the design of web pages from authoring and publishing of content within those pages. A web designer controls the design and management of web pages. Authors can edit the pages and enter content within pre-determined placeholders. The screenshot below shows a page being edited:

Web pages managed in this way make it easy to create and publish content in a consistent and controlled manner (the tools usually include workflow and version history for approving pages prior to publishing). Great for information that warrants being published in such a way, such as monthly newsletters or standard intranet pages, but not so good if you want flexibility. Changes to page design and layout require technical expertise and can be slow to implement.


Portal

Portal pages look like traditional web pages but behave very differently. Content is placed within web parts (a.k.a. portlets, iviews, gadgets and various other names) that can be added to and removed from the page. They can be connected together (e.g. a ‘contacts’ web part can be used to filter an ‘orders’ web part to display orders for a given customer) and they can be moved around the page.

Portal pages are the least intuitive to add content to because they often aggregate information from multiple different sources. But they provide the easiest and most effective way to build topic-based pages – the ‘one stop shop’ view on related information. They also encourage the collection and re-use of content and avoid the duplication that can occur with the other three methods.


Wiki

Wikis are one of the new tools arriving on the scene. They are like a web page that contains just one single placeholder for all content. The author enters content and controls its formatting and layout. The publishing process is simple – you click OK. Every time a page is edited, a new version is automatically created. You can review the entire version history for a wiki page, see what content has been added/deleted and, if required, restore a previous version, so you don’t have to worry if you make a mess of the page. The screenshot below shows a wiki page in edit mode:

Wikis include a new convention – the use of square brackets [[ ]] to link to other wiki pages within the site. In the screenshot above, you can see four such links. This makes it very easy for authors to create navigation between pages – if you know the name of the page, just surround it in square brackets.
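As a rough illustration (not SharePoint’s actual implementation), the [[ ]] convention can be sketched in a few lines of Python – page names in double square brackets become links, and links to pages that don’t yet exist can be given a different CSS class so the renderer can style them (for example with a dotted underline). The URL scheme and class names here are invented:

```python
import re

# The [[ ]] convention: double square brackets around a page name
# create a link to that page within the wiki site.
WIKI_LINK = re.compile(r"\[\[([^\]]+)\]\]")

def render_wiki_links(text, existing_pages):
    """Replace [[Page Name]] markers with HTML links.

    Links to pages that do not yet exist get a 'missing' CSS class,
    so the renderer can style them differently.
    """
    def to_link(match):
        name = match.group(1)
        css = "existing" if name in existing_pages else "missing"
        href = "/wiki/" + name.replace(" ", "-")
        return '<a class="%s" href="%s">%s</a>' % (css, href, name)
    return WIKI_LINK.sub(to_link, text)
```

Calling `render_wiki_links("See [[New Ideas]]", {"Home"})` would mark the New Ideas link as missing, since that page has not been created yet.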

The screenshot below shows the same page in published mode (i.e. after the author clicks OK). One of the links – New Ideas – has a dotted underline, indicating it does not yet exist:

The first person to click on a link to a page that does not yet exist will be prompted to create it (provided they have editing permission) and can immediately start entering content:

Wikis are proving ideal for situations that require the rapid entry and update of content. Leaving full control to authors encourages groups to share information and opinions, and the simplicity of a wiki page makes publishing a quick and easy process. The downside is that such flexibility increases the manual overhead required to maintain the pages. That is not a problem when the workload is distributed, but if tight authoring control is desired you will probably prefer the traditional web page for publishing content. If you do decide to step out of the web page comfort zone, a wiki site has the potential to become your most effective knowledge management system to date…


Blog

Blogs have been a huge hit on the Internet – Technorati claims to be tracking over 80 million of them. But their benefits within business have not been clear.

A blog post in edit mode looks similar to a wiki page – it has a title and a single placeholder for entering content. The author controls the design and layout of the content within the placeholder. When a blog post is published, it includes a permalink (a permanent address for easy bookmarking – when you go to a blog site, the page automatically displays the latest published posts, so each post needs its own stable link) and a link for leaving comments.

Blogs carry the identity of their authors, which is probably why individually-authored blogs tend to be more successful than group-authored ones. They have two key beneficial uses within business. 1) They are great for master-apprentice mentoring. If you are learning a new skill or interested in a particular subject, being able to subscribe to an expert’s blog helps keep you up to date with their latest thoughts and opinions, and you can go through the archive of posts to catch up on previous snippets worth knowing. 2) They are great for sharing news as it happens, as opposed to publishing it on a monthly basis – posting news (i.e. ‘new information’) as a blog enables people to provide feedback using the Comments feature and subscribe to the blog to keep track of the latest announcements and commentary.

Related blog posts:


  • Download this article (‘Out of the box – Web of Knowledge’)

Technorati tags: web 2.0; wikis; blogs; sharepoint; enterprise 2.0

Just Enough Taxonomy

On Microsoft’s Channel 9 network, there is an interesting podcast called ‘Just Enough Architecture’, where the interviewee provides some good recommendations about the balance between how much architecture you need versus just getting on and writing software that does something useful.

The same debate could be applied to taxonomy, specifically the use of metadata properties to classify content.

For some reason, most companies who decide they want to improve how content is classified seem to want extreme taxonomy, swinging from not enough taxonomy to too much. The mantra may sound somewhat familiar:

One taxonomy to rule them all, one taxonomy to find them, one taxonomy to bring them all and, in the records management store, define them

Often starting with none at all (i.e. content is organised informally and inconsistently using folders), the desire is to create a single corporate taxonomy to classify everything (using a hierarchical structure of metadata terms). An inordinate amount of time is then spent defining and agreeing the perfect taxonomy (for some reason, many seem to settle on about 10,000 terms). Several months later, heads are being scratched as people try to figure out just how they are going to implement the taxonomy. Do they classify existing content or only apply it to new stuff? Do they have specific roles dedicated to classifying the content, rely on the content owners to do it, or look at automated classification tools? Do they put rules in place to force people to classify content and store it in specific locations that are ‘taxonomy-aware’? How do they prevent people bypassing the system – those who figure they can still get their work done by switching to a wiki or a Groove workspace or a MySpace site or a Twitter conversation? How do they validate the taxonomy and check that people are classifying correctly? What do they do about people who aren’t classifying correctly – who don’t understand the hierarchy or attach different meanings to the terms in use? What started out as a simple idea to improve the findability of information becomes a huge burden to maintain with questionable benefits, given there are so many opportunities for classification to go wrong.

This dilemma reveals two flaws that make implementing a taxonomy so difficult. The first is the desire to treat taxonomy as a discrete project rather than an organic one. Collaboration and knowledge management projects often share this fate. Making taxonomy a discrete project usually means tackling it all in one go from a technology perspective and then handing it over to the business to run ‘as is’ for ever more (i.e. until the next technology upgrade). Such projects end up looking like that old cliché – attempting to eat an elephant whole. The project team tries to create a perfect design that will deliver all identified requirements (and the business, knowing this could be their one chance for improved tools, delivers a loooooong list of requirements), implements a solution and then moves on to the next project. As the solution is used, the business finds flaws in their requirements or discovers new ways of working enabled by the technology, but it is too late to get the solution changed. The project is closed, the budget spent.

An alternative approach is to treat taxonomy as an organic project or, for those who prefer corporate-speak, a continuous-improvement programme. Instead of planning to create and deploy the perfect taxonomy, concentrate on ‘just enough taxonomy’. A good starting point is to find out why taxonomy is needed in the first place. If it is to make it easier for people to find information, first document the specific problems being experienced. Solve those problems as simply as possible, test them and gather feedback. If successful, people will raise the bar on what they consider good findability, generating new demands waiting for IT to solve, and so the cycle continues.

The following is a simple example using a fictitious company.

Current situation: Most information is stored in folders on file shares and shared via email. There is an intranet that is primarily static content published by a few authors. The IT department has been authorised to deploy Microsoft Office SharePoint Server 2007 (MOSS).

General problem: Nobody can find what they are looking for (resist temptation to sing U2 song at this moment…)

Specific problems: Difficult to find information from recently completed projects that could be re-used in future projects; Difficult to differentiate between high quality re-usable project information versus low quality or irrelevant project information; Difficult to find all available documents for a specific customer (contracts, internal notes, project files)

Possible solution: Deploy a search engine to index all file folders and the intranet. Move all project information to a central location. Within the search engine, create a scope (or collection) for the project information location. Users will then be able to perform search queries that will return only project information within the results. Using ‘date modified’ as the sorting order will locate information from the most recent projects. Create a central location for storing top-rated ‘best practice’ project information. Set-up a team of subject matter experts to work with project teams and promote documents as ‘best practice’. The Best Practices store can be given high visibility throughout the intranet and promoted as high relevance for search queries.
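As a toy illustration of the scoped, date-sorted retrieval just described – this is plain Python over a file tree, not MOSS’s search service, and the folder layout is made up – the idea is simply: restrict the query to the central project location, then sort matches by ‘date modified’, newest first:

```python
import os

# Illustrative sketch (not MOSS's actual search API): scope a query
# to one central project location and sort results by last-modified
# date, so information from the most recent projects surfaces first.
def search_project_files(root, term):
    """Return files under `root` whose names contain `term`, newest first."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if term.lower() in name.lower():
                path = os.path.join(dirpath, name)
                hits.append((os.path.getmtime(path), path))
    hits.sort(reverse=True)  # 'date modified' as the sorting order
    return [path for _mtime, path in hits]
```

The same shape applies whether the scope is a file share, a search-engine collection, or a SharePoint document library: filter by location first, then rank by recency.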

Now that is a very brief answer outlining one possible solution. But the solution is relatively simple to implement and should offer immediate (and measurable) improvements based on feedback regarding the problems people are experiencing. There were two red herrings in the requirements that could have resulted in a very different, more complex, solution: 1. That MOSS was going to be the technology; and 2. The need to find documents for a specific customer. When you have chosen a technology, there is always the temptation to widen the project scope. MOSS has all sorts of features that can help improve information management and the starting point is often to replace an old crusty static intranet. But the highlighted problems did not mention any concerns about the intranet. That’s not to say those concerns do not exist, but they are a different problem and not the priority for this project. The second red herring is a classic. When people want to be able to find information based on certain parameters, such as all documents connected to a specific customer, there is the temptation to implement a corporate-wide taxonomy and start classifying all content, starting with the metadata property ‘customer name’. But documents about a specific customer will likely contain the customer’s name. In this scenario, the simplest solution is to create a central index and provide the ability for users to search for documents containing a given customer’s name. If that fails to improve the situation then you may need to consider more drastic measures.

Rejecting the large-scale information management project in favour of small chunks of continuous ‘just enough’ improvement is not an easy approach to take. The idea of having a centralised, classified and managed store of content, where you can look up information based on any parameter and receive perfect results, continues to be an attractive one with lots of benefits to the business – both value-oriented (i.e. helping people discover information to do their job) and cost-oriented (i.e. managing what people do with information – compliance checks and the like). But a perfectly classified store of content is a utopia. Trying to achieve it can result in creating systems that are harder to use and difficult to maintain when the goal is supposed to be to make them easier.

I mentioned that the common approach to implementing taxonomy has two flaws. The first has been discussed here – how to create just enough taxonomy. The second flaw is the desire to create a single universal taxonomy that can be applied to everything. I’ll tackle that challenge in a separate post (a.k.a. this post is already too long…)

Reference: Just Enough Architecture (MSDN Channel 9). Highly recommended. There are plenty of similarities between software architecture and information architecture (of which taxonomy is a subset). Don’t be put off by the techie speak; it debates the pros and cons of formal processes and informal uses, and includes some great non-technical examples of how to find a balance.


Technorati tags: Taxonomy, Tagging, Information Architecture

When IT doesn’t matter

I’m currently reading ‘The Labyrinths of Information: Challenging the Wisdom of Systems’ by Claudio Ciborra. I haven’t gotten very far through the book yet; it is written in an academic tone, which always slows me down. But early on, I stumbled across a very interesting point of view.

IT architecture is a popular topic right now. You can get enterprise architects, software architects, infrastructure architects, information architects… the list goes on. One of the focus areas for architecture is the adoption of standards and consistent methods for the design, development and deployment of IT systems. All sounds very sensible and measurable.

But Claudio makes a simple observation that suggests such architecture doesn’t matter, in that it does not help an organisation to become successful. Instead, architecture is a simple necessity of doing business digitally. This argument concurs with Nicholas Carr’s controversial article (and subsequent book) ‘IT doesn’t matter’.

A sample from the book: (note – SIS refers to strategic information systems)

“…market analysis of and the identification of SIS applications are research and consultancy services that can be purchased. They are carried out according to common frameworks, use standard data sources, and, if performed professionally, will reach similar results and recommend similar applications to similar firms.”

So what do you need to do to become an innovative company? Claudio suggests:

“…To avoid easy imitation, the quest for a strategic application must be based on such intangible, and even opaque, areas as organisational culture. The investigation and enactment of unique sources of practice, know-how, and culture at firm and industry level can be a source of sustained advantage…

See, I have been telling those techie-oriented IT folk for years, collaboration and knowledge sharing are far more important than your boring transaction-based systems 🙂

…Developing an SIS is much closer to prototyping and the deployment of end-user’s ingenuity than has so far been appreciated: most strategic applications have emerged out of plain hacking. The capacity to integrate unique ideas and practical design solutions at the end-user level turns out to be more important than the adoption of structured approaches to systems development…”

Sounds like an argument in favour of mash-ups and wikis to me. See also: Let’s make SharePoint dirty

Learning about versus Learning to be

Interesting article on CNET – A new crop of kids: Generation We – talking about how the latest generations are growing up adept and comfortable with technology from a very early age. Some snippets:

Gabriel, an intensely curious kid who’s about to turn 8, has been fascinated by everything from skateboarding and basketball to statistics about world extremes…. He likes to look up information about the subjects on Wikipedia with his mom and then turn to YouTube for short video clips… If he hears a likeable song in a YouTube video, he might visit Apple’s iTunes store to download the music, too.

“Driving home we’ll see a bird,” Kim said, “and then go to Wikipedia (at home) and look it up. Then once we’re online, he’ll say, ‘How about we go to YouTube?'”

Naturally, the world of business and media is fascinated with understanding how to market and sell to this new generation.

I’m interested in a different angle – how will their ability to learn be influenced and affected by these newer Internet technologies, and what will the effect be on their future?

It’s easy to assume that having the Internet is going to make our children a lot smarter a lot sooner… resources that were previously only accessible to the privileged few are now available to all, instantly. But is that all we need?

In the book “The Social Life of Information” by John Seely Brown and Paul Duguid, the authors make a very interesting comment:

The web has made learning about easier than ever. But learning to be requires the ability to engage in the practice in question.

…and that could be the new challenge. There will be no shortage of people able to demonstrate how much they know about all sorts of subjects. But how many people will actually be able to practice what they ‘know’? At the moment, there are still no shortcuts to becoming skilled in practice – determination, patience and effort continue to be essential ingredients.

If we become used to having instant answers to questions, will it affect our stamina for the deeper level of learning required to move from knowing about something to actually being something?

An effect from moving away from agriculture and manual labour has been that, put simply, most people aren’t as fit as they would have been 200 years ago.

Will the effect of not requiring effort to learn about subjects send our brains in the same direction as our stomachs? I hope not.

Modernising intranets

Just over a week ago, Microsoft released to manufacturing their latest range of products and services that fall under the Office brand, including a heftily revamped Microsoft Office SharePoint Server 2007 (MOSS). MOSS brings together and upgrades two former products – SharePoint Portal Server 2003 and Content Management Server 2002. For an overview of the history behind MOSS and an introduction to some of the new features, please check out the following blog posts:

This release comes at a good time. Information and communications technology has advanced rapidly over the past five years, but most of the benefits seem to have been realised in the consumer world. Internal business systems involving people have been much slower to evolve.

The central internal information system for most organisations is the intranet. And it seems that many organisations are coming round to the idea that the intranet is long overdue for a refresh. But whilst the IT department is ready to install new technology, there seems to be little focus on leveraging new features that mirror some of the biggest successes on the Internet. Instead, the requirements list is based on making some incremental improvements to the existing system – more structured document management, some extra workflow for web content publishing, a little more personalisation within the classic portal interface, some improved search results would be nice….

When I ask if people are familiar with internet services such as eBay, Flickr, YouTube and MySpace, the typical response is “oh yes, we don’t let our people access those sites from the corporate network” or “we don’t want anything like that – how on earth would we manage it?” or “our users aren’t interested in technology, I don’t think they have ever used those sorts of sites”… I usually start sighing when that last excuse is rolled out. It never ceases to amaze me how much and how often people underestimate each other.

Refreshing internal systems provides an opportunity to introduce new ways of working. You don’t have to be bleeding edge – let the Internet shake out what works and what doesn’t. And introducing new features doesn’t have to be expensive. Quite often, the opposite is the case. The best way to try out ideas is to keep systems simple – minimal up front design and see how ideas grow as people start to experiment.

A good example of this approach can be found within the BBC, as documented by David Weinberger: The BBC’s low-tech KM. And the man behind the process – Euan Semple – is now a free agent and advising other companies on how to adopt his approach. Don’t believe me? Book a session with Euan.

Other organisations are also beginning to wake up to the idea of bringing Internet trends to internal systems, as blogged by Jon Husband over on Wirearchy: Enterprise 2.0 – on its way to a workplace near you?

If you are going down the Microsoft route, MOSS includes a variety of features modelled around some of the best successes on the Internet – blogging, wikis, news feeds, social networking tools, easy publishing to team and individual sites, integrated instant messaging are just a few for starters. Why not give them a try instead of sticking with the traditional ways of working?

When helping people design new information systems, I always give out the same advice: “be careful what you wish for”. The more managed the environment, the more effort will be required to contribute to it and the less it will be used. The more rational the design, the less it will represent reality and the less it will be used (hint: people behave rationally when asked what they want – “I want all documents categorised so that I can search based on certain properties” – and behave irrationally when it comes to actually doing it – “yuk, all these dropdown lists, I’ll just use the default setting for my documents”). These days the effect is magnified. Whilst you set up immigration procedures to prevent information from entering your intranet without adhering to strict management rules, your customers will just go search the Internet and find out more about your organisation (and/or your competitors) than your employees know. Is that a good result?

Technorati Tags: SharePoint; SharePoint 2007; Intranet Design

Hanging on to knowledge

Interesting blog post: When they leave, what goes with them – discussing lost value when an employee leaves the organisation, and how knowledge locked up in email becomes inaccessible without context, even if it doesn’t get deleted. (Robert Scoble comments in his exit interview on how he left behind 1.5GB of email that will likely be deleted and lost.) The author highlights the hidden cost of lost organisational knowledge and how collaborative tools can provide a more accessible location for storing and sharing expertise long after the original source has left.

This has been one of the useful side effects of deploying Microsoft’s Windows SharePoint Services technology (a free add-on to Windows Server 2003 that provides collaboration sites for storing and sharing information). When Microsoft describes how SharePoint has proliferated through the intranet, with over 40,000 sites created, most I.T. managers react in horror at the thought of managing such a viral technology. But it’s not all bad news. Making it easy for individuals to set up their own sites encourages people to upload useful information that was previously tucked away on private hard drives or hard-to-find network shares. When I left Microsoft, I left behind a couple of SharePoint sites containing all my recent presentations given to customers, other useful material and links to related resources. Because Microsoft had a system in place requiring every site to have at least two administrators – the owner plus a back-up – those sites remained accessible after I had gone.

And, as always, there is a non-technology solution for retaining (some) knowledge when its owner leaves the organisation – mentoring: harder to see and measure but easier to spread (given the right culture, never underestimate that particular challenge). Knowledge is inherently difficult to document because of the very context that creates it, but skills and expertise can be passed on person-to-person, enabling others to build their own knowledge pool. When Scoble left behind 1.5GB of email at Microsoft, I’d question just how much unique value resided in that information store. Scoble’s legacy lives on in those he worked and communicated with – they will continue to grow and improve Channel 9 and blogging. The bigger cost to Microsoft is the lost value from losing such a well-connected and passionate blogger – Scoble may have shared his expertise but that doesn’t necessarily make him easy to replace. Accessing and understanding his email won’t solve that particular problem.


MS Knowledge Network

Last week, at the SharePoint conference (and also the CEO Summit), Microsoft announced a new technology – Knowledge Network (KN) for Microsoft Office SharePoint Server 2007 (MOSS). What follows is an overview based on the session I attended (the presenter clearly stated we could blog at will 🙂 ) with some personal comments added in. I have not installed the product and do not have access to it. Knowledge Network is currently a closed beta, so you won’t find it on the list when MOSS beta 2 is released. I played with a very early prototype of the client-side technology 2 or 3 years ago, when it was still in MS Research and I was still at Microsoft, but the product has changed significantly since then.

KN is focused on enterprise social networking, automatic discovery and sharing of undocumented knowledge and relationships. (I copied that off the opening slide…) So what does that really mean?

The solution involves client and server elements. The KN client is installed locally and analyses email content to create an individual profile of keywords and contacts (colleagues and external contacts). The user can review the profile – for example, set privacy levels on information (e.g. choosing only to share external contacts with your direct team) and remove information you don’t want published. The user then chooses to publish the profile and it is uploaded to the KN server, i.e. this is an opt-in model (until some evil being in I.T. enforces publishing through group policy…). As multiple user profiles are published to the KN server, they are aggregated to create expertise information and form a social network (i.e. the more profiles published, the richer the network). The MOSS search service indexes the information created by the aggregated profiles and it is returned within search results.

When a person (the seeker) queries for people – who knows what/whom – the results are ranked by social distance to the seeker, expertise and relationship relevance (e.g. results grouped as ‘my colleagues’, ‘know my colleagues’ and so on – similar to the LinkedIn method of linking to people who are linked to people you know). The KN server also includes a feature called ‘anonymous brokering’ – it is based on the privacy field in the KN profile manager and allows people to share information with the system, but only on demand.
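Social-distance ranking of this kind can be sketched as a breadth-first search over the aggregated contact graph – each person’s distance from the seeker in hops (1 = my colleagues, 2 = know my colleagues, and so on). This is a hypothetical illustration with a toy graph, not the actual KN ranking algorithm:

```python
from collections import deque

def social_distance(contacts, seeker):
    """Return {person: hops from seeker} over an undirected contact graph."""
    dist = {seeker: 0}
    queue = deque([seeker])
    while queue:
        person = queue.popleft()
        for neighbour in contacts.get(person, ()):
            if neighbour not in dist:
                dist[neighbour] = dist[person] + 1
                queue.append(neighbour)
    return dist

def rank_experts(contacts, seeker, experts):
    """Sort matching experts, closest to the seeker first.

    People unreachable through the contact graph sort last.
    """
    dist = social_distance(contacts, seeker)
    return sorted(experts, key=lambda p: dist.get(p, float("inf")))
```

The real service presumably combines this distance with expertise and relationship-relevance scores, but the grouping into ‘my colleagues’ / ‘know my colleagues’ falls naturally out of the hop count.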

If you are seeking expertise, you can submit a query and will be returned a set of people results without the contact information. If you want to contact one of the experts, you click on the link to send a message via the KN broker. The KN broker forwards the email to the expert (under its own email account) and the expert can choose to accept or decline the request. If the request is accepted, the seeker and expert are hooked up.

This feature demo’d well, but I suspect the technical implementation will be far easier than the cultural implementation. In smaller organisations, it will not be difficult to guess who the expert is even without the contact information. The culture of the organisation must make it acceptable to say no to requests without fear of penalty, otherwise everyone will just say yes (or not publish their expertise) and the service becomes irrelevant. There are some configuration options, such as how many times an individual can be contacted with requests during a time period – useful, but again this needs a culture that allows experts to say no.
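The brokering flow, including the per-period contact cap just mentioned, could be sketched like this. It is a hypothetical illustration, not the real KN broker – the class and method names are my own, and the real service works over email rather than in-process calls:

```python
# Sketch of anonymous brokering: the seeker never sees the expert's
# address; the broker forwards the request under its own account,
# enforces a cap on how many requests an expert receives per period,
# and reveals the pairing only if the expert accepts.
class Broker:
    def __init__(self, max_requests_per_period=3):
        self.max_requests = max_requests_per_period
        self.request_counts = {}   # expert -> requests this period
        self.pending = {}          # request id -> (seeker, expert)
        self.next_id = 0

    def request_contact(self, seeker, expert):
        """Forward a contact request, or refuse if the expert is over quota."""
        count = self.request_counts.get(expert, 0)
        if count >= self.max_requests:
            return None  # expert has hit their cap for this period
        self.request_counts[expert] = count + 1
        self.next_id += 1
        self.pending[self.next_id] = (seeker, expert)
        return self.next_id

    def respond(self, request_id, accept):
        """Expert accepts or declines; on accept the pair is hooked up."""
        seeker, expert = self.pending.pop(request_id)
        return (seeker, expert) if accept else None
```

Note that the cap is purely mechanical; as argued above, it only works inside a culture where declining (or hitting the quota) carries no penalty.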

The individual profile is created client-side with no server involvement. In effect, the profile is an index of the content within Outlook (keywords and contact information are extracted from emails within Outlook folders and also contact lists in IM (I’m assuming that means Live Communication Server)). After the initial profile is published, incremental updates are sent at an interval defined in the configuration. The default is 14 days. This is a concern: whilst people will likely be thorough in reviewing their profile prior to the initial publish, I suspect the novelty will soon wear off and they will start to accept the defaults for incremental updates. This could lead to sensitive information being published onto SharePoint without the source user realising.
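In outline, the client-side profiling step might look something like the sketch below – extract the most frequent keywords from email bodies, then give the user a privacy pass to strip anything sensitive before publishing. This is an assumption-laden illustration, not the actual KN client; the stopword list and thresholds are invented:

```python
import re
from collections import Counter

# Invented stopword list for illustration only.
STOPWORDS = {"the", "and", "to", "a", "of", "in", "is", "for", "on", "this"}

def build_profile(email_bodies, top_n=10):
    """Extract the most frequent keywords from a set of email bodies."""
    words = Counter()
    for body in email_bodies:
        for word in re.findall(r"[a-z]+", body.lower()):
            if word not in STOPWORDS and len(word) > 2:
                words[word] += 1
    return [word for word, _count in words.most_common(top_n)]

def review_profile(profile, excluded_terms):
    """User-controlled privacy pass: drop terms before publishing."""
    return [term for term in profile if term not in excluded_terms]
```

The review step is the opt-in model in miniature – nothing leaves the client until the user has had the chance to remove terms. The concern raised above is what happens when that review becomes a rubber stamp for incremental updates.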

KN is designed for Microsoft environments – it requires MOSS to install (Windows Server 2003, .NET framework 2.0, SQL 2000 or 2005) and requires Active Directory and Exchange for name resolution (contact information and DLs). The client will need to be running Windows XP with Service Pack 2 or later and Office 2003 or later. The product group are already recommending that Outlook be configured with cached Exchange mode to minimise processing impact against the Exchange Server whilst inboxes are analysed to create/update the profile.

From a deployment perspective, KN is another shared service being added to MOSS (joining Excel services, Forms server, and Indexing/Search). As soon as a user elects to publish their profile to the server, the KN profile management web service takes over to calculate the expertise information and social network.

The session closed with a healthy Q&A that raised some interesting issues. A couple of specifics: only the bodies of emails are indexed, not the attachments. The question was asked whether the ‘Deleted Items’ folder is indexed, and I didn’t hear a clear answer. It poses an interesting challenge – how do you determine which emails contain relevant information? I delete irrelevant stuff immediately, but I delete everything eventually unless it has particular sentimental value. And that leads on to the age-old challenge of auto-generating social networks based on the emails we send and receive and the searches we perform – how to determine when expertise is being shared, versus discovery and learning, versus spam (corporate as well as external), versus answering on behalf of someone, etc. The product group are more than aware of this challenge. When asked why they don’t mine sources other than email (documents, IM conversations etc.), they responded that email is the richest in terms of tacit information as well as being the most pervasive source. They acknowledged that the challenge of calculating strength (relevance) was hard enough, that adding data sources adds complexity, and decided the return was not worth the investment in this version (i.e. look out for extensions in the future…)

Closing notes:

Historically, organisations have been reluctant to deploy social software tools – IM being the most recent example of irrational fears over-riding business benefits (see related post: when will IM come of age). Knowledge Network will face similar challenges, as concerns over productivity drains, privacy and culture fit bubble to the surface. All that said, I’m glad Microsoft has finally entered this space. The power of social networks has become well documented over the past five years, and failure to understand them is one of the primary reasons why most KM systems fail. This will be a v1 technology and will have all sorts of flaws and challenges. But it’s a great start and this sort of capability is long overdue.

For more information:

  • Microsoft product team blog for Knowledge Networks
  • Craig Randall has posted his thoughts (he attended both sessions at the SharePoint conference, I just went to the session covering the details)

If you’re interested in learning more about the potential value of social networks, there are plenty of books on the subject but here are three I would recommend for starters:

Update: An overview has been posted on Microsoft’s web site, complete with screenshots.

Collaboration Types – Part 1: Forms

Collaboration Types covers three areas: the form of collaboration (e.g. teams vs peer groups vs 1:many training); collaborative activities (creating documents, generating ideas, making decisions); and collaboration size (the differences between small groups and very large ones).

Knowledge without people

…is an oxymoron. You can’t have knowledge without people or, to put it another way, why would you want knowledge without people? Can you imagine an organisation that contains no people? That doesn’t sell anything to anyone, or buy anything from anyone, or do anything for anyone? What would we all be doing? Wandering around Second Life looking for virtual people to talk to? 🙂

Most knowledge-based systems that fail do so because they try to eliminate the human element. Someone has a vision of the ultimate knowledge database, containing all the information you could ever need. No need to trust the opinions of those pesky humans with their biases and irrational behaviour. Let the computer provide the answers. It’ll be rational, reliable, and it will present the results in a nice tidy chart…

I thought we had moved on from such ridiculous notions until I picked up the Saturday Times and read the headline “Pick a doctor by computer ‘fiasco’“ (the full article is available online, along with a more detailed follow-up, “Online selection of new doctors ‘grossly unfair’“). It seems that some bright spark in the NHS believed that, in order to prevent bias and favouritism in the allocation of placements for junior doctors, a computer database should replace interviews in the selection process. In wanting to take out the human element – the biased interviewer – it seems the system was designed to allow self-interviews instead. Of course, when we interview ourselves, we are completely unbiased and rational… (I can recommend some useful ‘brain’ books if you actually think that last statement is true.) Spot the flaw in the system:

The applicants fill in a form online. It is divided into six sections with each requiring two answers, of up to 75 words… passing exams counts for only one sixth of the total possible score and is valued equally with, say, how convincingly an applicant can argue that he or she matches the General Medical Council’s Principles of Good Medical Practice, or how persuasively he or she can pretend to leadership or teamwork qualities… No interviews will be conducted, nor will answers be checked.

Beautiful. Just when we thought the tyranny of numbers had finally been outed… Do you think the designer of this system got their job by being matched to it by a computer? I think not. Imagine political leaders being selected using such a form. When it comes to describing their own leadership skills, what would Stalin have written compared with Gandhi? And who would the computer have awarded the most points to? The mind boggles…

I wonder just how many placements in the past have been suspected of bias in favour of the “old boy” network. Surely the best approach is to find a way to handle the exceptions rather than replace the entire system. For starters, if the “old boy” network really does exist, it will still be alive and well. Of course, nobody received help with filling in their forms… did they?