Same but different

A while back I posted a video Microsoft had commissioned from Common Craft: SharePoint in Plain English. Here’s that video again:

Not long after, Jack Vinson posted a video on his KM blog, this one from IBM, explaining Lotus Connections. It's not as slick as Common Craft's work, but it looks kind of familiar:

Which made me wonder: has Google got anything similar? Oh yes, and it was wisely created by none other than Common Craft too:

Three vendors with three products/services touting a similar story. What are the differentiators that make you choose one over the other?

Side note: whilst many seem to be copying Common Craft's style of presenting, I don't see anyone coming close. The Common Craft web site is well worth checking out.


More on Idea Markets

In a previous blog post – Web Naivety – I cited an example of tapping into social networks (well, people in general really) to spot new business opportunities. The example given was a company that encouraged employees to submit new product ideas and then democratically vote on them. The end result was a new product that would have been lucky to survive the traditional management approach to business development, yet now accounts for 30% of total sales.

Here's another example, reported in the New York Times: Google's Lunchtime Betting Game. The article is worth reading in full and covers some insights from experts in prediction markets. Here's the short version:

Google is encouraging employees to go online and place bets on an internal prediction market. Whether it's questions about what Google might be up to or about its competitors, play money – 'Goobles' – is used to track results. Google has set aside $10,000 of real money per quarter to convert Goobles into bonuses. You could argue it's another example of Data as Currency. In this case, Google acquires insights that are more likely to be well-reasoned (useful when challenging conventional wisdom and expected outcomes), and the whole process is turned into a fun activity with potential rewards all round for being right.
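The article doesn't describe the conversion mechanics, so here is a minimal, hypothetical sketch in Python of one plausible scheme – a pro-rata share-out of the quarterly pool based on Gooble winnings. The pool size comes from the article; the function, players and numbers are all invented:

```python
# Hypothetical pro-rata conversion of play-money winnings into real bonuses.
# The pool size matches the article ($10,000/quarter); everything else is invented.
QUARTERLY_POOL_USD = 10_000

def convert_goobles_to_bonuses(winnings: dict[str, int]) -> dict[str, float]:
    """Share the quarterly pool in proportion to each player's Gooble winnings."""
    total = sum(winnings.values())
    if total == 0:
        return {player: 0.0 for player in winnings}
    return {player: QUARTERLY_POOL_USD * goobles / total
            for player, goobles in winnings.items()}

# Example: three (fictional) employees and their Gooble winnings this quarter.
print(convert_goobles_to_bonuses({"alice": 600, "bob": 300, "carol": 100}))
# -> {'alice': 6000.0, 'bob': 3000.0, 'carol': 1000.0}
```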

Wikis in moderation

Had a great debate during a workshop last week. I was discussing how Web 2.0 technologies can be applied internally to improve knowledge sharing. Opinions (and expectations) about tools such as blogs and wikis vary widely – from rose-tinted optimism about their potential benefits through to thorough pessimism and the conviction that their use will be a disaster.

In this instance, the customer raised a valid and common concern: that inaccurate information could be posted on a wiki page or blog and become relied upon, leading to misinformation being distributed internally and (worse) externally. It's a fair comment. Information tends to be sticky – once we learn something, we hold onto it until we learn the hard way that it is no longer applicable or true. See the previous blog post – Sticky Information.

But it is a mistake (and hence the debate) to assume that, without wikis and blogs, misinformation doesn't occur. It does – through email and conversation. The difference is in the discoverability (or lack thereof). Wikis and blogs are transparent, published on a web site for all to see. If somebody publishes inaccurate information, it can be quickly corrected, and the correction is equally visible. The same is not true for email and conversation. But such corrections rely on people to moderate the digital conversation.

Wikis are a great way to democratise the sharing of information and knowledge, but do not consider them a labour-saving device. The reverse is usually the case. The most successful wikis balance open editing with moderators who keep a check on the quality and accuracy of the information being published.

When 2 Wiki

During this year I have been running a series of workshops for customers wanting to explore what SharePoint Server 2007 can and can’t do. As people continue to debate what Web 2.0 means and whether or not it matters, it has been interesting to see its arrival within business. Questions about wikis and blogs have cropped up on a regular basis and there has been plenty of confusion about their uses (let alone what RSS means). To try and help, I have put together a short article outlining the similarities and differences between four web-based methods for publishing content: web pages, portals, wikis and blogs. The common factor uniting all four methods is that nothing more than a web browser is required to create and publish content.

Click here to download the full article (2 MB PDF). A short description is provided below:

Please note that this post (and article) is not about defining precisely what is or isn’t a wiki or debating what software you should use. Microsoft Office SharePoint Server 2007 has been used to provide screenshots demonstrating each method. The article was written for customers who are in the process of evaluating or implementing SharePoint and, hopefully, its contents will be useful for others too.

Web Page

Traditional web content management systems were the first tools to make publishing content easy, by separating the design of web pages from the authoring and publishing of content within those pages. A web designer controls the design and management of the pages; authors edit them and enter content within pre-determined placeholders. The screenshot below shows a page being edited:

Web pages managed in this way make it easy to create and publish content in a consistent and controlled manner (the tools usually include workflow and version history for approving pages prior to publishing). Great for information that warrants this level of control, such as monthly newsletters or standard intranet pages, but not so good if you want flexibility. Changes to page design and layout require technical expertise and can be slow to implement.
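To make the separation concrete, here is a minimal Python sketch of the idea behind a CMS page template: the designer owns the layout, and authors can only fill pre-determined placeholders. The template, placeholder names and markup are invented for illustration and bear no relation to any particular CMS:

```python
# A minimal sketch of the separation a web CMS enforces: the designer owns
# the page template, authors may only fill the pre-determined placeholders.
# Names and markup here are invented for illustration.

TEMPLATE = """<html><body>
<h1>{title}</h1>
<div class="main">{body}</div>
</body></html>"""

ALLOWED_PLACEHOLDERS = {"title", "body"}

def publish(fields: dict) -> str:
    # Authors cannot add new regions; unknown fields are rejected.
    unknown = set(fields) - ALLOWED_PLACEHOLDERS
    if unknown:
        raise ValueError(f"not an editable placeholder: {unknown}")
    return TEMPLATE.format(**fields)

print(publish({"title": "Monthly Newsletter", "body": "This month..."}))
```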

Portal

Portal pages look like traditional web pages but behave very differently. Content is placed within web parts (a.k.a. portlets, iViews, gadgets and various other names) that can be added to and removed from the page. Web parts can be connected together (e.g. a 'contacts' web part can filter an 'orders' web part to display the orders for a selected customer) and can be moved around the page.
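For illustration, here is a minimal Python sketch of the connection pattern just described – one web part publishes a selection and a connected part filters itself in response. This shows the general pattern only, not SharePoint's actual web part connection API:

```python
# A minimal sketch of the 'connected web parts' pattern: one part publishes a
# selection, connected parts filter their own content in response.

class OrdersWebPart:
    def __init__(self, orders):
        self.orders = orders          # list of (customer, order_id) tuples

    def on_customer_selected(self, customer):
        # React to the connected 'contacts' part by filtering our display.
        visible = [o for o in self.orders if o[0] == customer]
        print(f"Orders for {customer}: {visible}")

class ContactsWebPart:
    def __init__(self):
        self.subscribers = []

    def connect(self, handler):
        self.subscribers.append(handler)

    def select(self, customer):
        # Publish the selection to every connected web part.
        for handler in self.subscribers:
            handler(customer)

# Wire the two parts together, as a portal page would.
orders = OrdersWebPart([("Contoso", 101), ("Fabrikam", 102), ("Contoso", 103)])
contacts = ContactsWebPart()
contacts.connect(orders.on_customer_selected)
contacts.select("Contoso")  # -> Orders for Contoso: [('Contoso', 101), ('Contoso', 103)]
```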

Portal pages are the least intuitive to add content to because they often aggregate information from multiple sources. But they provide the easiest and most effective way to build topic-based pages – the 'one-stop shop' view of related information. They also encourage the collection and re-use of content, avoiding the duplication that can occur with the other three methods.

Wiki

Wikis are one of the new tools arriving on the scene. A wiki is like a web page that contains a single placeholder for all content. The author enters the content and controls its formatting and layout. The publishing process is simple: you click OK. Every time a page is edited, a new version is automatically created. You can review the entire version history for a wiki page, see what content has been added or deleted and, if required, restore a previous version, so you don't have to worry about making a mess of the page. The screenshot below shows a wiki page in edit mode:

Wikis introduce a new convention – the use of double square brackets [[ ]] to link to other wiki pages within the site. In the screenshot above, you can see four such links. This makes it very easy for authors to create navigation between pages: if you know the name of the page, just surround it in square brackets.
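The convention is simple enough to sketch. The following Python snippet shows one hypothetical way a wiki engine might scan page text for [[ ]] links and flag those whose target doesn't yet exist; the function and markup are assumptions for illustration, not SharePoint's implementation:

```python
import re

# Scan a page's raw text for [[Page Name]] wiki links so they can be rendered
# as hyperlinks, marking links whose target page doesn't exist yet.
WIKI_LINK = re.compile(r"\[\[([^\]]+)\]\]")

def render_links(text, existing_pages):
    def replace(match):
        name = match.group(1)
        marker = "" if name in existing_pages else " (not yet created)"
        return f"<a href='/wiki/{name}'>{name}</a>{marker}"
    return WIKI_LINK.sub(replace, text)

print(render_links("See [[Meeting Notes]] and [[New Ideas]].",
                   existing_pages={"Meeting Notes"}))
```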

The screenshot below shows the same page in published mode (i.e. after the author clicks OK). One of the links – New Ideas – has a dotted underline, indicating that the page does not yet exist:

The first person to click on a link to a page that does not yet exist will be prompted to create it (provided they have editing permission) and can immediately start entering content:

Wikis are proving ideal for situations that require rapid entry and updating of content. Leaving full control to authors encourages groups to share information and opinions, and the simplicity of a wiki page makes doing so quick and easy. The downside is that such flexibility increases the manual overhead required to maintain the pages. That's not a problem when the workload is distributed, but if you want authoring control you will probably prefer the traditional web page for publishing content. If you do decide to step out of the web page comfort zone, a wiki site has the potential to become your most effective knowledge management system to date…
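The versioning behaviour described above is also easy to sketch. Here is a minimal, illustrative Python model of a wiki page where every edit appends a new version and any previous version can be restored – again an illustration of the concept, not SharePoint's implementation:

```python
# Every edit appends a new version, history can be reviewed, and any previous
# version can be restored (restoring also creates a new version, so history
# is never lost).

class WikiPage:
    def __init__(self, title, content=""):
        self.title = title
        self.versions = [content]     # version 1 is the initial content

    @property
    def current(self):
        return self.versions[-1]

    def edit(self, new_content):
        """Clicking OK publishes the edit and records a new version."""
        self.versions.append(new_content)

    def restore(self, version_number):
        """Restore an old version by republishing its content."""
        self.versions.append(self.versions[version_number - 1])

page = WikiPage("New Ideas", "First draft.")
page.edit("Second draft, with mistakes.")
page.restore(1)                        # undo the mess
print(page.current)                    # -> First draft.
print(len(page.versions))              # -> 3
```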

Blog

Blogs have been a huge hit on the Internet – Technorati claims to be tracking over 80 million of them. But their benefits within business have not been clear.

A blog post in edit mode looks similar to a wiki page – it has a title and a single placeholder for entering content. The author controls the design and layout of the content within the placeholder. When a blog post is published, it includes a permalink (a permanent link for easy bookmarking, useful because a blog's home page automatically displays only the latest posts) and a link for leaving comments.

Blogs carry the identity of their authors, which is probably why individually-authored blogs tend to be more successful than group-authored ones. They have two key beneficial uses within business. 1) They are great for master-apprentice mentoring: if you are learning a new skill or interested in a particular subject, subscribing to an expert's blog keeps you up to date with their latest thoughts and opinions, and you can go through the archive of posts to catch up on previous snippets worth knowing. 2) They are great for sharing news as it happens, as opposed to publishing it on a monthly basis: posting news (i.e. 'new information') to a blog enables people to provide feedback using the comments feature and to subscribe to keep track of the latest announcements and commentary.
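As a concrete example of the subscription side, here is a minimal Python sketch using the feedparser library to poll a blog's feed and list its latest posts (the feed URL is hypothetical):

```python
import feedparser  # third-party library: pip install feedparser

# Poll an expert's blog feed and list the most recent posts.
# The intranet URL below is invented for illustration.
feed = feedparser.parse("https://intranet.example.com/blogs/expert/rss.xml")

for entry in feed.entries[:5]:
    print(entry.title, "->", entry.link)
```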


Links:

  • Download this article (‘Out of the box – Web of Knowledge’)

Technorati tags: web 2.0; wikis; blogs; sharepoint; enterprise 2.0

Just Enough Taxonomy

On Microsoft's Channel 9 network, there is an interesting podcast called 'Just Enough Architecture', in which the interviewee offers some good recommendations about balancing how much architecture you need against just getting on and writing software that does something useful.

The same debate could be applied to taxonomy, specifically the use of metadata properties to classify content.

For some reason, most companies that decide they want to improve how content is classified seem to want extreme taxonomy, swinging from not enough taxonomy to too much. The mantra may sound somewhat familiar:

One taxonomy to rule them all, one taxonomy to find them, one taxonomy to bring them all and, in the records management store, define them

Often starting with none at all (i.e. content is organised informally and inconsistently using folders), the desire is to create a single corporate taxonomy to classify everything, using a hierarchical structure of metadata terms. An inordinate amount of time is then spent defining and agreeing the perfect taxonomy (for some reason, many seem to settle on about 10,000 terms). Several months later, heads are being scratched as people try to figure out just how they are going to implement it. Do they classify existing content or only apply the taxonomy to new stuff? Do they dedicate specific roles to classifying content, rely on content owners to do it, or look at automated classification tools? Do they put rules in place forcing people to classify content and store it in specific locations that are 'taxonomy-aware'? How do they prevent people from bypassing the system – those who figure they can still get their work done by switching to a wiki, a Groove workspace, a MySpace site or a Twitter conversation? How do they validate the taxonomy and check that people are classifying correctly? And what do they do about people who aren't classifying correctly because they don't understand the hierarchy or attach different meanings to the terms in use? What started out as a simple idea to improve the findability of information becomes a huge burden to maintain, with questionable benefits, given there are so many opportunities for classification to go wrong.

This dilemma reveals two flaws that make implementing a taxonomy so difficult. The first is the desire to treat taxonomy as a discrete project rather than an organic one. Collaboration and knowledge management projects often share this fate. Making taxonomy a discrete project usually means tackling it all in one go from a technology perspective and then handing it over to the business to run 'as is' for ever more (i.e. until the next technology upgrade). Such projects end up resembling that old cliché – attempting to eat an elephant whole. The project team tries to create a perfect design that will deliver all identified requirements (and the business, knowing this could be its one chance for improved tools, delivers a loooooong list of them), implements a solution and then moves on to the next project. As the solution is used, the business finds flaws in its requirements or discovers new ways of working enabled by the technology, but by then it is too late to get the solution changed. The project is closed, the budget spent.

An alternative approach is to treat taxonomy as an organic project or, for those who prefer corporate-speak, a continuous-improvement programme. Instead of planning to create and deploy the perfect taxonomy, concentrate on 'just enough taxonomy'. A good starting point is to find out why a taxonomy is needed in the first place. If it is to make it easier for people to find information, first document the specific problems being experienced. Solve those problems as simply as possible, test the solutions and gather feedback. If successful, people will raise the bar on what they consider good findability, generating new demands for IT to solve, and so the cycle continues.

The following is a simple example using a fictitious company.

Current situation: Most information is stored in folders on file shares and shared via email. There is an intranet, consisting primarily of static content published by a few authors. The IT department has been authorised to deploy Microsoft Office SharePoint Server 2007 (MOSS).

General problem: Nobody can find what they are looking for (resist the temptation to sing the U2 song at this moment…)

Specific problems:

  • Difficult to find information from recently completed projects that could be re-used in future projects
  • Difficult to differentiate between high-quality re-usable project information and low-quality or irrelevant project information
  • Difficult to find all available documents for a specific customer (contracts, internal notes, project files)

Possible solution:

  • Deploy a search engine to index all file folders and the intranet.
  • Move all project information to a central location and, within the search engine, create a scope (or collection) for that location. Users can then run search queries that return only project information; sorting by 'date modified' surfaces information from the most recent projects.
  • Create a central location for storing top-rated 'best practice' project information. Set up a team of subject matter experts to work with project teams and promote documents as 'best practice'. The Best Practices store can be given high visibility throughout the intranet and promoted as highly relevant for search queries.
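To make the scoped-search idea concrete, here is a toy Python sketch: restrict a query to the project location and sort the hits by last-modified date. The documents and paths are invented, and this illustrates the concept only – it is not MOSS's search API:

```python
from datetime import date

# A toy index: each document has a path, a modified date and indexed text.
documents = [
    {"path": "//fileshare/projects/alpha/report.doc", "modified": date(2007, 5, 1),
     "text": "final report lessons learned"},
    {"path": "//fileshare/hr/holidays.doc", "modified": date(2007, 4, 2),
     "text": "holiday policy"},
    {"path": "//fileshare/projects/beta/plan.doc", "modified": date(2007, 6, 9),
     "text": "project plan lessons learned"},
]

def search(query, scope=None):
    """Return matching documents, optionally restricted to a scope,
    sorted with the most recently modified first."""
    hits = [d for d in documents
            if query in d["text"] and (scope is None or d["path"].startswith(scope))]
    return sorted(hits, key=lambda d: d["modified"], reverse=True)

# Only project information, most recent projects first.
for doc in search("lessons learned", scope="//fileshare/projects/"):
    print(doc["modified"], doc["path"])
```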

Now that is a very brief outline of one possible solution, but it is relatively simple to implement and should offer immediate (and measurable) improvements against the problems people reported. There were two red herrings in the requirements that could have resulted in a very different, more complex, solution: 1) that MOSS was going to be the technology; and 2) the need to find documents for a specific customer.

When you have already chosen a technology, there is always the temptation to widen the project scope. MOSS has all sorts of features that can help improve information management, and the starting point is often to replace an old, crusty, static intranet. But the highlighted problems did not mention any concerns about the intranet. That's not to say those concerns do not exist, but they are a different problem and not the priority for this project.

The second red herring is a classic. When people want to find information based on certain parameters, such as all documents connected to a specific customer, there is the temptation to implement a corporate-wide taxonomy and start classifying all content, beginning with the metadata property 'customer name'. But documents about a specific customer will most likely contain the customer's name. In this scenario, the simplest solution is to create a central index and give users the ability to search for documents containing a given customer's name. If that fails to improve the situation, then you may need to consider more drastic measures.

Rejecting the large-scale information management project in favour of small chunks of continuous 'just enough' improvement is not an easy approach to take. The idea of a centralised, classified and managed store of content, where you can look up information based on any parameter and receive perfect results, continues to be an attractive one, with plenty of benefits to the business – both value-oriented (i.e. helping people discover information to do their job) and cost-oriented (i.e. managing what people do with information – compliance checks and the like). But a perfectly classified store of content is a utopia. Trying to achieve it can result in systems that are harder to use and difficult to maintain, when the goal is supposed to be making them easier.

I mentioned that the common approach to implementing taxonomy has two flaws. The first has been discussed here – how to create just enough taxonomy. The second is the desire to create a single universal taxonomy that can be applied to everything. I'll tackle that challenge in a separate post (a.k.a. this post is already too long…)

Reference: Just Enough Architecture (MSDN Channel 9). Highly recommended. There are plenty of similarities between software architecture and information architecture (of which taxonomy is a subset). Don't be put off by the techie speak; it debates the pros and cons of formal processes versus informal uses, and includes some great non-technical examples of how to find a balance.


Technorati tags: Taxonomy, Tagging, Information Architecture

When IT doesn’t matter

I'm currently reading 'The Labyrinths of Information: Challenging the Wisdom of Systems' by Claudio Ciborra. I haven't got very far through the book yet; it is written in an academic tone, which always slows me down. But early on, I stumbled across a very interesting point of view.

IT architecture is a popular topic right now. You can get enterprise architects, software architects, infrastructure architects, information architects… the list goes on. One of the focus areas for architecture is the adoption of standards and consistent methods for the design, development and deployment of IT systems. It all sounds very sensible and measurable.

But Claudio makes a simple observation suggesting that such architecture doesn't matter, in the sense that it does not help an organisation become successful. Instead, architecture is a simple necessity of doing business digitally. This argument concurs with Nicholas Carr's controversial article (and subsequent book) 'IT Doesn't Matter'.

A sample from the book (note: SIS refers to strategic information systems):

“…market analysis of and the identification of SIS applications are research and consultancy services that can be purchased. They are carried out according to common frameworks, use standard data sources, and, if performed professionally, will reach similar results and recommend similar applications to similar firms.”

So what do you need to do to become an innovative company? Claudio suggests:

“…To avoid easy imitation, the quest for a strategic application must be based on such intangible, and even opaque, areas as organisational culture. The investigation and enactment of unique sources of practice, know-how, and culture at firm and industry level can be a source of sustained advantage…

See, I have been telling those techie-oriented IT folk for years, collaboration and knowledge sharing are far more important than your boring transaction-based systems 🙂

…Developing an SIS is much closer to prototyping and the deployment of end-user’s ingenuity than has so far been appreciated: most strategic applications have emerged out of plain hacking. The capacity to integrate unique ideas and practical design solutions at the end-user level turns out to be more important than the adoption of structured approaches to systems development…”

Sounds like an argument in favour of mash-ups and wikis to me. See also: Let's make SharePoint dirty.