Wikis in moderation

Had a great debate during a workshop last week. I was discussing how Web 2.0 technologies can be applied internally to improve knowledge sharing. Opinions (and expectations) about tools such as blogs and wikis vary widely – from rose-tinted optimism about their potential benefits through to thorough pessimism and the conviction that their use will be a disaster.

In this instance, the customer raised a valid and common concern – that inaccurate information could be posted on a wiki page or blog and become relied upon, leading to misinformation being distributed internally and (worse) externally. It’s a fair comment. Information tends to be sticky – once we learn something, we hold onto it until we learn the hard way that it is no longer applicable or true. See previous blog post – Sticky Information.

But it is a mistake (and hence the debate) to assume that, without wikis and blogs, misinformation doesn’t occur. It does, through email and conversation. The difference is in the discoverability (or lack thereof). Wikis and blogs are transparent – published on a web site for all to see. If somebody publishes inaccurate information, it can be quickly corrected, and that correction is equally visible. The same is not true for email and conversation. But such corrections rely on people moderating the digital conversation.

Wikis are a great way to democratise the sharing of information and knowledge, but do not consider them a labour-saving device. The reverse is usually the case. The most successful wikis balance open editing with moderators who keep a check on the quality and accuracy of the information being published.