Social Media judges the Olympics

Techcrunch has an interesting article: How We Hate NBC’s Olympic Coverage: A Statistical Breakdown.

NBC Olympics Sentiment Analysis

The statistics come from a couple of different ‘Sentiment Analysis’ services that track what people are saying about brands online. Twitter Sentiment tracks positive and negative comments on Twitter, updated in real-time (image shown above). Another service, Crimson Hexagon, went further, breaking the comments down into specific categories and discovering that only 15% were happily watching NBC’s Winter Olympics coverage whilst 85% were complaining (more details are provided in the TechCrunch article).

What’s interesting is how easy it has been for these services to gather the data. Crimson Hexagon analysed over 20,000 tweets and 5,700 blog posts and forum comments. Twitter Sentiment is continually updating in real-time, as the tweets are posted. When I grabbed the screenshot above, over 2,500 tweets had been automatically categorised as positive or negative.
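The kind of automatic categorisation these services perform can be illustrated with a very simplified sketch – lexicon-based scoring, where made-up word lists stand in for the far more sophisticated, trained models the real services use:

```python
# A very simplified sketch of lexicon-based sentiment scoring.
# The word lists are illustrative only, not from any actual service.

POSITIVE = {"love", "great", "amazing", "enjoying", "awesome"}
NEGATIVE = {"hate", "terrible", "awful", "boring", "spoiler"}

def classify(tweet: str) -> str:
    """Label a tweet by counting positive vs negative words."""
    words = set(tweet.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

tweets = [
    "I love the Olympics coverage",
    "NBC coverage is awful, I hate the delays",
    "Watching curling tonight",
]

# Tally the labels, the same way the services report aggregate counts.
counts = {}
for tweet in tweets:
    label = classify(tweet)
    counts[label] = counts.get(label, 0) + 1
```

Run across thousands of tweets, even something this crude produces the kind of positive/negative split shown in the screenshot – which is part of why these services are so cheap to operate.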

The analysis demonstrates just how easy it is to discover what people really think, thanks to the Internet. People who take the time to tweet and write blog posts are more likely to be giving raw opinions than a selected audience targeted to respond to a survey. Admittedly, we tend to feel more compelled to write when we have something bad to say, so results will almost always skew towards the negative. But those opinions are readily available, often for free or at little cost, and offer an insight into how products and services could be improved. Sentiment analysis shows how businesses can benefit from getting involved in social media, even if only to listen.



Our connected future


Kevin Kelly has talked about the coming age of data, oodles of the stuff thanks to the Internet and what we’re doing with it. Here’s a nice video visualising how all this data and the devices connecting to it will define the future, albeit at the scale of trillions rather than zillions…

…and the makers of the video have more details on their web site – MAYA Design – including a research paper for download (PDF).

Related posts: Tim O’Reilly’s talk about The Internet Paradigm and Kevin Kelly’s Zillionics Change Perspective

Zillionics change perspective


Interesting article – ‘Zillionics’ by Kevin Kelly. Well worth a read if you are interested in long tails, social networks and wondering where digital information technology is leading us. Here’s a soundbite:

More is different.

When you reach the giga, peta, and exa orders of quantities, strange new powers emerge. You can do things at these scales that would have been impossible before… At the same time, the skills needed to manage zillionics are daunting…

Zillionics is a new realm, and our new home. The scale of so many moving parts require new tools, new mathematics, new mind shifts.

It’s a short and thought-provoking article.

And on the same subject, a longer article from Wired: ‘The End of Theory: The Data Deluge Makes the Scientific Method Obsolete’ by Chris Anderson. Hmmm…. we’ll see about that 😉

Header image: World in dots (iStockphoto, not for re-use)

When patterns mislead…

In case you missed it, there was a great article in FastCompany this week: Is the Tipping Point Toast? written by Clive Thompson. The article covered research from Duncan Watts, author of Six Degrees: The Science of a Connected Age (amongst others), that challenges the belief that you can use influencers (the well-connected) to seed a new trend.

To grossly over-simplify, the idea behind the tipping point is that people watch people who watch people who watch the influencers. (Classic Pyramid stuff.) Therefore, if you can get the influencers to adopt a new product, it will go viral and grow exponentially = big success. Duncan challenges this claim and argues instead that the likelihood of success has nothing to do with influencers. They are a side effect that can speed up adoption of a trend that would have gone viral anyway. In other words, spending your marketing money on the elite few is unlikely to be significantly more effective than standard mass marketing.

Central to Duncan’s argument is the habit we have of taking an event and then working backwards to identify what happened and spot a pattern that can be reproduced. Anyone who has read Freakonomics will recognise the flaws in this approach – correlation does not guarantee cause and effect, and indicators are easy to spot once you know what you are looking for.

A simple demonstration: find somebody and a table or similar surface, ask them to ‘name that tune’, and tap out ‘Happy Birthday’ with your hand. It will be a miracle if they spot the tune when it is tapped in monotone with no words. Then tell them what the tune is and tap it out again – it is easy to ‘hear’ it once you know what is being played.

On a related theme, I am currently reading a book about exactly this – The Black Swan: The Impact of the Highly Improbable, by Nassim Nicholas Taleb. It is all about unpredictable events and why we never see them coming but think we should have (and therefore think we can predict the next one, and get it wrong all over again).

In the Financial Times on Friday was yet another example – Last year’s model: Stricken US homeowners confound predictions:

“…it seems that mathematical models used to predict future default rates, based on past patterns of losses, have gone wrong because they did not adjust to reflect shifts in household behaviour.”

In the past, when US households struggled to repay debts, they tended to default in a certain order. Credit cards and car loans were the first to suffer. Failing to pay your mortgage was the absolute last resort. (Losing your house = big social no-no.) This time around, people are defaulting on their mortgages before personal loans or credit card bills. (The current climate has created negative equity and changed behaviour – why repay a mortgage for a property you don’t have any stake in anyway.)

There are two technology trends that need to beware of this Achilles’ heel in using the past to predict the future. One is performance management (and its sibling, business intelligence) – the use of data visualisation to analyse your information sources and gather new insights that should improve decision making. It risks the classic turkey scenario: you get fed every day and expect to be fed again tomorrow. Instead, you get your head chopped off.
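Taleb’s turkey can be written down in a couple of lines – a sketch with invented numbers, showing how a purely historical model becomes most confident at exactly the wrong moment:

```python
# The turkey problem in miniature: a model built only on history
# assigns ever-higher confidence to a pattern right up until it breaks.
# The numbers are invented for illustration.

fed = [1] * 1000  # 1,000 days of observations: fed every single day

def naive_probability(history):
    """Estimate P(fed tomorrow) as the historical frequency."""
    return sum(history) / len(history)

confidence = naive_probability(fed)  # maximum confidence, the day before the axe
```

Any dashboard or predictive model that only ever looks backwards has the same blind spot: it cannot see a shift in behaviour that has never happened before.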

The other trend is social networking applications, in particular any that plan on using the ‘Social Graph’ as a method to track and use relationships. And that leads on to the final link (this post is really a collection of links from the week…): an article in VentureBeat – Google’s Marissa Mayer: Social search is the future. Coincidentally(?), it has come out at the same time as a video clip of Google’s new Social Graph API.

You can find out more about the Social Graph API at Google Code.

The example that Brad gives in the short video clip makes sense but let’s change the players. Instead of Brad finding his friend Bob on Twitter, imagine a spam company creating a loooooooong blog roll of ‘friends’ on LiveJournal and then setting up an account on Twitter to find out more contact details for all those ‘friends’. The potential problem with the Social Graph concept is that it reduces social networks down to a logical drawing, when relationships are anything but. Our concept of who is, or isn’t, a ‘friend’ has changed with the arrival of massive social networks such as MySpace and Facebook, and our behaviour has changed with it. The concept (and associated behaviour) will likely change again in the future, when organisations learn to exploit those friendships in new and unexpected ways. Will the API adapt?
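The worry about machine-readable ‘friend’ lists can be made concrete with a toy sketch – the names and edges below are invented, and this is not the actual Social Graph API format, but any social graph reduced to data is essentially an adjacency list, and anything that can read it can walk it:

```python
# A toy social graph as an adjacency list. Once 'friend' links are
# published as machine-readable data, a simple breadth-first traversal
# reaches everyone connected -- whether the crawler belongs to a friend
# or a spammer. Names and edges are made up for illustration.

from collections import deque

graph = {
    "brad": ["bob", "alice"],
    "bob": ["carol"],
    "alice": ["brad"],
    "carol": [],
}

def reachable(graph, start):
    """Walk public 'friend' edges breadth-first from one account."""
    seen = {start}
    queue = deque([start])
    while queue:
        person = queue.popleft()
        for friend in graph.get(person, []):
            if friend not in seen:
                seen.add(friend)
                queue.append(friend)
    return seen
```

Starting from one well-connected account, the traversal reaches the whole connected component – which is exactly what makes a loooooooong fake blog roll valuable to a spammer.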

Filed in: Social Graph (new topic); Data visualisation