Can apps be open and trusted?

Summary: Technical experts may criticise Apple’s strict approach to the app store and what apps are allowed on it. But never underestimate the human need to feel in control of, and trust, personal devices such as mobile phones. Walled gardens have their benefits.

One of the arguments for open source versus proprietary closed source software platforms has been that ‘many eyes’ developing and testing the code will result in a more secure and stable platform with fewer risks or bugs. Also, that there will be less opportunity for ‘vendor lock-in’, meaning better choice and control for customers.

Any discussion about open vs closed systems often includes polarised views. In practice, each has its benefits.

I prefer to drive privately manufactured cars, trusting that the vendor will have completed sufficient testing to guarantee the safety of the car within certain driving parameters. I expect my car to cope with a certain amount of road surface water, a necessary requirement when living in the UK, but I know it is not designed to be a boat. The fuel I put in the car is also likely to be privately manufactured, but delivered in a standard format. I prefer open source/public roads in the UK, expecting them to be mostly of a suitable width and quality to drive on. Private roads can be an entirely different experience – some better, some worse. And the better ones usually come with additional or hidden fees.

The same can be said about software. I don’t mind using proprietary software (the vehicle) but I prefer the data and communications (the fuel and the road) to be in open or standard formats. The reason for this ramble is a recent series of articles about mobile phones and criticism directed towards the closed nature of different vendors’ app stores.

One news article – Microsoft balks at Apple’s 30% fee – describes how updates to Microsoft’s SkyDrive app for iOS have been rejected by Apple for not complying with the app store guidelines, causing problems for the app and any other apps that integrate with it (SkyDrive is a cloud-based file storage system). The article focused on the subscription model as the cause of the issue. Whilst money may have had something to do with it, I don’t think it is the only reason. Apple has very strict criteria for how apps are installed and updated. From the content of the article, it seems the issue is with apps having features that link to external web sites. The risk is that such links could lead to updates taking place outside of the app store, making it possible to bypass all review processes and restrictions.

Another article – Google’s Android malware scanner detects only 15% of malicious code in test – describes the concern with methods that allow apps to be updated outside the relevant app store. How easy is it for someone to distribute an app containing malicious code? Very easy, it would seem. App stores aren’t completely immune, but how easy or difficult it is to release naughty apps there depends largely on the store’s review process.

It is possible that the Microsoft-Apple app store squabble is just about who gets what cut of what fees. And technical experts may criticise Apple’s closed approach to installing apps on the iPhone. But we should never underestimate the importance of feelings such as trust and control. Those feelings matter to people, and all the more so when it comes to personal mobile devices. Walled gardens have their benefits.

Loyalty, control and terms of service

If you want to protect your intellectual property, worry less about online security controls and more about loyalty. If your employees care, they are less likely to share with outsiders.

In the past week, Microsoft has changed its standard terms of service agreements. As reported by The Verge:

Microsoft’s revised policy allows the company to access and display user content across all of its cloud properties. Whereas the previous version of the TOS granted Microsoft the right to appropriate user content “solely to the extent necessary to provide the service,” the terms now state that this content can be used to “provide, protect and improve Microsoft products and services.”

Commenters on the article noted that this was a somewhat hypocritical move. When Google made a similar change to their terms of service just six months ago, Microsoft took out adverts in major newspapers to spread a little FUD*. As covered by the IdeaLab at the time, Microsoft felt the need to advise everyone:

Google is in the midst of making some unpopular changes to some of their most popular products. Those changes, cloaked in language like “transparency,” “simplicity,” and “consistency,” are really about one thing: making it easier for Google to connect the dots between everything you search, send, say or stream while using one of their services.

But, the way they’re doing it is making it harder for you to maintain control of your personal information. Why are they so interested in doing this that they would risk this kind of backlash? One logical point: Every data point they collect and connect to you increases how valuable you are to an advertiser.

Hmmm….

Hypocrisy aside, and pity the Microsoftie who has to keep a straight face explaining the about-turn, the changes in terms of service are no surprise. Enabling content to be integrated across services does offer the potential to improve the services and, yes, also the potential to earn more money from advertising. A necessary factor when offering ‘free’ services to consumers. Somebody always pays.

Dropbox is a popular online file sharing tool and has also come in for criticism. A recent article by Varonis highlighted that Dropbox holds the keys to encrypt and decrypt your data on their servers (their emphasis, not mine). They have to, both for feature reasons – the file sharing element – and for legal reasons. What does this mean?

This means that a Dropbox employee could theoretically view (or steal) your data

O! M! G!*
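
If that key custody bothers you, the usual mitigation is client-side encryption: encrypt files yourself before they ever reach the sync folder, so the provider only ever stores ciphertext. Here is a minimal sketch in Python, assuming the third-party cryptography package (pip install cryptography) and a placeholder filename – an illustration of the general idea, not anything Dropbox itself provides:

```python
# Illustrative client-side encryption before a file reaches a sync folder.
# Assumes the third-party 'cryptography' package; 'budget.xlsx' is a placeholder name.
from pathlib import Path
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # store this somewhere the provider never sees
fernet = Fernet(key)

plaintext = Path("budget.xlsx").read_bytes()
Path("budget.xlsx.enc").write_bytes(fernet.encrypt(plaintext))  # this file goes in Dropbox

# Later, on any machine that holds the key:
recovered = fernet.decrypt(Path("budget.xlsx.enc").read_bytes())
```

The obvious trade-off is that Dropbox can no longer preview, index or selectively share the file for you – which is exactly why they hold the keys in the first place.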

Before you start worrying about Dropbox’s employees, look closer to home… A recent study by the Ponemon Institute found the following (my comments in brackets…):

  • 90% of organisations in the study had experienced leakage or loss of sensitive or confidential documents over the past 12 months
  • 71% of respondents say that controlling sensitive or confidential documents is more difficult than controlling records in databases (surprised it wasn’t higher than that)
  • 70% of respondents say that employees, contractors or business partners have access to sensitive or confidential documents even when access is not a job or role-related requirement
  • 63% of respondents do not believe they are effective at assigning privilege (permissions) to [manage] access to sensitive or confidential documents

Hmmm….

So to summarise, most organisations do not have adequate controls to manage their intellectual property when it is in document form, regardless of where it is stored. If that’s the case, accept a simple fact: if a document exists, at some point you may lose control of it. The terms of service for online storage are the least of your worries.

So what’s a business to do?

Whilst I would not suggest throwing out the security controls (and it sounds like some organisations could do with improving them), I would encourage putting more effort (and investment) into making sure employees care. People who feel loyal to a cause will protect that cause.

Over this last weekend, Lewis Hamilton grumpily shared a picture of the McLaren Formula 1 team telemetry sheet on Twitter, showing his and his team-mate Jenson Button’s performance during qualifying. Reported today in The Times:

Christian Horner, the Red Bull team principal, could not contain his mirth as he claimed his engineers were poring over data that is usually restricted only to McLaren’s drivers and race engineers. Hamilton deleted the tweet but it was too late.

Oops…

What security system could have prevented that? A photo of a print-out shared via Twitter by someone paid an awful lot of money to win races based, in part, on the intellectual property held in said photo. And by someone who was, once again, having a particularly crappy weekend with the team… Naturally the PR machine is now at full throttle (pun intended) and McLaren claims the data loss is no big deal.

How employees feel about the company will have a far bigger influence on maintaining control of your data than any security system, digital or physical. The ‘vibe’ of the office matters more than most people realise, for so many different reasons – productivity gains, collaborative working, knowledge sharing and yes, protecting intellectual property from prying eyes.

References

* FUD = Fear, Uncertainty and Doubt. What competitors like to create about rivals in customers’ minds
* OMG = Oh My God/Goodness, depending on your religious slant, often delivered with a twist of sarcasm.

Unexpected social connections

In the past couple of weeks there has been a series of articles raising concerns about the amount of personal data being published to online social networks and the potential for it to be used for ill intent.

There are two different scenarios people should consider before sharing personal information:

  1. Would I mind if a complete stranger knew that information?
  2. Do I mind what any of my ‘friends’ do with the information?

If the answer is Yes to either question, think twice before putting that personal information online at all. That’s not to say sharing is inherently good or bad. But once you have shared information with anyone, you have lost control of it. And if you answered ‘No’ to question two above, you have effectively answered ‘No’ to both: your friends can pass the information on to complete strangers.

Social Network Connections

Here is a simple scenario using Facebook. In the image above, the green buddy is you. The blue buddies are your ‘friends’. The red buddies represent everyone else with Internet access.

You set up your privacy settings so that only friends can see your personal information. Anyone who is on Facebook but not a friend will only see your name, nothing else. That’s your decision.  Sounds sensible. Sounds under control.

But if one of your friends decides to share information with their friends or third party applications, they may hand over your personal information as well. It can be done in complete innocence and with good intentions – ‘I want to send birthday cards to my friends’, ‘Are any of my friends nearby to meet up with?’, ‘I’m interested in this group, I’ll add my friends to it as well’, ‘Has anybody in my network bought this <insert name of any item>?’ In the right context, all great stuff. But information about you has now been handed over and stored somewhere beyond your control. The same applies to every application or web site that you allow to connect to your Facebook profile. Do you read all the terms and conditions, the notes about agreeing to data being stored indefinitely or granting access to other third parties?

It is not just you who decides how secure your personal information is. If you decide to share it with them, all your friends get to decide too. As do all the apps and web sites you connect to. And if you’re one of Facebook’s social butterflies, everyone gets to decide.

This doesn’t mean you should head straight to Facebook and switch everything off (too late for existing content anyway) but if you are going to participate in online social networks and care about what happens to your personal data, it’s a good idea to keep track of privacy settings and changes to policies.

If you’re not paying for a product, you’re not the customer, you are the product being sold. – Andrew Lewis

For Facebook and every application/advertising tool that uses it, it is in their best interests to get you to share your personal information. They will make it as easy and seamless to do as possible. And many will make it difficult or inconvenient to change those default settings to be more private. So think long and hard about what you want to share with anyone. And question whether having different privacy settings for ‘everyone’ versus ‘friends’ actually means anything. A simpler (and more reliable) approach is to either share something with nobody or share it with everybody.

A hassle, yes. But massive online social networks are still a young concept on the Internet, meaning lessons will be learned the hard way. And everyone with a Facebook account can count themselves as one of the testers.

When security leaks matter

There’s lots of news about the latest release of classified documents on WikiLeaks. If you want to have a peek, The Guardian has a great visualisation to get started.

I had a mooch. Everything I scanned through was thoroughly boring. As is the case with most information, even the classified stuff.

Over the past 10 years, I’ve worked with numerous government organisations. When discussing intranets, collaborative sites and knowledge management systems, one of the most frequent concerns is how to secure access to information and prevent the wrong eyes from seeing it. It is no small irony that the first ever monetary fines applied by the Information Commissioner’s Office (ICO) this month were for breaches that had nothing to do with networks.

The first was a £100,000 fine against Hertfordshire County Council for two incidents of faxing highly sensitive personal information to the wrong people. The second was a £60,000 fine against an employment services company for the loss of a laptop.

Here’s another example. A fair few years ago, I was in Luxembourg to present at an EU event. The night before the meeting I was in my hotel room when an envelope appeared under the door. Assuming it was details about the event, I opened it and pulled out the documents. The first hint that the documents might not be to do with the event was seeing Restricted stamped across the top of the first page. The second came when, on scanning the content, it became apparent the documents were something to do with nuclear weapons facilities across Europe. At that point, I looked at the front of the envelope and discovered that it wasn’t addressed to Miss S Richardson (i.e. me) but was instead addressed to <insert very senior military rank I can’t remember> Richardson. A rather terse conversation took place in the hotel reception as the documents were forwarded to their rightful recipient.

All three examples above were security breaches due to stupid human error. None involved networks or bypassing security systems. But only one required legal intervention – the faxing of content to the wrong people that was both legally confidential and highly sensitive.

And that’s the rub. Most security leaks don’t matter. A few years ago, some idiot in the UK tax office (HMRC) downloaded oodles of personal details to a CD and then lost it in the post. If there has been a bout of serious identity theft as a result, I haven’t heard about it. Ditto for a more recent breach that managed to send addresses and bank details to the wrong people. More people have been affected by a cock-up in the calculation of tax due to incorrect data than by identity theft due to lost data.

Most of the content on WikiLeaks is embarrassing to its targets (usually governments and/or large corporations) rather than dangerous. Yes, there are exceptions, such as failing to redact personally incriminating information from documents, which could put lives in danger. But they are the exception, rather than the norm we tend to assume when documents are classified as confidential.

One of the recommendations I give to clients looking to improve the use and value of their intranets is to devalue information. Make it easier to access. The confidentiality of most content is over-rated. Its importance and usefulness to other people are often under-rated.

For most organisations, content falls into one of three categories:

  • Legal – fines and prison (though that is rare) may result from failing to protect legal documents
  • Sensitive – contains information that could put an individual or organisation at risk, or be damaging to them
  • Everything else

There’s no arguing over legal documents. No prizes for guessing that most content falls (or should fall) under Everything Else. And sensitive… whilst some is easily justified (such as research into a new prototype that you wouldn’t want your competitors knowing about), an awful lot is considered sensitive purely to avoid embarrassment or conflict. Sometimes people should question verbalising their opinions, let alone putting them in writing… And if you work in government, for goodness sake don’t save it on your laptop!

One closing quote whilst on the subject of securing information. Whilst it refers to anonymity, it equally applies to trying to hide information from public view, which too often appears to be the reason for confidential classifications: (apologies, I can’t recall the source)

Providing a level of anonymity is great for play but prevents accountability

Lessons from Facebook’s experiments

[Update] Adding links and references as they bubble up on this topic…

There has been a range of news recently about Facebook’s latest approach to users’ privacy.

Wired has an article – Facebook’s Gone Rogue; It’s Time for an Open Alternative – explaining the concern being raised by many. By default, Facebook is now connecting and publishing every piece of data you choose to share on the platform. You may think you are only sharing your photos with your friends and family, but you are granting permission for Facebook to share your content with everyone and anyone on the Internet.

Robert Scoble has an article – Much ado about privacy on Facebook – with the counter-argument: that we’re kidding ourselves if we ever thought anything we share on a computer, especially one connected to a network, is private. Facebook is just exploiting that which others have exploited less visibly (or easily – and that’s the key difference) in the past, and in the process helping people find what they need in ways Google never can.

Robert has a point. However, the picture is a little more complicated. Not everyone wants to share their entire life online with everyone else and every organisation on the planet. Some people have very good and legitimate reasons not to. You could argue that such people simply shouldn’t be on Facebook. But in the past, it wasn’t a problem – the default behaviour in Facebook’s privacy policy was that information would only be shared amongst your network, which could be as large or small as you chose it to be. And your content stayed within the walls of Facebook unless you chose to opt in to third party applications. That has now all changed, and Facebook does not make deleting anything easy. Even if you choose to leave, if your ‘friends’ have already shared your content or tagged their own content with your name then your identity will continue to persist without you. And if you choose to stay, for certain content it is now all or nothing – if you try to opt out of sharing with everyone then it will be removed from your profile and friends will no longer see it either.

Facebook is transitioning from a site for building social networks between friends to being one giant social network. A new mesh of connected personalised data is being created that has never before been possible. And that mesh is being shared with whatever organisations Facebook chooses to do business with, at the same time as new tools are emerging that can mine massive amounts of data for patterns and profiling… We don’t yet know what all the implications – good and bad – will be. And whilst Robert highlights the good, history tells us there will also be bad. This is a live experiment that over 400 million people (and that’s just the active users) unknowingly volunteered to participate in.

All web sites great and small

Spotted a depressing article on Techmeme on Friday – Hackers turn Google into a vulnerability scanner (Infoworld). I suppose it was inevitable that this would happen.

Hacking group Cult of the Dead Cow (CDC) have kindly released a tool that uses Google to automatically scour web sites for sensitive information. Because it is automated, new and novice web sites are no longer protected by relative anonymity. If you are storing information anywhere in ‘the cloud’ and are worried about it being kept private and secure, the best approach is to run the tool for yourself and find out if your site needs fixing.
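
To give a flavour of what such a tool automates – this is an illustration of the general ‘Google dorking’ technique, not a description of the CDC tool itself – here is a short Python sketch that builds the kind of queries you could paste into Google to check your own domain:

```python
# Illustrative 'Google dorking' queries for checking your own site.
# This only prints search strings to run manually; it does not scan anything itself.

def dork_queries(domain: str) -> list[str]:
    # A handful of well-known patterns that tend to surface files
    # which were never meant to be public.
    patterns = [
        'filetype:sql "insert into"',              # exposed database dumps
        'filetype:log',                            # stray log files
        'intitle:"index of" "parent directory"',   # open directory listings
        'filetype:xls OR filetype:csv password',   # spreadsheets containing credentials
    ]
    return [f"site:{domain} {pattern}" for pattern in patterns]

if __name__ == "__main__":
    for query in dork_queries("example.com"):      # replace with your own domain
        print(query)
```

If queries like these return results for your domain, that content was already public – fixing the site matters far more than worrying about who might have found it.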

Whether Google likes it or not, they are as good as a monopoly on the Internet. There isn’t the proprietary lock-in achieved by a certain other technology company. But Google is the one location that most* people go to in search of stuff and therefore the one location most web sites aim to be discovered by. The trouble with technology monopolies is the lack of diversity. It’s what makes Microsoft software so vulnerable. Give a cold to one computer and you can pass it on to them all. Now the Internet is the focus and Google is the target to exploit. The CDC tool doesn’t care if your web site is on page 1 or page 1,000,001 of Google’s search results. It can and will find you (cue Terminator music).

The ultimate irony – the tool takes advantage of Google’s index, has been written using Microsoft .NET and is licensed as free open source… it’s not often you see those three areas come together as a single solution. Pity it had to be this one.

* According to comScore World Metrix, Google hosted 62.4% of web searches in December 2007. The next nearest rival was Yahoo with 12.8%, trailed by Microsoft with 2.9%.