Chris on Software Testing
A blog by Chris Neville-Smith.
-
Closed App Store or open Android Market? Both, please.
Apple and Google are at war over whose system of accepting apps is better. Here's why they should offer both.
There is little doubt that one of the biggest changes in technology over the last ten years is the adoption of the smartphone. As well as changing the habits of mobile phone users, it's meant a lot of changes to computers in general. Not all have been good - it has propagated some ridiculous patent lawsuits, and it has encouraged the rise of some highly dubious "freemium" games - but one of the best things it's brought, in my opinion, is the concept of the app store.
In the Linux world, the idea of the app store is old hat. For decades, most Linux distros have been organised into packages. Some are integral to the system, such as the kernel and desktop, some are standard packages such as LibreOffice, and some are extra packages that users add to their system. To add an extra package, you simply go to the Add/Remove program, click on what you want, and Linux downloads and installs it for you. There are a lot of advantages to this method: it automatically installs any other software you need to run the program, everything is automatically updated, and if you ever want to uninstall the program, Linux does it for you rather than relying on a dubious uninstallation package that came with the program. Although most software installed this way is free, the method has been used for paid apps too.
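To make the dependency point concrete, here is a minimal sketch of what a package manager does behind that one click. The miniature repository and package names are invented for illustration; real resolvers also handle versions, conflicts and removals.

```python
# A toy illustration of dependency resolution: the repository dict and
# package names are invented for the example.

def resolve(package, repo, order=None):
    """Return packages to install, dependencies first."""
    if order is None:
        order = []
    for dep in repo.get(package, []):
        resolve(dep, repo, order)
    if package not in order:
        order.append(package)
    return order

# A made-up miniature repository: package -> list of dependencies.
repo = {
    "libreoffice": ["libreoffice-core", "fonts-base"],
    "libreoffice-core": ["libc"],
    "fonts-base": [],
    "libc": [],
}

print(resolve("libreoffice", repo))
# ['libc', 'libreoffice-core', 'fonts-base', 'libreoffice']
```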
So, in theory, it is welcome that this practice has been adopted on smartphones. In practice, however, things are more complicated. There are two big differences between Linux and smartphones. Firstly, this approach has been opened up from a small, mainly tech-savvy group to the masses of smartphone owners. Secondly, this method of installing software has suddenly become a lucrative way of earning money. As a result, there are now thousands of app writers all jostling for status in a highly competitive market. And this is where Apple and Google have differed heavily in their answer to this challenge.
Apple's solution has been to vet apps through its App Store - a major deviation from most Linux distros, which broadly welcomed anything. Needless to say, this has been controversial, and it would be easy to write a whole article bashing Apple for this. Firstly there's the obvious argument over whether Apple should be dictating to iPhone users what apps they can and can't buy. There have been some dubious decisions to ban apps that come across as censorship of opinions, such as Phone Story. And Apple has been ridiculed for the apparently arbitrary way that apps get accepted or rejected. So far, so bad.
In Apple's defence, however, I'm not sure it's the sinister evil Apple conspiracy people have suggested. Microsoft took a similar approach to Apple's for Windows 8 apps. I read through those guidelines for a previous piece of work, and I can tell you there's nothing particularly unreasonable in them. Nonetheless, the outcome was a mess, with stories of perfectly legitimate apps getting rejected for strange reasons. I suspect the root problem is that vetting policies, no matter how well-intentioned, are in practice a nightmare to implement.
Apple, however, argue that their vetting procedure means users can be assured of quality and security in their apps. The claim of quality is dubious, because Apple is regularly criticised when unstable apps make it through the vetting anyway. But on the issue of security, they've got a point, and this is where Google's Android Market comes into play.
Smartphones, being essentially another kind of computer, share the same security principles as normal computers, but there are a number of differences. Potentially, the spoils of a hacked smartphone outweigh those of a hacked computer: you can plunder address books, sell on details of the holder's personal movements, and make money by getting the phone to dial premium-rate numbers. Luckily, Android and Apple have both proved themselves quite resilient to ne'er-do-wells - there's certainly no sign of a return to the bad old days when your Windows XP computer could catch all sorts of nasty viruses just because you visited the wrong site with IE6 and ActiveX.
But the chink in the armour is apps. No matter how secure an operating system is, a rogue app you willingly install is free to inflict all sorts of bad things - and there is little Android/OSX/Windows can do to protect you. Is that program you just installed accessing your address book for legitimate reasons, or is it sending it on to identity thieves? Is the program making calls meant to do that, or is it trying to sting you with premium-rate numbers? There's no easy way for the phone to know. And whilst this could be a threat to any operating system, it's Android that gets targeted time and time again.
It's a serious problem. In the old days, it was easy to blame users for downloading a dubious virus-ridden program they found on the internet, but in the days of app stores, where legitimate programs and virusware both come from the same place, how do you tell which is which? Most people cannot reasonably be expected to have background knowledge of the latest app scams out there. In theory, whenever you download an app you are presented with a list of actions your new app is and isn't allowed to do, but it's so confusing to the layman that the default response is to say yes to everything.
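One way a store or a phone could cut through this confusion is to flag permissions that look out of place for the kind of app being installed. Here's a rough sketch of the idea - the categories, permission names and example app are all invented for illustration, not how any real store works:

```python
# Sketch: flag permissions that look out of place for an app's
# category. Categories, permission names and the example app are
# invented for illustration.

EXPECTED = {
    "weather": {"internet", "location"},
    "game": {"internet"},
}

def suspicious_permissions(category, requested):
    """Return permissions beyond what the app's category plausibly needs."""
    return set(requested) - EXPECTED.get(category, set())

# A weather app asking to read contacts and place calls is a red flag.
print(sorted(suspicious_permissions(
    "weather", {"internet", "location", "contacts", "call_phone"})))
# ['call_phone', 'contacts']
```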
So, yes, iPhone has one over Android here. They can claim that the only way you can be sure of being protected from rogue apps is a properly vetted App Store. And the only way you can have a properly vetted App store is with an iPhone. But that means buying into Apple's idea of what you can and can't do with a smartphone. It's a high price to pay, and it's a choice people shouldn't have to make.
So, here is my proposed solution to all at Microsoft, Apple and Google: stop arguing about whether it's better to have an open or closed app store, and instead offer both. Vetted store or open store, let the users take their pick.
So, how would this work, and why would it differ from what Android does now? Well, the way I think this should be done is for smartphones, by default, to take apps from a vetted store. Exactly how fussy the vendors want to be over software quality or adherence to standards is up to them, but the important one is security: someone answerable to Apple/Android/Windows has to have a look at the app to see if it's doing something it's not supposed to. But the power to opt out remains with the users. If they want to switch to an open store, by all means display a message explaining the risks of unvetted applications, but if the user selects "Yes" to "Are you sure?", that's the user's choice.
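As a sketch of what that opt-out might look like (all names hypothetical; a real implementation would live deep in the OS settings):

```python
# Hypothetical sketch of the proposed opt-out: vetted store by
# default, open store only after an explicit, warned confirmation.

class StoreSettings:
    def __init__(self):
        self.allow_unvetted = False  # safe default: vetted store only

    def enable_open_store(self, confirm):
        """confirm(message) should return True only if the user agrees."""
        warning = ("Apps from the open store are not vetted and could "
                   "read your data or dial premium-rate numbers. "
                   "Are you sure?")
        if confirm(warning):
            self.allow_unvetted = True
        return self.allow_unvetted

settings = StoreSettings()
# In a real phone this would be a settings dialog; here, a stub user
# who reads the warning and says yes anyway.
settings.enable_open_store(lambda message: True)
print(settings.allow_unvetted)  # True - the user opted out knowingly
```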
This, I think, is a good balance. People who aren't interested in a choice of a gazillion apps aren't going to be bothered by the limited range on offer from a vetted app store. That saves them the problem of picking the reputable weather app from the dodgy one, and saves the hassle of understanding the confusing security permissions messages. People who want a wider choice, and who understand the risks of unvetted third-party apps, are free to do their own thing, as is anyone who finds the rules of the vetted store too restrictive for their liking.
I also believe it would do Apple (and Microsoft) some good to offer this choice. If few people choose to opt out of your vetted app store: great, you've got the confidence of your customers. If customers opt out in droves, that is a warning sign that you're doing something wrong, but also an opportunity to identify the problem and put things right. Surely that has to be better than people hating your vetting policy but being forced to stick with it. The advantage to Android from this policy, of course, is offering customers worried about spiked apps a safe option and peace of mind.
Wow, a blog article that picks out good points of both Android and iPhones. Is that allowed?
-
The big bang theory
If you look beyond the political point-scoring over the latest debacle on Universal Credit, the real lesson is that the "big bang" approach to IT projects rarely pays off.
A completely inaccurate depiction of the Big Bang.
Also not a good approach to most software projects.
Well, I hate to say I told you so, but... I told you so. Just under a year ago, I idly speculated that the next big story about an IT cock-up might be the upcoming Universal Credit system. I won't go over the whole thing in detail, but it boiled down to two concerns: firstly, I was sceptical over whether the intended launch of October 2013 was realistic; and secondly, I know from my experience of ID cards that there is a culture in the civil service of making promises that cannot be delivered. And what do I find yesterday? Oh dear, oh dear, oh dear.
Now, before I jump on any bandwagons, it's helpful to put this in a bit of context. Firstly, the National Audit Office is notorious for nit-picking (as is the Public Accounts Committee), and their supposedly damning reports are often little more than minor points blown out of proportion by the press. Secondly, benefit reform is a hugely controversial issue, and a lot of criticism (and defence) of this IT project will be down to ideological stance on benefits rather than whether the product does the job. (For the record, I think the principles of Universal Credit - that work should always pay, and simplification of a bloated, complex system - are a good idea, but there are valid points about using the reform as a smokescreen for cuts.) Nevertheless, it looks like there's more to this one than political hype. The October launch is now just six pilot sites, which is a common Civil Service method of back-pedalling in a way that lets them claim they "met" the deadline.
So what's gone wrong? There is a good summary of reported mistakes on BBC News, and the thing that struck me most is how similar these mistakes are to the ones made with ID cards. Comparing what's happening now to what happened with ID cards, I can tell you the following:
- The over-ambitious timetable: Oh dear, I'd really hoped that the civil service might have learnt their lesson here. I remember the first thing that set alarm bells ringing was my first look at the development timetable. Even with the promises that you could easily adapt an existing piece of software (they thought a program designed to issue security passes for buildings would do the job), I could tell straight away the schedule was unrealistic. You need to allow at least one month of testing and stabilisation for every month of programming - this project tried to fit six months' worth of work into three months.
The new passport issuing system has fared little better - what was supposed to roll out in 2010 is only just being rolled out now. They never seem to learn this lesson.
- There was no detailed plan: Now, for once, I can't accuse civil servants of repeating an old mistake. The ID cards project was based on the V-model method of software development, where requirements, system design and component design were done in order, and testing started with components and worked up to a whole system. This was a reasonable approach, but - as is often the fate of V-model-based projects - there were a lot of oversights in design that resulted in too many disruptive last-minute changes.
This project, however, seems to have made the opposite mistake. It is reported that it used the Agile model, where you start off by creating and testing a prototype, and through many iterations add extra features, developing the design as you go along. The Agile method can work if you have a suitably flexible project. This one was not: there was a fixed deadline, a big undertaking, and complex contracts awarded to suppliers. The plans for ID cards were probably too rigid, but it seems we've gone from one extreme to the other.
- A bunker mentality developed: This one doesn't surprise me at all. One of the things that shocked me the most about the ID cards project was the amount of toadying that went on. I sat in a training session where people tried to outdo each other on how enthusiastic they were about ID cards, including - I am not making this up - a helpful suggestion for how to encourage take-up, which was to only give benefits to people with ID cards, as if this would suddenly make the scheme popular. It was also taken as gospel truth that once ID cards were introduced, no future government could reverse it. (Um, yes they can, and yes they did.)
Any attempt I made to highlight the mildest of concerns about the state of the IT systems got nowhere, so it's no surprise if DWP underlings with constructive concerns fared little better.
- Poor financial management: I didn't see much of the financial side so I can't make much comment about this. What I do know is that there was a massive discrepancy in pay between permanent staff and interims brought in from outside, which created a lot of resentment within the organisation. To be fair, the civil service has made a lot of progress cutting down on expensive consultants when they're not needed, but evidently that didn't happen with Universal Credit.
- High staff turnover: Unclear what the exact cause of this was in DWP's case, but a common effect of badly-managed IT projects is that everyone wants to get out. This is especially common when staff are expected to have their lives outside of work put on hold in order to work round increasingly unreasonable demands, as happened with ID cards. To be fair, I believe the people at the top of the project genuinely tried to maintain a work-life balance, but there were too many people down the management chain who ignored this.
- Inadequate control over suppliers: I will be fair to the ID cards management here: managing the relationship between government department and contractor is difficult. You need clearly defined areas of responsibility so that everyone knows who is responsible for what, but you also want to avoid fiddly micromanagement going on between the two. The management of the ID cards project were perfectly aware of this challenge.
But for all of their efforts to get this right, something went badly wrong. I did a large part of the testing at the offices of the developers, and it wasn't long before I heard developers openly making derogatory remarks about individual senior civil servants and interims when I was clearly within earshot. The relationship between the programmers and the testers on the ground was all right, thank goodness, and we managed to ignore the squabbling and work together, but I dread to think what was going on upstairs.
- Ignoring recommendations: This one I can't comment on, because I have no idea if any recommendations were made to the ID cards project, let alone whether they were acted on - those sorts of things were kept out of view. What I do know is that, as well as the futility of anyone on the inside trying to give advice, successive layers of management were seemingly oblivious to the increasing notoriety of the scheme outside the bubble. Doesn't bode well for listening, and it seems that the bunker mentality has struck again.
In defence of the DWP, the project managers were probably unaware of all the mistakes made in the ID cards project, because we never got to hear about them. ID cards were binned on political grounds by a change of government, so the problems with the IT system never came to public attention. This time, however, there's no way you can fail to notice what's gone wrong. Some serious lessons need learning here before yet another IT project is embarked on with silly timescales.
But there is one other issue here, one that I think is more important than all of the above. I think the original mistake was to embark on an IT project of this scale in the first place. This is what I call a "Big Bang" project, where a date is set in the future when everything switches at once: IT systems, rules, maybe even working practices. Big Bang projects are extremely risky because if they go wrong, they can go massively wrong, in this case threatening to derail a flagship government policy. Sometimes a Big Bang approach is unavoidable; ID cards, for example, were a completely new thing that required the development of a completely new system. Even so, they had the safeguard of not needing to switch over millions of existing records.
I cannot understand why it was necessary to make the Universal Credit IT project so complicated. When one purpose of the project is to simplify the benefits system, one would have thought the easiest approach would be to adapt the existing systems to work with the new rules. I'd have thought it would have been reasonably easy to make the existing system for Jobseekers' Allowance (which already handles means-testing such as income and savings) also handle Universal Credit claims. Housing benefit and other benefits that are replaced by Universal Credit could then be set to zero, hopefully preserving the interface between the systems. The DWP system will need replacing eventually, as all systems do. But it's better to do that as a project in its own right, where you're free to do it over a sensible timescale without deadlines for government policy getting in the way.
It is not clear whether this debacle was down to a government minister imposing this silly timescale on civil servants, or civil servants choosing this silly timescale and telling government ministers it was achievable. That is not important. Well, it is important if you're more interested in scoring political points one way or the other, but that is what happens after every IT cock-up, and lessons don't get learned. Neither will lessons be learned if civil servants and government ministers blame each other. If things are ever to change, ministers and mandarins alike must appreciate a simple rule: the big bang approach rarely pays off in IT.
-
Bring on the naked laptops
If we’re serious about using technology to empower users, people should have the choice to buy a laptop, tablet or smartphone without the software.
This is my new laptop. Observant readers will notice that this is a Chromebook running Ubuntu. As Linux fans know, Ubuntu and most other Linux distributions can be legally downloaded for free and installed on any computer. The only question is what computer you choose. For me, a Chromebook seemed like a good bet: they are cheap low-spec laptops, probably incapable of running Windows 7, but Ubuntu is a resource-light operating system and I use my desktop for anything resource-intensive. Chrome OS is heavily geared towards users of Google services, like GMail and Google Docs, but I'm installing my own software so that doesn't matter. So, let's buy a Chromebook and install Ubuntu. Simple, huh?
Hah, I wish! You have no idea how much blood, sweat and tears I've been through to get to what you can see in that photo. It all boils down to this thing on Chromebooks called secure boot (aka verified boot). Oh boy. This is something that, in theory, is meant to protect you from hackers up to no good - I have used the words "in theory" for a reason, but I'll come back to that later. As far as Chromebooks are concerned, there is a way of switching off secure boot by going into "developer mode" (which isn't advertised widely, but if the intention is to stop people who don't know what they're doing from fiddling with settings, that's fair enough). Unfortunately, even in this mode, you still can't boot from a CD/USB drive, which is the normal way of installing an operating system. Never mind, there's an Ubuntu derivative out there called Chrubuntu, specially designed to be downloaded and installed from a command prompt in Chrome OS. Okay, that doesn't sound too bad.
So, how did it do? Well, firstly I discovered that the way you enter developer mode on a Samsung Chromebook is different from other Chromebooks. Then I made several failed attempts to install Chrubuntu, and eventually realised that the installation script I was using doesn't work for my particular model of Samsung Chromebook, which in fact needed a different installation script. Once it was installed, I had another nightmare getting LibreOffice installed, which came down to the Ubuntu UK mirror at uk.archive.ubuntu.com not serving packages for the ARM architecture (you have to use gb.archive.ubuntu.com instead). There are still other issues to fix, but currently Chrubuntu is only for open-source masochists. Yes, nothing beats the thrill of getting your new laptop to work, amidst fears you may have spent £230 on an oversized paperweight, but this is not an experience I'd recommend for Joe Public.
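For anyone who hits the same mirror problem, the fix boils down to pointing apt at gb.archive.ubuntu.com instead of uk.archive.ubuntu.com. A one-off snippet along these lines would do the rewrite - run it as root and keep the backup; the path is the standard Ubuntu one, but check your own system first:

```python
# One-off fix: point apt at gb.archive.ubuntu.com instead of
# uk.archive.ubuntu.com. Run as root; a backup is kept.

import shutil

path = "/etc/apt/sources.list"
shutil.copy(path, path + ".bak")  # backup before editing

with open(path) as f:
    text = f.read()

with open(path, "w") as f:
    f.write(text.replace("uk.archive.ubuntu.com", "gb.archive.ubuntu.com"))
```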
Why is this so frustrating? Because it doesn't have to be this complicated. If you want Ubuntu or any other operating system on a desktop computer, you simply buy a computer with a blank hard disc, insert the installation CD or USB stick, boot up, and away you go. Laptops are a different story. It is near-impossible to buy a laptop without a pre-installed operating system on it, paid for by you. And even if you think you've managed to buy a laptop without Windows, you may still end up paying for it, owing to the ridiculous arrangement where laptop manufacturers have to pay Microsoft for one Windows licence per machine whether or not it gets installed. But most Linux users choose to bite the bullet, pay a £40-ish premium for a product they didn't ask for, and forget about it.
Now even this is getting worse. Microsoft have also been joining in with secure boot, but unlike Google, there's not always an opt-out - and my ARM-based laptop would have had no opt-out had it been pre-installed with Windows 8 instead of Chrome OS. Microsoft argues that secure boot is necessary to protect you from rootkits, but is this really a proportionate response to the threat? The suspicion is that the real threat that concerns Microsoft is pesky users running operating systems Microsoft doesn't want you to use. This may sound paranoid, but as Microsoft have previously attempted to stop "naked" desktop computers being sold (the 5% or so sold without pre-installed operating systems), and were slow to announce that some (not all) kinds of computers would be allowed to opt out of secure boot, I struggle to find any more charitable explanation.
Tablets and smartphones are little better. These devices are, for all intents and purposes, another kind of computer; the only real difference is that they have touchscreens instead of keyboards. One would have thought, therefore, that you should be able to insert a USB stick and install whatever you like. Instead, installing another operating system on an Android device is about as complicated as my experience with a Chromebook. And this is at the mercy of Google voluntarily providing a mechanism to opt out of Android on a smartphone. They might withdraw this in a future update, just like Sony did on the PlayStation. As for anyone who bought an ARM-based Windows 8 tablet - forget it.
Now, I don't believe in leaping on the "Google good, Microsoft bad" bandwagon - there are plenty of questionable practices where Google has a case to answer. But on this very important issue of consumer choice, Microsoft is clearly the worst offender. Google hasn't been terribly helpful to users wanting to customise Android devices or Chromebooks as they see fit, but they have stopped short of outright blocking it. Microsoft, however, are actively preventing it (as is Apple, although I doubt many people would want to spend a fortune on an iPad if the first thing they're going to do is scrub the disk), and that's something I fundamentally disagree with on principle. If you've bought a computer with your own money, it's nobody else's business how you choose to use it.
But even the opt-outs offered by Google aren't good enough. If we are serious about computers empowering users, users should have the option of buying any kind of computer - desktop, laptop or tablet - without the software. As this is already possible for desktops, there's no excuse to claim it's too difficult, and certainly no excuse for laptops, which are functionally identical to their larger cousins. The current grand scheme to empower customers in Europe is the "browser choice" screen in Windows; I personally think that is a waste of time, because the choice of browsers was already there, just made a little more obvious. Instead, we need to look at where users don't get a choice. It's bad, it's getting worse, and the availability of naked laptops would do a lot of good.
-
Time to wise up to Freemium
The recent case of a £1,700 Zombies vs Ninja bill should be a wake-up call for how ruthlessly children are being used as cash cows.
“That will be £699.99, please.”
For all the criticisms I have of Apple, one of the things they got right was the App Store. They weren't the first to use this model (Linux distros had already used this approach for years), but they did pioneer its mainstream adoption. This has brought a lot of benefits: software installed through repositories such as app stores easily remains up to date, you don't have to search the internet to find the program you're after (so there's little danger of accidentally installing a spiked program masquerading as the one you want), and it's easy to remove anything you don't like (as opposed to hoping the program came with a working uninstall mechanism). It's also opened up the market in paid apps beyond the big players, and pushed down prices; no more will we be forking out £29.99 for very basic games. On the whole this has been a major step forwards.
Not everything about it has been welcomed. There are quite a few iffy questions about Apple's and Windows 8's over-zealous vetting policies, which I've discussed before. But lately I've seen a new breed of programs coming to app stores which I think needs questioning. These are known as "Freemium", and these apps, usually games, are free to download. But if you want to advance in the game, you have to pay real money to receive in-game power-ups. Let's make this clear: it is nothing like the old model of a free demo version and a paid full version - they make their money from customers who pay for upgrades again, and again, and again. Freemium advocates might argue that if you want to be a football champion, you have to spend money on a decent kit and training, but I don't agree. This is cyber-land, where "training" and "kit" are merely changing a few ones and zeros in your favour, and unlike real training and kit this costs nothing to make. I would rather liken it to the owner of a cricket pitch charging you extra for bowling overarm.
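In code terms, the pattern is as simple as it is cynical. Here is an invented sketch - no real game works exactly like this, but the shape is the same: play is free, progress is metered in real money:

```python
# An invented sketch of the freemium pattern: the download is free,
# but progress is metered in real money, charged again and again.

class FreemiumGame:
    def __init__(self):
        self.level = 1
        self.spent = 0.0  # real money handed over so far

    def advance(self, paid=False, price=0.69):
        if not paid:
            print("You've hit a wall! Buy a power-up to continue?")
            return
        self.spent += price  # charged again, and again, and again
        self.level += 1

game = FreemiumGame()
game.advance()                 # free play stalls almost immediately
game.advance(paid=True)        # every advance costs real money
print(game.level, game.spent)  # 2 0.69
```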
To be fair to the Freemium companies, this isn't entirely a new thing. The practice of gamers paying real money to better themselves in imaginary games has been going on for years without their help. For many years, people have willingly paid real money for virtual gold in games such as Warcraft on a virtual black market, in spite of the game owners trying their best to stop it. These practices have been taken to such a ridiculous extent that one Chinese player even ended up killing someone for real over the sale of an online sword. It was perhaps inevitable that someone would realise there was a whole market in paying real money to win an imaginary game. I want nothing to do with this - it seems to me like buying a gold medal and thinking this makes you an Olympic champion - but is it really my business? No-one's being forced to pay for this, so why force someone not to pay?
Good question. I have frequently berated Microsoft and Apple for depriving their customers of freedom of choice, especially in their app markets. I have never accepted the argument that customers need protecting from apps deemed to be of inferior quality to the alternatives, so it's not easy to suddenly claim we need to protect customers from unethical methods of payment. Surely we can decide for ourselves, if we're all adults?
The problem is, we're not all adults. I've noticed that Freemium games are increasingly being aimed at children - and young ones at that. Young children, with no concept of financial responsibility, are the easiest targets for tat whose retail price vastly outweighs the design and production cost. Ruthless marketing pre-dates apps - remember the controversy over Pokémon Red and Pokémon Blue? - but it also predates computer games completely. My Little Pony didn't need computers to churn out endless ranges of new ponies, and woe betide any parent who says no. The reason I am picking on My Little Pony is that this greed has gone straight into their latest app, where you have to pay as much as £35 to unlock new virtual ponies. Smurfs' Village has also come under heavy criticism for racking up large bills, and recently a 5-year-old child racked up a £1,700 bill with the blatantly child-aimed Zombies vs Ninja. Not all Freemium games are aimed at children, but increasingly it's the kiddie games that are behaving the worst.
The predictable response is to scorn parents for not having control of their children. I think that's a poor excuse, no better than a supermarket blaming parents for tantrums over sweets it deliberately placed at the checkout. With the shameless targeting of children, combined with the extortionate amounts Freemium games try to bleed off customers, and the fact that the contracts used by mobile operators make it difficult to keep track of what you're spending, we've got a very serious problem on our hands. Probably the most unkind but telling analogy I've heard is the business model of the cocaine dealer: the first hit is always free.
I don't want Freemium banned. Like other consumer products pushed at children, creating new rules rarely solves the problem - they are too easy to get round. There is a case for pressing Apple and Google to be clearer about in-game purchases - as it stands, some iPad apps mention this as an optional courtesy, and one has to question Apple's priorities given how restrictive their vetting procedure is. What we do need is for the public to rise up as one and stop appeasing these tactics.
With many freemium apps costing five times what you'd have paid for something similar in a shop, it's only a matter of time before people start wising up. And this may happen sooner than you think, because the mood is already turning ugly.
Done right, Freemium could still work as a business model. If they carry on taking users for granted the way they do now, it could be the next Instagram. And, unlike Instagram, it won't be missed.
-
Who needs 1984 when we’ve got Foursquare?
Online snooping is getting worrying – but if we want to stop this, we must ask some fundamental questions about social media.
The next poster in the series says "Facebook is privacy"
When George Orwell created Nineteen Eighty-Four and Big Brother in 1948, he could scarcely have imagined the future. Not so much the nightmarish vision of the Ministry of Truth, Ministry of Plenty, Ministry of Peace and Ministry of Love, but two things he would never have guessed. Firstly, the emergence of the god-awful reality TV show Big Brother (and all the other god-awful reality TV programmes it spawned), and secondly, a load of persecution complex-ridden Middle Englanders who say "It's just like 1984" every time they get a speeding fine. I suppose some bits bear resemblance to the book, but that tends to be things like petty council officials invoking anti-terrorist laws over littering. All in all, it's a bit of a damp squib.
But fear not, Mr. Orwell, all is not lost. Recently we have seen the arrival of a new program called RIOT (Rapid Information Overlay Technology). This piece of software uses information from social networks to track the movements of individual people. It is suggested this could be used as a way of monitoring people who are about to commit a crime - cue analogies to Precrime in Minority Report - but just like its fictitious counterpart, there are serious questions over how reliable this would actually be. Certainly there's not much enthusiasm from the police. Which makes me think the key market might be employers. Like a retail manager who wants to know if his staff are shopping at competitors. Or a civil servant checking which pesky underlings attend opposition party meetings in the run-up to an election. This could be fantastic news - if you are a control freak with lots of money and power.
There is just one small but crucial difference from what Orwell had in mind. The subjects of Oceania were forced to be monitored day and night in everything they did, through cameras, curfews and spies. RIOT, on the other hand, runs entirely off information that its unwitting subjects quite happily stuck in the public domain. Love your Facebook status updates? Can't live without your Tweets? So does RIOT. All this information about where you are and what you're doing is most useful, thank you very much, citizen. Better still, why not take information from Foursquare, a service that makes it trendy to reveal your location as often as possible. Who needs "Big Brother is Watching You" when you can say "Hey there, are you going to put all your private information online like the COOL KIDS do, or are you a LOSER?"
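And it doesn't take anything sophisticated to turn that public stream into a movement profile. Here's an invented sketch of the kind of aggregation a tool like RIOT is reported to perform - a few lines over public check-in data:

```python
# An invented sketch of the kind of aggregation a tool like RIOT is
# reported to perform: turn public, timestamped check-ins into a
# profile of where someone is likely to be at a given hour.

from collections import Counter, defaultdict

checkins = [  # (hour of day, venue), scraped from public posts
    (9, "Coffee Shop"), (9, "Coffee Shop"), (9, "Office"),
    (13, "Gym"), (13, "Gym"), (18, "Pub"),
]

profile = defaultdict(Counter)
for hour, venue in checkins:
    profile[hour][venue] += 1

for hour, venues in sorted(profile.items()):
    likely, count = venues.most_common(1)[0]
    print(f"{hour:02d}:00 - most likely at {likely} ({count} sightings)")
```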
This is not the first time someone has written an online snooping program that uses publicly-accessible information. Previous examples include "Please Rob Me", which informs you, me, and any local burglars which houses are empty, and the sex-pest bonanza "Girls Around Me", showing the location and physical appearance of females nearby.[1] I should point out that these programs were both written to prove a point - albeit in a highly irresponsible way - but that's little consolation for anyone affected by this. The Inner Party must be kicking themselves they never thought of it.
Now, as someone with no Facebook, Twitter or Foursquare account, it would be easy for me to scoff and tell everyone affected that they brought it on themselves. But the reality, I think, isn't quite so simple. This is an issue that can only be addressed with some fundamental, far-reaching questions about social media.
The problem is that, for many people, social media is now effectively compulsory. I have lost count of the number of people who say they'll Facebook me, as if this is the only way you communicate with people nowadays. (I mean, haven't these people heard of e-mail?) I personally think that friends who won't stay in contact if you're not on Facebook aren't worth having as friends, but I have a choice of friends who aren't so obtuse. Other people don't. This is especially a problem amongst teenagers, where invitations to parties and the like are now given exclusively through Facebook - and habits formed in teenage years can persist for a long time. And that's just individuals. If you're a business, or you're self-employed, woe betide you if you're not signed up to Facebook, Twitter, LinkedIn and mysociallifesbetterthanyourssothere.com.
Once you're signed up, social media sites have a very poor record on privacy. Oh, they've got an excellent record in producing privacy policies - it's just that the typical privacy policy roughly says you don't have any. The reason I left Facebook (apart from endlessly being contacted by people I was quite happy to have lost contact with) is that I got sick of all the times the site pestered me to add more and more personal information about myself. Facebook's claim to privacy lost any credibility when they started sharing information with friends' friends without asking you. Bear in mind that at least one of your Facebook friends is probably trying to break the world record for most Facebook "friends" they don't even know, which makes Facebook about as private as announcing your next relationship breakdown with a skywriting plane. I know there are all sorts of opt-outs available in social media, but the combination of apathy and confusing configuration settings renders them largely ineffective. As for safeguards against combining information from different social networking sites to form a highly intrusive profile of you - forget it.
Normally I would argue that privately-owned companies should be able to do what they like. But the very nature of social networks makes sites such as Facebook and Foursquare virtual monopolies. And as private monopolies, they have a lot of power but very little responsibility. Foursquare cannot credibly blame third-party apps for using public information it has collected, and neither can Facebook credibly blame its users for handing over private data it encouraged them to reveal in the first place. We need a serious debate about where social media stands in an increasingly lawless, privacy-disregarding internet. For what it's worth, I think social media should, at the very least, operate information sharing on a strict opt-in basis. And if users do wish to share their information, the sites should make it absolutely clear what this means and what the risks are. I don't know exactly how this should be done, but this push to make users share more and more private information online isn't doing any good.
If the big social media sites won’t budge, the only other hope is a culture change from the users. Strange as this may seem to some people, until a few years ago the world functioned perfectly well without Facebook. Social media itself is undoubtedly here to stay, but do we really have to keep the whole world informed of every aspect of our wildly trumped-up social lives? Not all techno-crazes stick around – few people today want the latest Jamster ringtone (thank God). It would, I think, be better if this fashion for sharing all your information online became a passing fad – maybe with a return to old-fashioned offline boasting. If this sounds too difficult, just think what we could achieve. When the Establishment creates the Ministry of Online Privacy, we’ll know they’re rattled.
[1] In Foursquare's defence, it's only fair to point out they did block access to Girls Around Me as soon as they found out about it. However, all this really proves is that next time you want to use Foursquare for snooping or stalking, you just make sure they don't know what you're up to.
-
A harsh lesson for Facebook
As expectations for a free internet increase, more novel ways have to be found to make money. Instagram is a prime example of how not to do it.
“Hello John. You only did six Facebook Status Updates yesterday. Why don’t you buy
the new iThing plus max supreme, with new Facebook infinity plugin included?”
There's a famous scene from the Steven Spielberg classic Minority Report depicting a possible future of advertising. In the film, whenever our hero John Anderton enters a shopping centre, the nearest advertising billboard scans his irises and says "You, John Anderton, need a holiday / designer jacket / ticket to the Superbowl." (And when he gets a new pair of eyes on the black market, the adverts change to "You, Mr. Yakamoto, need a holiday / designer jacket / ticket to the Superbowl.")
Like most science fiction films, it sought to portray an uncomfortable vision of the future, in this case one with scant regard for civil rights or privacy. However, it appears that the advertising industry completely missed the point and thought Mr. Spielberg was portraying a rosy future where hard-working businesses can sell more products to consumers through a "relevant advertising experience". At least, this would explain the logic behind those internet adverts of "57-year-old [Insert location you are accessing internet from] Mom looks 27 - click here to discover her secret". It would also explain why, when you look at one website, adverts for that product keep following you to other sites - an action I find comparable to sales reps from Boots following you into Debenhams and Costa to pester you into buying the shampoo you were vaguely browsing.
The technology needed for full Minority Report-style advertising isn't quite there yet, but recently a photo-sharing site decided it would join in the fun. Yes, following its acquisition by Facebook, Instagram helpfully informed its customers, somewhere in its new terms and conditions, that in one month's time it would have the right to use your photos for any advertising it wanted. One problem: in most cases, it's not just the consent of the uploader you need. You also need the consent of the photographer (who is not necessarily the uploader), and for adverts you really need the consent of the people in the photos too. So really the only practical legal use is people's photos as personalised adverts directed at them. Not sure what they had in mind - maybe "If you liked these hills, you'll love the hills in Bratislakislavia, which you can now reach with cheap flights from us. Click Here." Anyway, we'll never find out what their plans were, because a massive backlash forced them into a U-turn.
As I've said before, it is unfair to vilify a website simply for seeking new ways of getting commercial revenue. I can only think of one major website that runs itself entirely on voluntary donations (Wikipedia), and that is only possible through an unprecedented amount of goodwill, both from donors and volunteer contributors. The rest cannot run themselves for free. How far it's morally acceptable to go is open to debate; some web users, for instance, argue that even the most non-intrusive advertising is bandwidth theft, whilst some less scrupulous advertisers would see no ethical issue in, say, putting an ad for a Wonga loan on a debt advice site. The moral debate, however, is a side-show: as it stands, there are very few rules against intrusive advertising. They can do it, and they are.
But no matter how blasé you are about advertising ethics, there is one thing you ignore at your peril: how your customers react when you go too far. And this is where I think Facebook's management of Instagram is a problem, because Facebook has a record of getting away with anything. It is one of the most heavily-criticised sites for its casual disregard for privacy. And yet every time Facebook makes a controversial change to its privacy policy, Facebook users usually react en masse by either joining a disapproving Facebook group or writing something disapproving in a status update. This is not a sweeping generalisation about all Facebook users being apathetic, but more an observation of how hard it is to vote with your feet. If you leave Facebook, you forfeit your network of Facebook friends. That's not an issue for people like me who found the whole concept of Facebook friends utterly pointless, but I'm in the minority here. Your Facebook friends won't be waiting for you on Google+.
That safety net does not apply to Instagram. Migrating to another site is much easier: you just open a new account, upload all your photos, and close your Instagram account. No need to worry about how your friends will view your new site – search engines will pick it up in no time. But with Facebook so used to doing what it likes without consequence, it seems that complacency overruled common sense. And the rest is history. The outcry forced them to back down. Even this may be too late. Those who went through all the trouble of migrating to Flickr are unlikely to bother coming back. Facebook may well have changed Instagram into a $1bn waste of money.
What is most frustrating is that this was completely avoidable. There are plenty of ways of making money without alienating your users. Google Blogger provides AdSense on blogs on an opt-in basis, with a cut for both the blogger and Google, and enough left over to pay for the ad-free blogs. Wordpress funds its blogs through a series of optional paid extras, again with enough revenue left over to fund a free blog service. Surely there must be ways for a photo-sharing site to make money? How about, instead of using other people's photos without permission, scouring Instagram for the best photos and offering a deal to sell them as stock photos? You get a cut, Instagram gets a cut, there's enough left over to run the site, and there's the added bonus of an incentive to upload the best possible photos. Sadly, such is the damage to Instagram's reputation that we'll probably never know if they could have done this.
Will Facebook learn lessons from this? I hope so. Will advertisers in general learn lessons from this? I suspect not. I suspect they've already moved on from Minority Report and they're now getting inspiration from the "RESUME VIEWING" scene in Black Mirror. That's the bit where a computer detects you're looking away from an annoying advert, displays a message saying "RESUME VIEWING", and plays an earsplitting high-pitched noise until you give in and look at the screen again. Luckily, there currently isn't the technology to create something like this in real life - wait a second, could you adapt a Kinect to do that?
Oh dear. I've got a bad feeling about this.
-
What is going on with Google’s takedown requests?
Iggle Piggle: The new boss of the Pirate Bay?
I know I promised to take a break from Microsoft blog posts, but here’s a third one in a row. Not Windows 8 this time; it's about how Microsoft has got itself in the news for the wrong reasons. It’s been spotted that Microsoft has been sending automated copyright infringement notices to Google claiming that its copyrighted material is being infringed by sites such as, err … the BBC and its well-known hotbed of online piracy, CBeebies. The BBC was unaffected as it’s on a Google whitelist, but other sites weren’t so lucky, including perfectly reputable sites such as AMC Theatres and RealClearPolitics.
First of all, embarrassing though this is for Microsoft, it’s not fair to single them out. Their only crime is getting caught. The majority of copyright enforcement comes from the big film and record labels. As I’ve previously written, wanting to protect their material is reasonable, but their record of heavy-handedness isn’t. Although Microsoft has been criticised for collusion with the record companies, on this issue of dodgy automated takedown requests, I imagine the record companies are doing the same, if not more.
The root problem is that online piracy and copyright is a horribly complicated issue, and it doesn’t help that rules designed to be fair are being abused on both sides. I’ve looked at Google’s information on takedown requests and it seems fair and even-handed, so what’s going wrong? To consider this, let’s go back to the beginning.
First of all: something has to be done. I don't want to repeat my arguments at length, but the short answer is: i) films and music can't be made for free, and ii) the pirates forfeited any moral high ground when they started raking in huge profits. But, as we all know, with the pirates swift to place material on foreign servers out of reach of the law, and to move on every time a takedown writ is issued, lawsuits alone don't do the job. So attention turns (in part) to making it harder to find the stuff in the first place; the idea being that if it takes effort to pirate your new favourite single, you might decide that paying 99 whole pennies for a legal download isn't such a bad option after all. One obvious thing you can do is stop illegal downloads showing up in Google search results. No grounds for the pirates to complain - it's not Google's job to make life easier for them. So far, I have no objections. And then things start to get messy.
There are two snags: firstly, Google is an automated service covering gazillions of websites, and it is simply not practical for the staff at Google to fully investigate every claim. Secondly, a lot of copyright holders have been getting greedy and claiming copyright over things that aren't theirs. It's not just the film and record companies who are guilty of this: film studios have tried to get unfavourable reviews hidden, businesses have tried to get rival companies blocked from Google, and employers have tried to get critical employees' blogs delisted. Google claims to be refusing these requests, but those were easy calls because the copyright grounds were blatantly frivolous. One must wonder how many borderline cases get pushed through by the side with the bigger legal team.
But, this worry aside, Google makes a good attempt at a fair copyright policy. If you want to claim copyright over someone else's web content, you have to back it up with an affirmation that this is true to the best of your knowledge. If you're found to be lying, you might face criminal charges. Google, in turn, informs webmasters of alleged copyright infringements and gives them a chance to state their case. Again, if you lie when making a counter-claim, you can also go to court. This seems fair enough - reporter or defendant, if you are in the right you shouldn't mind making yourself answerable to the law. That ought to weed out most pirates without giving copyright claimants undue power - or so one would think.
In the last few months, things have suddenly changed. There has been an explosion of takedown requests, now seven times what they were in June. How has this happened? It's not like there's been a sudden surge in pirated material, so I can only assume it's a surge in copyright claims. So presumably lots of companies, not just Microsoft, have started sending automated notices of copyright infringement to Google. To some extent, you can argue this is necessary in order to keep up with pirates repeatedly re-uploading the same material. But as the recent debacle with Microsoft and CBeebies shows, these automated requests can get it badly wrong.
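Guarding against this needn't be complicated. Here's a hedged sketch of the sort of sanity checks a search engine could insist on before acting on an automated notice - the whitelist and error-rate threshold are invented for illustration, not Google's actual process:

```python
# A hedged sketch of sanity checks before acting on an automated
# takedown notice. The whitelist and threshold are invented.

from urllib.parse import urlparse

WHITELIST = {"bbc.co.uk", "amctheatres.com"}  # known-legitimate domains
MAX_ERROR_RATE = 0.05  # distrust claimants whose past notices were >5% wrong

def accept_notice(url, claimant_error_rate):
    domain = urlparse(url).netloc.removeprefix("www.")
    if domain in WHITELIST:
        return False, "whitelisted domain - needs human review"
    if claimant_error_rate > MAX_ERROR_RATE:
        return False, "claimant's error rate too high - needs human review"
    return True, "accepted"

print(accept_notice("http://www.bbc.co.uk/cbeebies", 0.01))
# (False, 'whitelisted domain - needs human review')
```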
The current consensus seems to be that it's all Microsoft's fault and not Google's, with Google apparently only doing what it legally had to do. I don't quite agree. Google has to take its share of responsibility. It's one thing taking the word of a human who stands to go to court if found to be lying; it's another thing to take the word of a computer. Neither Microsoft nor any other company should have "computer error" as an excuse for false copyright accusations with impunity, and it's up to Google to put their foot down. If I was in charge of Google, I would think twice about accepting automated reports; at the very least, Google should only allow them if Microsoft and everyone else can demonstrate they're taking steps to stop false positives. Few people would argue that Google search results alone are going to stop piracy - the most they can hope to achieve is to persuade some casual pirates that legal downloads are easier - but it would be a stupid own goal if the moves to stop piracy were derailed by a faulty computer program.
-
Lessons from the Narwhal
There is a lot at stake with the new user interface in Windows 8. Ubuntu’s experience from 2011 gives us clues for how this might work out.
Brace yourself Microsoft. It's your turn now.
With a new Windows version coming out, Windows 8 is of course dominating the tech blogs. I haven't looked much myself, but I'm assuming there's gushing praise from Microsoft fanboys and scathing remarks from the hardcore Mac and Linux fans. I really have no appetite for a string of blog posts on one product, but having had a look at Windows 8, there's now one extra thing that's grabbed my attention other than the Windows Store, and that's the Metro interface (it's now called the "Modern UI" due to a trademark row, but everyone's still calling it Metro). I promise to move on to something else next time.
This new interface has grabbed a lot of attention, and not all of it is good. Microsoft's incentive is to make Windows 8 more friendly to tablet users, a market where they desperately want to compete with Apple and Android, but they risk alienating their desktop customers. I have now tried out the interface and I can confirm it's a right pain in the bum to operate with a mouse compared to the Start menu it replaced. I can see this being good for touchscreens, but there's no sign touchscreens are going to replace keyboard, mouse and monitor in the office. Usability is a major issue for mass consumer software, and from the sound of some commentators you'd think this was Windows suicide.
Well, the good news for Microsoft is that there’s a favourable precedent here. Two years ago Canonical did something similar with Ubuntu 11.04 aka Natty Narwhal and its controversial Unity interface. There were a number of factors behind this decision, touchscreen-friendliness being just one of them, but there was a similar scornful reception from the Ubuntu faithful as there is from the Microsoft faithful now. I was just as sceptical about Unity then as I am about Metro now. In fact, after trying out Unity on my test partition, I decided to upgrade to Ubuntu 11.04 – but only once I knew how to force it back to the old interface.
And here's the good news. I kept Unity on my laptop and netbook (Unity was partly designed as a space-optimised interface for small-screen netbooks), and after a while I started to understand where it was coming from. The Dash was a nightmare to use as a replacement for the launcher (the Linux equivalent of the Start Menu), requiring numerous extra clicks to launch a program. But once you put all your key programs in the sidebar (which, realistically, is unlikely to be much more than the web browser, word processor and spreadsheet for most people), that's not a big issue. I found the buttons at the side are a good way of keeping track of different windows belonging to the same program (very similar to the Windows 7 taskbar), and when used in conjunction with the new-look virtual desktops it becomes a powerful way of organising all your windows. There were a couple of features I felt were more trouble than they were worth on desktops (the overlay scrollbar and the global menu), but they were easily disabled. When the next release came out six months later and the Unity interface had been refined a bit, I finally made the leap. And this was the experience of a lot of users.
And this is an important lesson in usability for Microsoft and everyone else: it can take months or even years to know if an interface change is a success. I’ve said previously that usability testing is hard because developers and testers, by definition, don’t know what it’s like to be a novice on a computer. One solution is to bring in non-technical people for usability tests, but this example shows a limitation: how can a few days’ or even a few weeks’ testing tell you what users will think in six months’ time? We know from Ubuntu that it can take months or years for a change to gain acceptance from your users. Canonical and Microsoft both chose to listen to the sceptics, then go ahead anyway and hope for the best. Canonical got away with it for Unity, and that’s where there’s hope for Microsoft and Metro.
But it would be foolish to use Unity as proof that it’ll all be fine in the end. There’s a fine line between introducing unpopular changes that gain acceptance over time and imposing unpopular changes that stay unpopular. Facebook’s timeline looks set to be the latter. The Microsoft Office ribbon is at best a “Marmite change”, where you either love it or hate it. Unity wasn’t a complete success, because some users switched to Linux Mint (an Ubuntu fork that, amongst other things, stuck with the old Windows XP-like interface). In any case, there’s only so far you can go using Linux as a precedent for Microsoft. Linux users are a different demographic group to Windows users: generally more tech-savvy (so more likely to customise their favourite OS from the default settings, but also more likely to switch distros if they get too annoyed), and generally with different expectations. There’s no knowing if a change accepted by Linux users will also be accepted by Windows users.
If it was up to me, I would have made the new “Metro” interface the default for tablets and the old interface – including Start button – the default for laptops and desktops. Nothing particularly against this interface; just that Unity struck me as a good all-purpose balance between desktops, notebooks, netbooks and tablets, whilst the new Windows 8 screen strikes me as heavily optimised toward tablets. Or, at the very least, they should make it easy to switch back to the old interface with a few clicks. I know that maintaining two different interfaces is extra work (Linux users who stray from the default settings too much will find themselves running into bugs quickly), but – come on, it’s Windows, the highest-selling piece of software in the world, Microsoft can afford to do this.
But it’s unlikely this change of interface will be a Windows killer. Microsoft has many things to worry about – the lack of Windows 8 apps, a minor share in the smartphone market, open-source competitors getting better, the prospect of Android making the leap from tablet to desktop – but the Ubuntu experience shows that outrage over user experience tends to be a short-term thing. The real test will be how well Microsoft adapts to the changes to the IT market in the last decade – and it will take more than a change to the start page to make or break Windows.
-
Cross-platform is the way to go
AMD will shortly be enabling Windows 8 users to run Android apps. I would advise Microsoft to welcome and support this.
Mr Ballmer, surely you won't deprive your loyal customers of this?
Last year I wrote a blog article on “The Ghost of Vistas Past”, outlining how important it is to Microsoft that Windows 8 is a success (along with how the mistakes from Windows Vista overshadow the reputation of all subsequent releases). Well, we’re now approaching the release date and I’ve been looking at the pre-release version. Have to say, there have been a lot of Windows 8-bashing comments, but it’s hard to tell whether this is just users getting used to the new tablet-optimised interface or something more. At the moment, this could still be anything from a revolutionary ground-breaker to a Vista Mark II. But I’m going to make Microsoft a helpful suggestion regarding their controversial app store.
Firstly, an app store is a good idea. Linux distros were doing this years before there was an iPhone, back when it was called “package management”. It’s good because instead of a mish-mash of programs from installation CDs or the internet, there’s a central database which takes care of all installation and updating. And as your computer keeps track of which packages installed which files, if you want to uninstall anything, you can do it properly, instead of relying on unreliable uninstallation files that came with the program you don’t want. So far, so good.
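To make that bookkeeping concrete, here is a toy sketch in Python of the record-keeping behind any package manager – purely illustrative, not how apt, RPM or any real tool is implemented:

    # The manager remembers which files each package installed,
    # so uninstalling removes exactly those files and nothing else.
    installed = {}  # package name -> list of files it owns

    def install(package, files):
        """Record the files a package puts on the system."""
        installed[package] = list(files)

    def uninstall(package):
        """Remove exactly the files the package installed."""
        for path in installed.pop(package, []):
            print("removing " + path)

    install("someapp", ["/usr/bin/someapp", "/usr/share/someapp/data"])
    uninstall("someapp")  # no orphaned files, no third-party uninstaller

The real thing also resolves dependencies and verifies checksums, but the central idea – the system, not the program, owns the installation records – is as simple as that.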
The problem is that the number of official Windows 8 apps is reportedly very low. As late as last month (September), it was being reported there were only 2,000 apps, compared to 500,000 for Android. You can argue that it’s good to have a small number of apps that you know are high-quality and reliable, but that’s no good if you can’t find the app that does the job you want. There’s a debate around whether Microsoft is being too stringent accepting apps in the first place,[1] but the real obstacle is the incentive to write these apps. There’s no getting round the fact that Apple and Android, having got to the smartphone and tablet markets first, dominate them. Just as Linux suffered for years from a lack of software when Windows dominated the desktop market – which Microsoft used against it – some might say that Microsoft is getting a taste of its own medicine in the smartphone and tablet market.
But help is at hand from AMD. As a result of a collaboration with BlueStacks, it will shortly be possible to run Android apps in Windows 8, on desktops, laptops, tablets, and possibly smartphones. Even if, as Microsoft hopes, their app store is a bastion of high-quality apps, by adding the choice of Android apps you turn Windows 8 tablets into far more versatile devices. Can’t wait for the latest Angry Birds to be ported to Windows 8? No problem, just install the Android version and away you go.
And the response from Microsoft? Apparently nothing. Their strategy was to encourage developers to write more apps for Windows 8, so I assume this strategy is unchanged. I can’t understand the logic behind this. With Windows Phone still a niche product with no immediate prospect of growth, it makes little commercial sense to write an app for Windows instead of the big two. Even porting an app from one operating system to another is tricky. If neither app writers nor Microsoft are able to put in the work getting apps to run on Windows, one would have thought they’d welcome AMD doing the job for them.
And leaving AMD to do their own thing is far from a safe bet. We don’t yet know how reliable this will be. In theory, you can run Windows programs on Linux using Wine, but this is such a nightmare to get working that many Linux users don’t bother and use the closest equivalent native Linux program instead. The problem is the masses of fiddly settings you need to tweak to get a Windows program to properly interface with all the Linux components such as sound, graphics, printers, internet, you name it. CrossOver is reputedly better, but only because you pay people to do all this fiddly work for you. Even so, the range of programs certified to work under CrossOver is limited. How much work is AMD going to have to do to get 500,000 Android apps working in Windows? It will depend a lot on how clever their cross-platform component is, and only time will tell. But it would be an awful lot easier if Microsoft threw its weight behind this. They could integrate this into Windows 8, they have deep enough pockets to test and tweak all the apps they want, and I’m sure it would be a better job than AMD and BlueStacks going it alone.
I can only imagine the thing Microsoft stands to lose is pride, especially after years of telling their customers they’re best off sticking to Microsoft Windows, Microsoft Office, Microsoft Internet Explorer, Microsoft Exchange, and Microsoft Everything (or, only where Microsoft doesn’t have a program, something written for Microsoft Windows).[2] It’s one thing being behind in the smartphone/tablet market, but quite another thing to admit it by welcoming apps written for a rival. And it could raise questions. If Microsoft is depending on apps written for Linux-based Android to sell Windows 8, how can they justify refusing to port Microsoft Office to Linux? Serious question.
My advice to Microsoft is that, in the long run, they would be better off forgetting about Microsoft Everything and going back to competing in a cross-platform world. At the moment, Microsoft are hoping that as long as customers need Windows to run Word and Excel, they’ll buy Windows; but with Libreoffice catching up on everyday functions, and file compatibility improving, customers may soon question whether they need Word or Excel in the first place. Where Libreoffice won’t be catching up any time soon is the advanced features of MS Office. CrossOver’s chief selling point is running MS Office on Linux, and many Linux users pay for this. Remove the complication of a compatibility layer and you can expect demand to increase. For every risk posed to Microsoft by going cross-platform, there’s an opportunity.
Will Microsoft embrace multi-platform? So far, it’s hard to imagine them dropping their old models. In 2005 technology columnist Bill Thompson hypothesised a future where Microsoft re-dominates the IT market with its own version of Linux called Micrix. But he didn’t seriously expect Microsoft to remotely consider this route, and they didn’t. Seven years on, I still don’t expect anything this radical, but there is one crucial change: Apple is rapidly overtaking Microsoft as the all-controlling bad guy. Can Microsoft rediscover itself as the guardian of a free IT system? Relive the heyday of Windows 95, the OS that freed you to do anything with your computer?
The daft thing is that Microsoft is slow to recognise its own cross-platform successes. The Microsoft Kinect, as well as being quite successful on the Xbox, is a very popular accessory for all sorts of other uses. But it wasn’t until hackers took the matter into their own hands that Microsoft realised they were on to a winner. Could we see the same take-up for Android apps in Windows? Microsoft Office on Android tablets? The sooner Microsoft sees this as a good thing, the better.
[1] Of course, the most notoriously stringent app store is Apple’s. The crucial difference is that Apple, as the first entrant into the commercial app market, can get away with it. Few developers want to cut themselves out of Apple’s app market, however many hoops they have to jump through. With the smaller Windows Phone market, the same app developers might decide it’s not worth the hassle.
[2] Although, to be fair, this is still an improvement on Apple. At least with all things Microsoft you get a free choice of hardware. Under Apple’s ideal, you don’t even get a choice on that.
-
Politics versus Plan B
There is no more important place to get IT projects right than central government. Unfortunately, internal politics encourages the opposite.
This is a visual metaphor, with little or no relevance to the actual article.
Who’d be a prime minister? One day it’s all “We Love You Dave/Gordon/Tony”, then the moment you’re under 35% in the opinion polls it’s a catalogue of everything your government’s doing wrong. This year it’s been the granny tax, pastygate, the petrol non-strike, G4S and the West Coast rail franchise, to name a few. All that’s missing is a good old IT shambles. After all, the last government kept us busy with the lost child benefit discs and the ill-fated NHS system. Well, for all you restless journalists itching for a story, I recommend you keep an eye on the upcoming Universal Credit benefit system.
In case you’re not following UK politics on an hourly basis, Universal Credit is a plan to merge a number of key benefits such as jobseekers’ allowance and tax credits into a single system. Benefits are a controversial issue right now, but this is a software testing blog, and the point of interest is that the whole plan depends on a new IT system being developed. Now, before I go any further, I must stress I don’t know anything about how this project is going. For all I know, it could all be going swimmingly. But what if it isn’t? There doesn’t seem to be any kind of Plan B ready if the project goes behind schedule. And if this happens, it won’t be the first time, because I worked on the last IT project where that happened.
Which IT project, I hear you ask? Well, please don’t be too harsh, it wasn’t my idea, they made me do it, but – I did software testing for ID cards. Yes, those ID cards. Remember them?
Now, I could give you a blow-by-blow account of ID cards testing, and at some point I probably will. However, the one overriding lesson I learnt from that project is the dangers of mixing politics and software testing, especially when it’s such a controversial scheme. And I’m not just talking about party politics, but also internal politics. The result of this, the thing that made the project such a nightmare, was the Identity and Passport Service committing itself to a deadline it couldn’t meet.
I don't know why IPS signed up to such an unrealistic deadline, but I can guess. When you have a project as politically controversial as ID cards, no government wants to show any sign of weakness; they say it’s going ahead and say when it’s going ahead (preferably far enough ahead of the next election to make it difficult for a future government to reverse it). Senior civil servants, meanwhile, are eager to demonstrate a “can do” attitude and make promises on when a project can be delivered. Once a date is set, any slippage is politically toxic for both. The press and opposition will see it as a U-turn and hammer the government. The civil servants who promised delivery on time get it in the neck for failing to deliver. Politically, it’s far better to stick to the date you set. Plan B? There is no Plan B.
But whilst this might have been a prudent political decision, it was a terrible IT decision. For a system as complex as ID cards, built from scratch, there was no way of predicting when it would be ready. And, worse, I can only assume the people who set these deadlines didn’t properly understand IT projects, because the moment I saw the timescale intended for testing I could tell it wasn’t realistic. Inevitably, the project descended into what is affectionately known as a Death March. Slippages in programming were compensated for with cuts in testing. With the delivery date set in stone, the only way to meet the deadline was to declare the system ready to go, when the reality was quite different.
Fast forward to 2012, and I fear that not everyone has learnt these lessons. In fairness, when Iain Duncan-Smith was questioned by a Select Committee last month,[1] he did try to explain that this wasn’t an October 2013 “big bang” and it would instead be introduced in stages. But when a Downing Street spokesman was asked whether there was any possibility the October 2013 start date could be allowed to slip, the response was simply, I quote, “It's on track to be implemented in that timetable.” Which, at the risk of bringing up the same analogy again, is like responding to a question on lifeboat capacity with “The Titanic is unsinkable.”
In IPS’s defence, when the beta-quality ID cards system went live, they did at least have the sense to keep the flow of customers manageable. Rather than open the floodgates on day one, they took in a few enrolments at a time, allowed for lost time with the inevitable bugs, and only ramped up the intake as and when the system was stable enough to take it. I’m not sure whether such a safeguard will exist for Universal Credit. The ID cards system was a pilot designed for, at the most, tens of thousands of records. Benefits records, however, run into millions. Are we looking at transferring, say, a million cases onto the new system by December 2013, ready or not? Even if well-intentioned managers at the DWP are trying to learn lessons and keep the timescale sane, will they be leaned on by ministers or permanent secretaries to hurry up? At the moment, I can’t rule this out.
There is no greater enemy to software testing than politics, be it party politics, management politics or just plain office politics. If you want testers to do their jobs properly, you must be prepared for them to tell it as it is, even if it’s not what you want to hear. You won’t get this in a culture where everyone is expected to be “positive” (i.e. anyone who expresses concerns is ignored or told to shut up). The opposition and press would do well to keep their eye on the Universal Credit system and keep their laptops poised. The government and civil service would do well to realise that it always pays off to have a Plan B.
[1] Actually, I don’t think it should have been Iain Duncan-Smith answering IT queries at all. Government ministers can only be as accurate on technical matters as the information they’ve been briefed on. I’d much rather that, on important IT matters, Select Committees directly questioned the people responsible for the IT, rather than use the minister as a go-between. This, of course, relies on the IT people answering questions honestly, which won’t happen if they’re worried about speaking out of line with their department. It could require a big culture change if politicians are ever to know what’s really going on in flagship government projects.
-
Where's H. G. Wells when you need him?
Is advertising really legalised lying? In cyberspace, it seems, the answer is still yes.
Bad and wrong. But is this coming to YouTube?
I’ll start with an obvious defence: if we want an internet, we need ads. Some websites, such as this one, are done by people in their spare time (which can be sporadic, as this one has just shown), whilst others, such as BBC News, are funded by other means. But for many sites, somebody has to be paid to create the content, and the only source of revenue is the website itself. Even ad-free sites can depend on adverts. This blog, for instance, has no adverts, and I want to keep it that way, but I’ll admit that Blogger would never have developed the blogging tools and hosted the blog for free without the cut Google gets from adverts on other blogs it hosts. There are some interesting suggestions for online micro-payments as an alternative to ads or subscriptions, but there is little interest in making this a reality. Like it or not, adverts are just as much a part of the internet as they are of ITV.
And the obvious complaint? Internet ads are an absolute pain in the backside. At least on ITV they leave you alone when you’re watching the programme. Web adverts, on the other hand, seem hell-bent on grabbing your attention when you’re trying to read something else. All too often they rely on big flashing boxes, garish animations, and the ad itself leaping out of the box and covering the rest of the page. As well as being immensely irritating, this makes a lot of pages inaccessible to people with disabilities – pages that would otherwise have been fine. It’s little wonder people are turning in droves to products like AdBlock Plus.[1]
But strange as it may seem, annoying the hell out of people isn't the biggest problem with internet advertising. The worst problem is how misleading some of these adverts are - if not outright lies, the sort that H. G. Wells was on about when he said "Advertising is legalised lying". We've all seen the adverts for "London/Middlesbrough/Carlisle/Bristol/wherever Mum looks 20 years younger". Do the vendors of these products really have a Mum who looks 20 years younger in each local area of the UK? I think not. You would never get away with this in any other medium (indeed, adverts get banned over relatively minor issues, such as this BT broadband one), but in cyberspace this seems accepted as fair game. And it shouldn't be, because it's been two years since the Advertising Standards Authority gained a remit over internet adverts.
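For what it's worth, the mechanics behind the "wherever Mum" trick are trivial: the ad network guesses your town from your IP address and pastes it into a template. A toy sketch in Python of my assumption of how this works - the lookup table here is invented, standing in for a real geo-IP database:

    # Hypothetical geo-IP table; a real network would query a proper database.
    GEOIP = {"203.0.113.1": "Middlesbrough", "203.0.113.2": "Carlisle"}

    def render_ad(visitor_ip):
        # Whatever town you appear to be in, "your" Mum lives there too.
        city = GEOIP.get(visitor_ip, "London")
        return city + " Mum looks 20 years younger!"

    print(render_ad("203.0.113.1"))  # Middlesbrough Mum looks 20 years younger!

No local Mum is ever involved; the same advert is generated for every town in the UK.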
This isn't a dig at the ASA for not doing their job. As far as I've seen, they're doing their best and they are very fair in their decisions. The problem is what they're up against. I can understand why small-time bloggers might subscribe to an ad feed without thinking about it, but some of the worst practices are on sites of big companies that should know better. Take Google's "Sponsored Links" for example. Yes, Google couldn't provide its service without these, but the background shading they use to distinguish sponsored results from real results is so faint it's easy to miss completely. This problem has gone on for years and Google has done nothing about it.
This is a huge problem in IT, especially software installation, because users search for a program, mistake the top sponsored link for the top real link, and end up installing something completely different. Or, worse still, AVG - an anti-virus vendor of all things - allows banner ads at the top of the download page using the same lettering and colours as the AVG page, tricking users into downloading a different program (SRO2012). (This has now changed and the banner is at the bottom of the page, but the fact AVG allowed this to happen in the first place is very disappointing.) Even reputable sites such as Lycos have allowed adverts to be used as scareware.
Then there's the trick of pretending it's not really an advert. Recently a TOWIE star was hauled up for trying to pass off promotional endorsements on Twitter as her own opinions. Great that the ASA showed some teeth here, but who else is doing this and hasn't been caught yet? There are suspicions that tobacco companies - hardly a shining example of ethics in advertising - are using supposedly user-uploaded videos on YouTube as their way of dodging the ban on tobacco advertising and showing how cool and anti-establishment it is to smoke.
With such rich players determined to ignore the rules and such high-profile players tolerating this behaviour, the ASA have a mammoth task ahead of them. It's not clear which way this will go. It might be that more powers will have to be considered in the future, but with more powers always comes more scope for abuse. It would be a lot easier if the internet-using public simply wised up to these practices. The more people who spot these tricks a mile off and ignore the ads, the less money there is to be made. Better still, if people stop buying these companies' other products, tell the companies why they're doing so, and complain to the websites hosting these ads, advertisers might think twice before pulling these stunts.
Action from the grassroots against vested interests is always a wildly optimistic idea, but, hey, this is a good time to believe in optimism.
[1] Unsurprisingly, some people aren’t too happy with this. AdBlock Plus sparked a minor anti-Firefox crusade (which in turn sparked a whole load of derision). However, one legitimate point made during this furore was whether this would put websites out of business. Personally, I think there's nothing to worry about. The people who go to the trouble of installing this extension are the least likely people to actually click on any of these adverts, let alone buy something. Suffice to say that AdBlock blocks adverts on blogs such as this one, and yet Google is still happy to sponsor Firefox. If Google - which gets almost all its revenue from ads - doesn't have a problem with this, that's saying something.
-
So what went wrong at Natwest?
A lot of questions need to be asked over RBS’s computer problems – but if we want to stop this happening again, we need to listen to the answers.
An easy answer. But not a useful one.
So there we have it. For anyone who questions the value of software testing, here is a prime example of what happens when you let a bug slip through. I know we’ve already moved on to another banking scandal, but in case you’ve forgotten: many Natwest customers failed to get paid owing to a botched system upgrade. This has led to all sorts of consequences, and the obvious question of how this could be allowed to happen.
Except that when people ask this question, I fear most of them have already decided on the answer, which is that RBS is a bank and therefore Big and Evil and responsible for everything bad in the world from rabies to Satan to Geordie Shore. That answer might make people feel better, but it does little to stop this happening again. In practice, what went wrong is likely to have little to do with the credit crunch or banking practices and a lot to do with the boring old fact that any bank – no matter how responsibly it borrows and lends – runs on a highly business-critical IT system where any fault can be disastrous.
An easy claim from a software tester would be that RBS, as Natwest’s owner, must have gone cheap on the testing. I suspect it won't be that simple. By its very nature, a banking IT system is going to be very complex – it has to be capable of handling thousands of transactions every second whilst keeping itself totally secure from hackers – so it would benefit from as much testing as possible. But, as any ISEB-qualified tester can tell you: exhaustive testing is impossible. There is always a balance between testing and finance, and testing has to be prioritised and targeted. This is taken for granted all the time, and it’s only when things go wrong that we ask why.
The fact remains, however, that something went seriously wrong. The Treasury Select Committee is already asking what happened, as has the FSA, so we should get more details soon. But how much we learn will depend on whether the right questions are asked. So here are my suggestions:
- Was the upgrade necessary? Chances are, it was. Security loopholes are uncovered all the time, and a security update for a banking system can’t wait. But if it was an update for the sake of updating, that would be a different matter.
- Were they using out-of-date software? I can’t comment on what banking software is and isn’t used, but I know of numerous systems that doggedly stick to Windows XP or Internet Explorer 6 in spite of being horribly error-prone in a modern IT environment. A business that becomes dependent on out-of-date components, and fails to bite the bullet and upgrade when it needs to, only has itself to blame when the testing can’t keep up with the bugs.
- Was enough time allowed for testing? As a rule of thumb, every day of development should be matched by at least one day of testing. A common mistake, when software uses commercial off-the-shelf products as back-end components, is to do little testing in the belief that the commercial product is bound to work fine. In my experience, that gamble usually backfires.
- Was everything tested that should have been tested? This might seem obvious, but it’s not unusual to concentrate on easy feature tests without paying much attention to more problematic areas such as performance or integration.
- Was the timescale realistic? I ask this only because a common response to a software project overrunning is to cut the testing time. That is a stupid thing to do, but if the budget and timescale have been set in stone the project manager might have had no other option.
- Did they carry on monitoring the update after it was implemented? Software that worked perfectly in the test environment can still fail in the live environment. Since it took them three days to identify the cause of the problem, they have some explaining to do here.
- Was the testing correctly prioritised by risk? To state the obvious, when an area of the software is known to be likely to break, or the consequences of a component going wrong would be severe, you need to concentrate testing on that area (and not spend your time doing endless repetitive tests of low-risk areas) – see the sketch after this list. What’s not so obvious is identifying what the high-risk areas are in the first place. And this brings me to a pertinent question.
- Did the people in charge of the testing properly understand the job? This is where RBS may have a case to answer. The Unite union has suggested that RBS outsourcing its IT work abroad was to blame. I don’t believe in assuming off-shored work is cheaper, more expensive, sloppier, better quality, faster, slower or any other silly generalisation. But when you suddenly outsource your IT work to another country, you lose most of your in-house expertise – quite possibly the people who knew what the risks were and how to avoid them. In the worst-case scenario, the work may have ended up with people whose idea of testing is telling you everything’s fine.
However, it might be that RBS has perfect answers for all of the above. That would still not guarantee that nothing can go wrong. As exhaustive testing is impossible, there is always a chance that an untested area thought to be low-risk goes disastrously wrong anyway, and there is no foolproof way of stopping this. So I have two final, very important questions:
- Did they have a fall-back plan for a fault making it into the live environment? No matter how good your test plan is, you always have to ask “What’s the worst that could happen?” The wrong answer is “But it definitely won’t happen.” The #1 mistake of the Titanic was not the design flaws that allowed the ship to sink, but the foolish assumption that, as the ship was unsinkable, there was no need to provide enough lifeboats. Did RBS do a Titanic and assume their tested upgrade couldn’t possibly go wrong? I doubt they would have been stupid enough to have no plan at all, but this leads me on to the other important question.
- If they had a contingency plan, was it credible? In far too many cases, contingency plans are made for reviewing, signing off and shelving but not actually implementing. When the sole purpose of a contingency plan is to allow you to say “Yes, we have a contingency plan,” … well, you can imagine the rest.
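As an aside on the risk-prioritisation question above: the standard approach is to score each area by likelihood of failure times impact of failure, and spend your testing budget from the top of the list down. A toy sketch in Python, with the areas and scores invented for illustration:

    # Rank test areas by risk = likelihood x impact (both scored 1-5).
    areas = [
        ("overnight batch payment run", 4, 5),
        ("account balance display",     2, 4),
        ("marketing preferences page",  2, 1),
    ]

    for name, likelihood, impact in sorted(
            areas, key=lambda a: a[1] * a[2], reverse=True):
        print("risk %2d: %s" % (likelihood * impact, name))

The hard part, as I said, isn't the arithmetic – it's knowing your system well enough to score the areas sensibly in the first place.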
But all of these questions rely on an attitude of “What went wrong?” first, and “Who went wrong?” a long way second. Unfortunately, there are already signs of the latter option being favoured. I’ve seen what happens when people blame each other for IT problems, and it’s not a pretty sight. Whatever story RBS offers, there are valuable lessons to be learned. I only hope someone’s interested in learning these lessons.
-
Is superfast broadband always a good thing?
As superfast broadband gets adopted by more and more people, we must not shut out those who cannot have it.
Are you listening to me, Network Rail? Look how quickly this image downloads on my site.
One of the perks of being a software tester is that you can take your work home with you and tell the managers of other public-facing software (especially websites) what they're doing wrong. I've recently been arguing with Network Rail over the redevelopment of Birmingham New Street station. No complaints about the redevelopment itself (anyone who's actually used this station will be able to tell you why); my problem is pages like this one. Can you spot what's wrong with it? Possibly not, if you've got a fast internet link. But if you're on a slow internet connection, it takes ages to download the pictures and chews up your bandwidth – about 5MB for three images. And to illustrate just how unnecessary this is, here is the full picture you have to download in order to view a small (304px × 172px) image.[1]
This is an example of lazy programming that suits the majority but excludes the minority. It is nothing new – it has been going on ever since the internet began. In the 90s there were Netscape Mail's HTML-only e-mails (absolutely and utterly vital so that you can write in multi-coloured Comic Sans font), instantly rendering them unreadable to people on text-based programs such as pine. Then came Internet Explorer's dominance and the web pages that didn't work in any other browsers – or worse, did work in other browsers but blocked them anyway because “it's designed for IE”. Meanwhile, we were plagued with Flash-only websites, keeping perfectly decent text content away from many users with sight disabilities. What all these have in common is that all of it was completely unnecessary – it wouldn't have required any more work to make these websites accessible to everyone, just a little bit of thought.
But whilst problems so far have affected individuals, the disparity in broadband speeds threatens to affect whole communities. It's not the fault of anyone in particular that internet speeds in many rural areas are so slow – sadly, the costs of laying hundreds of miles of cables versus the income gained from a handful of users in remote areas makes this an expensive problem to solve – but the government’s idea of digital infrastructure, to all-round applause, is to give cities that already have decent broadband speeds even faster speeds. My worry is that instead of making the internet faster, it will simply encourage websites to get more bloated and inefficient – still available to 90% of the UK, but increasingly inaccessible for the other 10% who face being treated like they don't matter.
And, unfortunately, there seems to be little appetite to stop a bloated internet. When a business suffers for having a below-average internet speed, it's usually considered the business's fault for not having a faster line, even if that's impractical. In my last job, people routinely e-mailed ludicrously large attachments to each other, eating up all the disk space, and yet this was never questioned – instead, staff were blamed for not clearing out their inboxes often enough. As the resolution of digital cameras increases, so has the bandwidth needed to download a few photos a friend e-mailed to you. A photo that only needs to be viewed on a screen, as opposed to printed, can be scaled down 90%, but the process for doing this is so complicated and laborious most people don't bother (see the postscript below for how simple the scaling itself could be). Outlook and Thunderbird could easily add a feature that offers to scale down images for you, but they haven't, and show no signs of doing so.
However, there are some signs of hope. Video-streaming sites such as Youtube generally keep their streaming bandwidth down to something sane. I suspect this is more down to Youtube wanting to control its own bandwidth costs than anyone else's, but the effect is the same. Libreoffice Impress has a pretty nifty device to reduce the size of presentation files, which I'm sure anyone who's been e-mailed a 45MB .ppt file will welcome. (Sadly there's no equivalent function in PowerPoint yet – hurry up Microsoft.) ITV Player seems to be smart enough to switch between low-quality and high-quality streaming video depending on your internet speed. If more people adopt the good practice used here, maybe we can all live in perfect harmony.
[1] And unfortunately, the response I got from Network Rail wasn't encouraging. Their justification is that some people want high-resolution images available, but anyone who creates websites can tell you this is not the way to do it – you should put a thumbnail image on the page and provide a link to the larger-scale image. Worse, Network Rail actually does this on other pages in the same site, so why they think they can't do it here is beyond me. I suppose it's unfair of me to be so scathing about a random customer services representative who, in all probability, doesn't normally deal with technical queries, but this is what happens when you don't provide a contact for technical queries. But that's a subject for another post.
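Postscript on the photo-scaling point: the scaling itself needn't be complicated at all. Here is a sketch using Python and the Pillow imaging library (filenames invented for illustration):

    from PIL import Image

    img = Image.open("holiday_snap.jpg")   # e.g. a 4000x3000 camera photo
    img.thumbnail((1024, 768))             # shrink in place, keeping aspect ratio
    img.save("holiday_snap_small.jpg", quality=85)  # a fraction of the size

Three lines. The hard part is that no mainstream mail client offers to do this for you at the point of attaching the photo.
-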
Time for a whitelist?
Here’s a new approach to a safe internet: instead of trying to filter out unsuitable content for children, how about an opt-in system?
A low-tech solution. But some of the high-tech solutions are worse.
For some reason, the news story that’s all the rage at the moment is how to stop children looking at internet porn. I’m not sure exactly what’s happened to bring this on, but I can vouch it’s a tricky one. Not so long ago we were looking into testing a website for, amongst other things, checking content was suitable for everyone to access. It would potentially involve moderating everything posted, including forums, applications and documents. And even if we could vet all of that, what’s to say a linked site will be suitable? And what about linked sites from linked sites? And linked sites from linked sites from linked sites? Not easy at all.
Now, I’ve always thought that the same rules should apply on the internet as apply everywhere else. For adults, the basic principle, quite rightly, is that you should have the choice to view what you want (bar a few accepted limits such as paedophilia, certain depictions of rape, incitement to violence and so on). For children, there are a few rules such as 12-, 15- and 18-rated films, but it’s broadly viewed as the job of a parent to decide what they should see, and that’s the way it should be. The internet, however, has made this job harder. Yes, in the old days there was lying about your age when seeing an X-rated film, or borrowing the mag your mate got off the top shelf, but it’s now possible to view this stuff without even leaving your room, so it must be taken very seriously.
The magic bullet frequently touted is family protection software, but its track record hasn’t always been impressive. Some of the earliest family filters were easily disabled using, of all things, CTRL-ALT-DEL, which doesn’t give much confidence about how seriously suppliers take this. There were also suspicions that certain programs were, as well as filtering out content unsuitable for children, also filtering out “incorrect” opinions on subjects such as abortion or homosexuality, or even relatively uncontentious material like information on eating disorders. These are old stories from many years ago, so maybe things have improved, but still the focus seems to be on installing software on the child’s computer. There are filtering services at the ISP end which are harder to circumvent (which some people use, either on their own or on top of filtering software), but they aren’t easy to set up, and I suspect this is being overlooked in favour of more lucrative products in shiny boxes at PC World.
Perhaps filtering software can work, but I’d like to propose two alternative solutions. The first solution, which will take some time to explain, is … a whitelist.
This is a solution I’m proposing for younger children – I doubt this would be workable for teenagers, so Mail and Express readers will have to wait for my second proposal – but my reasoning is simple: even with the best will in the world, it is very difficult to imagine a filter that catches everything. Perhaps it would be better to place the onus on web developers to keep their content suitable if they want it to be viewed by children. This is how film certification works – anyone who wants a U, PG, 12 or 15 certificate has to apply to the BBFC for the certificate rather than the BBFC chasing after films that don’t comply – so maybe something similar can work on the internet.
How would this work? Well, to start with, it’s got to be opt-in. It’s one thing blocking illegal content for adults and putting an opt-out filter on outright porn, but extending an opt-out filter to all web content equivalent to a BBFC 18 rating is open to too much abuse. But if it’s an opt-in filter, how do you get started? Few websites are going to bother applying for a whitelist entry if no-one’s subscribed to it, and few people are going to subscribe to a whitelist if no websites have opted in. Vicious circle.
So … how about we start with all UK primary schools subscribed to this filter? There’s obviously no need for pupils to access adult material in primary schools, and this would give a large enough user base to prompt websites who want to be on the whitelist to apply. This would then leave parents free to opt into the whitelist or not as they see fit. (It shouldn’t be too hard to apply different access rights to parents’ and children’s computers.) Should schools or parents wish to add extra sites they consider safe, they could add them to their own personal whitelists – a sketch of how this lookup might work follows below.
Next question: who decides what is and isn’t suitable? This is not a decision to take lightly. Even with an opt-in filter, it would be unacceptable to use a government-backed scheme as an excuse for political censorship. Luckily, we can take lessons from the BBFC here. Every decision they make to award or refuse a certificate is publicly available online, with detailed explanations as to why the decision was taken, open to scrutiny from the public; and I’m confident that if they ever started selectively censoring content on political grounds, they’d get rumbled quite quickly. This model could be used for an internet whitelist, with the added safeguard that the moment any family stops trusting the filter, they can opt out.
Now for the big complication: websites change. The BBFC have the advantage that everything they certify is a finished product. However, a website that has nothing objectionable today could have anything tomorrow. This is especially a problem for internet forums and sites that rely on user-uploaded content. So I suggest that an internet whitelist would need to be based in part on a commitment to self-policing, and acting promptly to remove unsuitable content. Or, for big sites such as Youtube (who are never going to make their entire site family-friendly just to get on to a whitelist), you could have the option of selectively whitelisting content flagged as family-friendly on the site, as long as the website can be trusted to enforce this.
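To make the mechanics concrete, here is a toy sketch in Python of the lookup a subscribed school’s or family’s filter might do – the domains, and indeed the scheme itself, are invented for illustration:

    from urllib.parse import urlparse

    # The national opt-in whitelist, plus any sites a school or
    # family has chosen to trust on top of it.
    NATIONAL_WHITELIST = {"example-school-resource.org.uk"}

    def is_allowed(url, local_whitelist=frozenset()):
        """Allow a URL only if its domain (or a parent domain) is whitelisted."""
        domain = urlparse(url).hostname or ""
        allowed = NATIONAL_WHITELIST | set(local_whitelist)
        return any(domain == d or domain.endswith("." + d) for d in allowed)

    print(is_allowed("http://www.example-school-resource.org.uk/maths"))  # True
    print(is_allowed("http://random-forum.example"))                      # False
    print(is_allowed("http://random-forum.example",
                     local_whitelist={"random-forum.example"}))           # True

The interesting problems are all in the administration – who maintains the national list, and how sites get on and off it – not in the technology.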
One big tripping point is Wikipedia. For all the criticisms they face, Wikipedia is an important educational resource, but the Wikimedia Foundation are adamant that Wikipedia is not to be censored. Moderators are good at removing objectionable material where it doesn’t belong, but they won’t skip over what happens in Debbie Does Dallas. In fact, adult material relevant to the article can appear where you least expect it (such as an innocent-looking article on the classic 1980s cartoon Henry’s Cat). There is, however, a Wikimedia-backed edition of Wikipedia for schools, which is an excellent idea in its own right. This edition is still in its infancy, but with a bit of support it could be everything that primary schools could wish for.
So that’s my idea for a whitelist. But I’m the first to agree that plans that look good on paper don’t always work in practice, especially the complicated ones. And above primary school age, I can’t see this solution being workable. Which brings me on to my second, much simpler proposal, which is … parents need to take responsibility.
I’m not a parent, so I’m not going to dictate to parents what’s best for them, but the most convincing solution I’ve read to date is to sit with your children whilst they’re on the internet, with simple rules such as “Don’t talk to strangers” extended to “Don’t talk to strangers online”. But whilst I’m sure some parents are being quite sensible, we also have parents who assist children in bypassing the age block on Facebook, or refuse to let children watch 18-rated films but allow them to play the most violent of 18-rated computer games. Something is seriously falling down here.
No matter how good parental controls get, no matter how much freedom parents have to make their own decisions, it is a mistake to view any of this as a substitute for parental responsibility. Just like the pre-internet days, a child or teenager who is determined to get round parental controls will find a way somehow. This is a blog on IT, so that’s enough of a digression into parenting, but from an IT perspective the message is simple: this is both a technology problem and a social problem. Technological solutions can only help with the technological problems – how you solve the social problem is up to you.
UPDATE 11/05/2012: Since I posted this last week, the government have announced plans to force ISPs to introduce an opt-out filter for internet pornography (meaning that everyone will be subjected to this filter unless they expressly request otherwise). Some will doubtless argue this is a political plan to win votes on the back of bad local election results. But I'm not really interested in the politics behind this. All I'm interested in is whether this can work. Except that it might not be possible to separate the two. Confused? Let me explain.
Sticking to my principle that the same law should apply on the internet as applies everywhere else, I can certainly see the case for applying this to stuff that would otherwise be certified R18, or a similar level. (I don't want to repeat what makes a video an R18 – if you really want to know you can read it here.) It makes no sense that a video that you would only be legally allowed to buy from a specialised adult store is also legally available to anyone who can switch off Google Safesearch. I have some sympathy with the argument that it should be up to parents to opt into this, but the problem is one of apathy. Many parents don't even consider whether they should use a family filter, let alone make a decision, and I'd much rather the default option for children was that this isn't available. Anyone who objects is welcome to opt out of the filter.
However, we must be realistic as to what this filter can achieve. Forcing ISPs to filter out R18-rated material is one thing, but go any lower than that and you start running into all sorts of problems. Could you create workable automated filters for 12-, 15- or 18-rated material? My guess is no. Even if you can, is it possible to do it in a way that doesn't impede debate of adult issues? Sex education? Drug debates? Gun control? Rape law? This is treading on very dangerous ground. If we're not careful, we could see a repeat of the silly 1950s censorship rules, which were circumvented using daft loopholes.
My worry is that if the government does not properly manage expectations, this could send us down the wrong path. Suppose the government introduces the filter, and instead of it being welcomed, parents complain that their teenage children move on to other sites that got round the government-approved definition of internet pornography. So the government create new tighter rules. And teenagers move on to other sites. And the public demand tighter rules still. And so it goes on. And all this time, more and more of the internet gets caught up in increasingly indiscriminate censorship.
Perhaps I'm being paranoid, but I can see this happening if this gets used to chase votes. Because whilst compulsory internet filters might be a vote-winner, and tightening up the rules further might win even more votes, telling the public that it's going too far – and consequently implying that middle-class parents should stop treating the issue as someone else's responsibility – could well be a vote-loser. There's no need to come anywhere near this nightmare scenario if it's handled sensibly, but sometimes politics gets in the way of sense. So I'll wait and see how this pans out before giving my verdict.
-
Newbies are your friends
All programmers and testers share one weakness: they don't know what it's like to not be familiar with computers.
Easy to laugh - but in IT the equivalents to push and pull signs aren't so obvious. (Cartoon from The Far Side, in case you live on another planet.)
I have a confession: for a long time, I couldn’t get the 3G internet to work on my smartphone. When I bought it six months ago, I could make calls and connect to wi-fi, but the mobile broadband stubbornly refused to work. I read the manual from beginning to end, trawled the internet, fiddled with every setting and swore at it, before I finally realised mobile broadband wasn’t switched on. After all the times I’ve been showing off, making things look easy that other people struggle with, I can consider this a taste of my own medicine.
But, embarrassment aside, that was a good lesson in what it’s like to not be a techie. As a late entrant into the smartphone market, I was getting to grips with things that are second nature to most users. To someone who is familiar with Android, checking 3G internet is activated is such an obvious thing it’s not even worth mentioning,[1] any more than a locksmith would consider it worth asking if you were pushing a door with a “PULL” sign. But little things like this add up and can stop people using new products completely. This is where usability testing comes in.
There’s a trap that both developers and testers frequently fall into, which is assuming your users know the things you take for granted. I’m currently trying to get to grips with a load testing tool and have spent days fixing one issue after another: all presumably straightforward to people who use this regularly, but a nightmare for me. Open source software is another regular offender. The mainstream products like Libreoffice and Firefox aren’t too bad, but the documentation for less popular programs is often incomplete or missing completely – in some cases, you need as much background knowledge as the programmers to use it. The programmers can of course point out that they’re doing this for free and don’t have time to write user-friendly manuals too, but that’s little consolation for anyone trying to use it.
Even the most mainstream products have problems. Microsoft Office, supposedly the gold standard of ease of use, has plenty of features like automatic bullet points or numbering geared at user-friendliness. But the side-effects of these innocent-looking functions are confusing formatting changes that are difficult to change back – I have lost count of the number of times I’ve had to help rescue documents mangled by auto-formatting. That’s not intended as a dig at Microsoft, just a way of pointing out how difficult usability is, even for the companies with the deepest pockets.
There are endless ways to be caught out. Are you sure copy-pasting is easy, or were you using CTRL-C and CTRL-V? Because that might be common knowledge to you, but it isn’t to other people. And if you thought of that pitfall, there’s another, and another, and another, and it’s hard to anticipate all of them. There are usability guidelines and usability training out there, which is good practice, but this still suffers from the weakness of tech-savvy people telling other tech-savvy people how to devise things for everyone else. How can you, as a programmer or tester, really put yourself in the shoes of a regular user with no background or training in IT?
The answer, I suggest, is: you don’t. If you want to do usability testing properly, you need to involve people who don’t know much about computers. I know I’ve complained endlessly about people who don’t understand computers imposing decisions on people who do, but it can be just as dangerous to do it the other way round. If your customers or workforce don’t know how to use a new system, it’s no use blaming them for not understanding the nice easy interface you designed for them.
I’m not suggesting it’s as simple as bringing along some non-techies and everything will be fine. It’s a tough call on when the best time is to do usability testing. Do it too early, and non-technical users will get bogged down in the inevitable beta-edition bugs (or do it really early, when the system only exists on paper, and they won’t know what to expect). Leave it until everything else is ready to go, and usability testing could be too late: when the system has been programmed, tested and stabilised, do you really want to change half a dozen features found to be user-unfriendly? Usability testing at any stage is useless if it’s treated as a rubber-stamping exercise. This applies to any kind of testing, but if a project manager wants to believe the new system is easy and intuitive, there is always a way of showing usability testing confirms this, irrespective of what people really thought.
But combined with other bits of good practice, involving people who don’t know about computers is a valuable tool.
In my last post I suggested that it’s better to release a new system in several stages rather than as a single “big bang” release. This is partly to avoid feature creep-crippled projects, but it also means that a usability issue found in one release can probably be fixed in the next release. When I think I’ve found a usability issue, I make a habit of asking someone in the office who isn’t a tester whether he knows how to work the system, rather than second-guess this myself. There are of course some absolute clangers which any software tester will spot a mile off (like blue underlined text in a web page that isn’t a link). But if you’re serious about user-friendliness, you need to take your users seriously.
[1] In my defence, my Samsung phone has two settings for 3G broadband in completely different places, and both have to be switched on in order for this to work. All I can think is that the less obvious switch was set to off in the shop when I was trying to check if the wi-fi hotspot worked. But my point remains unchanged: if I couldn’t work out what the problem was, I can’t see the average customer faring any better.
-
The dreaded feature creep
Even in the best managed projects, feature creep is difficult to avoid. Here are my tips for how to reduce the risk.
Apologies for another quantum mechanics in-joke. But this explains a lot.
Right, I’ve been told off for starting too many blog entries with “I’m afraid this is going to be another moan”, so this time I’m going to try to be a bit more positive. My last post had a go at web designers who often over-charge for websites, and at the people who actually pay them that much. It contained an observation that this can apply to IT procurement more widely, with the example of the notorious contracts for £3,500 per computer in some government departments. Having thought about it, that was a harsh generalisation.
Where government IT projects overrun costs, it’s rarely because a company charged a fortune upfront. It’s usually because the initial costs are cheap but the contractor charges extra for things like including additional features, or installing new hardware. In some cases this gets out of control, like ridiculous call-out fees for something as simple as changing a mouse, and that is a key driver of the argument that IT companies rip off Whitehall. But the IT companies do have a good counter-argument. They often say that if government departments ask them to do a simple task, and then keep changing their mind in mid-project, it really does cost that much to keep making all the changes. I have come across both scenarios in my time.
But if we forget these two extremes and assume both client and contractor are genuinely motivated to work together and keep costs down, the fact remains that controlling costs is an absolute bugger. It is very difficult to get every detail of a working IT system right when the system only exists in paper plans. The mistake that must be avoided at all costs is “feature creep”, where more and more changes are requested to software in development, until costs rocket, the original design is no longer fit for purpose, and if you’re the NHS – well, we know what happened there. But there’s nothing new about feature creep, so why does this mistake keep being made?
I’m a software tester and not a project manager, so I won’t claim I have the magic answer for how to run a project cheaply. But based on how I’ve seen projects go from my tester’s point of view, here are my tips for how I think feature creep can be avoided:
- Don’t embark on a “big bang” project unless you have to. Hugely ambitious and revolutionary IT projects might make the project manager look bold and decisive, but this is how the most embarrassing and expensive failures begin. If you can, deliver a large project in manageable stages. That way, you can observe how users take to the new system in practice and adjust your plans accordingly. It also reduces the worst-case scenario from a whole project getting binned to losing only a single stage.
- Get the software testers involved as soon as possible. All right, I’m a software tester so I’m bound to say this, but it’s good practice. There are plenty of good reasons unrelated to feature creep, but one added advantage is that it’s the job of software testers to look for flaws from day one. A tester asking “Have you thought about this problem?” or “Wouldn’t it be better if we did that?” at the design stage could save an expensive mid-project feature change later.
- Get the users involved as soon as possible. This isn’t always practical if your intended userbase is the general public, but if it’s for your own workforce you should listen to what they think. Some systems, I swear, are designed, programmed and tested without consulting anyone who’s actually going to use them. Testers and programmers can sometimes make an educated guess at how a system might be used, but they won’t know the subtle intricacies of how things work in practice on the shop floor. And again, if a prospective user spots a flaw in the business process at the design stage, you’ll be grateful.
- Remember there’s always the next release. Once the programming starts, it is inevitable that you’ll discover something isn’t designed as well as it could be, or a feature wasn’t included that should have been. That’s not the end of the world, but it will be if you commit a complicated patch every time this happens. Ask yourself if this change can wait until the next release when it can be planned and delivered properly.
- Include people in project management who understand both the business and the software. I keep saying this, but feature creep is one of the strongest reasons why. It’s a lot easier to weigh up the pros and cons of a last-minute feature if you can balance the business benefits against the technical issues yourself – leaving the technical arguments to another department or the contractors is not reliable. Understanding both the business and technical aspects also makes it easier to find a practical solution when all other options are being dismissed as technically impossible or expensive.
- Avoid a departmental free-for-all. Assume every department in your company considers itself the most important one. If Finance requests feature A because Anti-Fraud requested feature B, leading to Procurement requesting feature C and so on, you are in trouble. Someone has to make a decision on which features are feasible and which ones aren’t, which features can’t wait and which ones can.
- Allow time in your project for the unexpected. Delays and overspends sometimes can’t be avoided. Sometimes a serious bug can only be fixed by a completely new feature. If a decent system gets delivered in the end, the project should not be considered a failure just because it was late. But it will be if you’ve tied your hands to a deadline, perhaps by advertising a launch date for an exciting new product you can’t deliver, or by promising a Cabinet Minister your high-profile project will definitely definitely definitely be ready to go on a certain date. All too often, the solution found is to shorten or cut the testing phase – after all, it looks like it’s working, doesn’t it? – and it’s the users who discover the real price of ticking the “on time” box.
Unfortunately, the standard reaction I’ve seen to botched feature creep-ridden projects isn’t to learn any of these lessons – it’s to blame the contractors. And that usually means making the same mistakes all over again with new contractors. This is why it’s dangerous to dwell too much on “rip-off” IT projects as if it’s always someone else’s fault. Regardless of who’s in the right and who’s in the wrong, the fact remains that clients and contractors have to work together. It is only with a true spirit of trust and mutual understanding that problems such as feature creep can finally be put to bed.
-
Are web designers the new car mechanics?
Websites are easier to make than most people think. Bear this in mind when a website designer wants a hefty payment.
A joke, obviously. But does this sales pitch work in IT?
Advance warning: this post is another moan. Up to now, I’ve had two pet hates: people who sign up to wildly optimistic cheap/convenient IT projects that turn out to be unreliable and expensive; and at the other end, people who block trivially easy IT projects because of silly overblown cost estimates. I’d forgotten the third type. But we’ll get on to that later.
This story begins with my website – you know, the one in my shameless plug masquerading as a piece on Search Engine Optimisation. Well, my web traffic is still quite abysmal, in spite of pushing up the Google rankings. But thanks to the few people who’ve looked at the site, I’m quite likely to be setting up a website for an arts organisation, which I’m happy to do as a freebie; and if all goes well I may get some paid work off the back of that. And in this scenario, the obvious question is: how much should I ask to be paid?
The thing is, there’s nothing special about my web design knowledge. What I created for myself was technically very basic (I was using a free web template and Kompozer, if anyone’s wondering). I’d rate my skills above those of a 13-year-old who has discovered FrontPage – I do at least understand the importance of Cascading Style Sheets, W3C compliance and not doing fancy animated backgrounds – but ask me to produce a site that handles user-uploaded content, streaming video or credit card payments and I wouldn’t have a clue. And yet paltry offerings to the interweb like mine seem to be regarded as the height of technical genius.
Oh, and another reason not to pick on small companies: if you’re serious about over-charging, why stop at £150? How about $18,000,000? Yes, that’s right: eighteen million US dollars. Because that’s what luxury hotel chain Four Seasons paid for theirs. Some websites might be expensive to make – I’m testing a feature-rich website at the moment and I know first-hand how much work can be involved – but $18 million for this one? A secure banking site might cost that much, but this one has a hotel booking facility, smartphone compatibility, and some pretty panoramic pictures of their expensive rooms and beautiful locations: all standard features seen in websites made at a fraction of the cost. They’ve not even done that good a job of it – it’s been criticised for shutting itself off from search engines, poor accessibility for disabled users, and sloppy user-friendliness, amongst other things. One would have expected a project that expensive to dedicate at least a few million to proper testing to deal with those sorts of problems. I can’t help thinking someone is going round with $17,990,000 in his back pocket.
In a way, website designers can be likened to car mechanics. Just like the unscrupulous car mechanic who makes wildly inflated estimates for easy repairs, it is far too easy for website designers to say the IT sales-speak equivalent of: “Right, let me see … that’ll be HTML, CSS, web server rental, domain name, SEO, setting up an FTP server … hmm, you’ll want a contact form so that’s PHP and SMTP as well … oh dear, we’re talking about a lot of work here, ‘sgonnacostya”. The difference is that whilst most people know better than to hand over money to a car mechanic until they’re satisfied they can trust him, the same is not happening for IT products. From small clubs and societies to the biggest boardroom, people sign cheques first and ask questions later.
I’ve said it before, and I’ll say it again: people – big organisations in particular – making the decisions on IT projects have to understand what they are hiring a contractor to do. You cannot rely on techno-waffle from sales representatives; you need people independent of the contractors who can tell you whether it’s a bargain or a rip-off. Claiming IT consultants are too expensive is no excuse – in most cases, you can get what you need by identifying people in your organisation who understand computers and listening to what they think. I cannot imagine anyone would have paid a motor chain $18 thousand, let alone $18 million, for a contract repairing company cars without at least getting an opinion from someone who knows about motor repairs.
So there you are, my new pet hate. Joining the people who cook up silly overblown expenses as an excuse not to do IT projects are the people who cook up silly overblown expenses and then actually pay them. It’s not just websites; it wasn’t that long ago that the House of Commons Public Accounts Committee highlighted government departments spending £3,500 per computer. Many schools are eager to equip every classroom with iPads when cheap netbooks would do the job equally well. And yes, software testing companies are not immune – I’ve read my fair share of sales pitches for test automation tools that I can tell are overpriced and not that useful, but someone must be buying them if they’re in business. The Government’s latest initiative to keep costs down is the GCloud programme – it’s a good idea in principle; whether it works in practice remains to be seen.
Private companies too have the means to find out for themselves when they’re being overcharged. But individuals aren’t so lucky. Many IT companies routinely promote unnecessarily expensive products in the domestic market, such as computer stores bundling expensive security suites into PC sales when a free package off the internet would suffice. Some laptop vendors promote special “school” laptops at twice the price of lower-spec machines, when most school children have no use for the higher specs. And not everybody has a tech-savvy friend to warn them when something is a waste of money. But you don’t need a PhD in computer science to understand that “expensive” does not necessarily mean “better”, and a little more attention to that principle would go a long way.
-
Give penguins a chance
Would switching to open source software save public money? I don’t know, but we should at least try to find out.
The Windows logo versus the Linux mascot. A little-known but very bloody feud.
I know software testing is a very absorbing activity, but in between bouts of testing you might have noticed there’s a bit of a financial crisis going on. As tax rises, benefit cuts and axing public services don’t go down that well with the public, the government is keen to find less painful ways of saving money. This, in part, was the idea behind the Spending Challenge letters that went out to all public sector workers shortly after the 2010 election asking for ideas to save money. The ideas ranged from the pragmatic to the ridiculous, but one suggestion that caught my eye was to swap proprietary software for free open-source alternatives. This is not as unthinkable as you might expect; the Lib Dem manifesto said they’d look into it, and George Osborne himself is said to be interested.
I’ll be open and upfront here: I use Linux, LibreOffice (effectively the successor to OpenOffice) and other free open-source products wherever possible. It’s partly that I don’t want to pay for software when free stuff does the job, and partly because I have problems with the way Microsoft uses its dominant position to make life difficult for people who use competitors’ products. But I don’t believe in imposing my views on other people, and I’ll help out with anyone’s IT problems whatever software they’re using. (Indeed, a software tester who doesn’t is a short-lived one.) I wouldn’t push savings too hard with a charity (Microsoft usually heavily discounts software for them). I’d also be hesitant to encourage a small business to switch to open source when everyone they work with expects them to do all things Microsoft. The public sector does not have that problem – public bodies mostly communicate with each other, and they’re big and ugly enough to insist anyone else works with their software if they wish – but any move away from Microsoft or any other proprietary software must save the public money, and not just be done to prove a point.
But Microsoft does make one valid point: there’s more to the cost of IT in business than the licence. The term Microsoft keeps banging on about is “Total Cost of Ownership”, and much as I hate buzzwords, it has to be taken seriously. There are labour costs associated with installation, maintenance and fixing problems before they disrupt your business, plus the hardware needed to support your system. Microsoft also claims that if software’s free, there’s no-one on the end of a phone if things go wrong. That’s not really true any more – the major Linux distributors sell Enterprise packages that include this support – but the fact remains it costs money. The bottom line is that Microsoft claims their software works out cheaper when you factor in everything. I find some of their anti-Linux claims dubious, but that’s just their marketing department doing their job, and I wouldn’t be surprised if Canonical’s does the same.
Anyway, here is my idea. It’s a suggestion which the Government is welcome to take up without any need for acknowledgements or royalties. It’s a tried and tested method which works in every other area of government business when different companies claim to provide the same goods or services for less money. Without further ado, the solution is …
[Drum roll]
… put it out to tender.
At the moment, public sector IT contracts are generally a choice between company A providing Windows and MS Office, company B providing Windows and MS Office, and company C providing Windows and MS Office. That’s not good enough. I can’t think of a single example other than this where it’s considered acceptable to choose one company without considering any competitors. It doesn’t have to be a choice of all Microsoft or no Microsoft; it’s perfectly possible to run LibreOffice on Windows, Microsoft Office on Linux, or mix and match pretty much any combination of open source and proprietary components. Claiming Microsoft is the only option doesn’t wash any more – government bodies elsewhere in the world have made the switch and managed. Claiming it’s what everyone uses is a poor excuse for any government that believes in free and fair competition. If 90% of motorists drove Skodas, would anyone argue the Government should help make it 100%?
What should we consider when awarding the contract? Anything we think is important, just as long as all sides get to make their case. Does Microsoft believe their software is cheaper to maintain in the workplace? Are their servers easier to maintain? No problem – let Microsoft make their case, and let the open source vendors reply. Is there a problem with a Microsoft lock-in? Their licensing arrangements? Let the open source vendors say why there is, and let Microsoft say why there isn’t. Does Microsoft or Linux offer better security? Which is faster? Which is more reliable? For all of these questions, we should be asking the vendors to make their case themselves, rather than picking one and dismissing the others out of hand.
And what if the winner is Microsoft, Microsoft and more Microsoft? It will still be worth the paperwork. Experience shows competition is good for Microsoft products. Microsoft moved on from the horribly outdated IE6 because of competition from Firefox. When the Xbox’s standing was threatened by the revolutionary Nintendo Wii controller, they responded with the equally innovative Kinect.
There have been advances in Windows and Office in the last two decades, but two things in particular have never really been addressed: why it’s necessary to pay hundreds of pounds for software when you only use 10% of the features, and why the processing power needed to run it balloons as quickly as the processing power of computers. With real competition in the office market, something might be done about this.
Will this happen? On the one hand, if the Cabinet Office considers upgrading from IE6 to be too difficult/complicated/expensive, there isn’t much hope. On the other hand, a consultation was launched last year in this area, and although it seems to confuse open source with open standards a bit, there are signs that the Government is starting to recognise the need for proprietary and open source software to compete on fair terms. The Government is in a far better position than anyone else to bring competition back to IT, and if they stick to this course, it could be rewarding for everyone.
-
SOPA is not the answer to piracy
Ordinary people’s livelihoods need protecting from copyright theft somehow – but SOPA is too high a price to pay.
Apologies to software testing blog entry fans, but this week it’s another generic IT-related post. This can’t wait because, as you may have noticed, there was a blackout of several websites last week, most prominently Wikipedia. This was in protest over the Stop Online Piracy Act (SOPA) going through the US House of Representatives, and although this is only a US law, like software patents it stands to affect the UK. The participation of Wikipedia has suddenly brought this issue into the spotlight, with pro-piracy activists, pro-control record companies and all sorts of people in between giving their points of view.
Let me be absolutely clear: I have no time for pirates, especially not those who run websites like The Pirate Bay. They are not noble crusaders selflessly standing up for internet freedom – they are big businesses who make a packet from advertising and subscriptions without the tedium of sharing the proceeds with anyone who made the stuff in the first place. Yes, the music industry has survived home taping, CD copying and bootleg market stalls, but file-sharing makes the practice much easier, so the issue must be taken seriously. I couldn’t care less if Jay-Z or the chairman of Sony-BMG can’t afford an extra Mercedes, but they aren’t the real victims. And I’m not talking about the people who work in the music industry (although this is a valid point the record companies make), but the small-time artists struggling to make a living.
I am fortunate. My own small-time artistry is play writing and directing. I have willingly put a lot of time into theatre for nothing, but I would not have been able to use various theatres for free without the money they get from ticket sales. I have little to fear from online piracy because you can’t copy a theatre visit online, but musicians, authors and computer programmers aren’t so lucky. One of the favourite pro-piracy arguments is that piracy benefits small-time musicians by promoting their work, but reality does not back this up. In Sweden, which has the strongest culture of piracy, any musicians who complain about loss of earnings are vilified for not sharing the Pirate Party’s views of what’s best for them. An obvious counter-point is that musicians tend to promote themselves through samples on MySpace or YouTube rather than an online free-for-all, but this too falls on deaf ears. My view is that all these arguments about supporting music aren’t reasons for piracy; they are simply excuses.
But the stance of the big record companies does small-time artists no favours. Of course they have to protect their sources of income, but the arguments they use are blatantly geared towards maximising profits first and protecting creativity a long way second. When Prince chose to release an album for free – surely no-one can object to a millionaire pop star giving something away at his own expense? – the record companies were outraged. Half the time, anti-piracy technology seems to have little to do with anti-piracy and plenty to do with restricting how you may use your own products, from unskippable adverts on DVDs through to some highly suspect restrictions on Blu-ray. Until it backed down, the RIAA resorted to mass lawsuits against people who may or may not have illegally uploaded material, based on questionable evidence and scary lawyers.
Copyright laws, like most laws, work best when people have confidence in them, and so far the record companies are failing miserably.
This bludgeoning approach is a large part of the problem with SOPA – although, to be fair, it’s largely the big-time pirates’ fault it was considered in the first place. Pirates evade the law either by putting their operations out of its reach, locating their servers in Belize, or by claiming that their site’s Not For Use By Copyright Infringers (Honest). The latter category is the big problem. Countless sites rely on content uploaded or shared by users, from Limewire and old-style Napster to YouTube to Wikipedia, plus Facebook, Twitter and pretty much any site that allows users to post comments, such as this blog. What they all have in common is that there’s no foolproof way of ensuring uploaded material isn’t someone else’s work. Beyond that the similarity ends: the Newzbins of the world turn a blind eye, whilst sites such as Wikipedia diligently police themselves. The question is: how, in the eyes of the law, do you tell one from the other?
SOPA’s answer is, at best, vague. And vague laws are dangerous, because they place power in the hands of those with the most expensive legal teams. We’ve already seen US software patent laws used almost exclusively by big companies to keep small companies out of the market and extract money from big competitors, and for all we know SOPA could go the same way. Could a company which cares little about piracy but disapproves of Wikipedia try to put them out of business? Could they claim the upload mechanism “might” be used for piracy? It might seem a ridiculous scenario, but there’s little to assure us it couldn’t happen. It’s little wonder sites like Wikipedia are up in arms about this, and yet the big record companies still see their opposition as irresponsible pro-piracy posturing.
There are plenty of other possible solutions. I’d take a good look at Wikipedia founder Jimmy Wales’s idea of going after the money rather than the uploaders. I’ll bet that if you take money out of the equation, the people running sites like The Pirate Bay will suddenly forget their ideological commitment to “sharing”. Websites can work with the copyright holders; almost all music videos streamed for free on YouTube are now done with the copyright holders’ blessing, either for a share of advertising revenue or just promotion of the song. Existing laws are getting quite good at telling the difference between bona fide content sharing sites and piracy sites masquerading as legitimate ones. All of these possibilities could and should be considered before resorting to handing poorly-specified powers to unspecified individuals.
However much idealists want to believe otherwise, the music and film industries are not sustainable in a world where payment is voluntary for everyone, but this is what we will get if the big record labels carry on behaving like they own the internet. At the time of writing, SOPA’s passage through Congress has been suspended – whether this really means the end of SOPA as we know it is unclear. But I hope this will be used as an opportunity to go back to the drawing board and think about what really matters. It may take many more attempts to get the balance right, but if we stick with it, it will be worth it in the end.
-
Don’t be afraid to upgrade
Upgrading software in the workplace requires caution – but some companies make this far more complicated than it needs to be.
No, you’re not having a strange dream: Microsoft really is celebrating the demise of a flagship product. Continuing the tradition of celebrating milestones in web browser development with cakes, Microsoft’s latest cake marks the “death” of Internet Explorer 6 – or, more accurately, the decline in US IE6 usage to 1%. Microsoft have made a huge effort to get people off Internet Explorer 6 (obviously, they’d rather you went to Internet Explorer 7, 8 or 9 than Firefox, Chrome or Safari, but an effort nonetheless) through hasty development, advertising campaigns, and now even silent updates to upgrade remaining computers. And with Microsoft themselves admitting IE6 has had its day, and even the die-hard open source fans accepting that IE7 onwards is a big improvement, you’d think everyone would be happy.
If, however, you’re reading this blog from a UK government building, you may think you’re accessing news from a parallel universe. The UK public sector is inexplicably at odds with the rest of the world. IE6, like most early browsers, has a sluggish JavaScript engine that runs at a snail’s pace on modern script-heavy pages. Most public web pages have now dropped support for IE6. And yet when the China hacking scandal exposed hugely embarrassing security flaws in IE6, and the French and German governments warned everyone off IE6 (and, for a while, later versions), the Cabinet Office insisted there was nothing to worry about. To be fair, web browser security isn’t the be-all-and-end-all for government buildings – their strongest defence will always be the safeguards within the Government Secure Intranet – but the web browser is the last line of defence in a compromised network, and it’s reckless to rely on a browser written before widespread broadband adoption and the security threats it brought along.
The Cabinet Office does, however, make a reasonable point. Upgrading a system in the workplace is not just a simple matter of waiting for Microsoft / Apple / your Linux vendor to issue an update and clicking “Yes, Upgrade”. The effects of the same upgrade can vary from one computer to the next. Many Mac users were caught out last year when the latest OS X upgrade rendered their pre-Intel software unusable. This is not normally a big issue for most domestic users – the worst that can happen is a few computer-free days until someone can put your old software back – but in a business, even a few hours without working IT can cost thousands of pounds. Businesses also have to consider whether the latest upgrade exposes them to new security threats.
The UK Civil Service, however, takes this to the extreme by refusing any upgrade without a thorough acceptance testing process – meaning in practice that almost everything is ruled out on cost grounds. That is not how you are meant to approach software testing. Instead, you should prioritise your testing based on risk, and the risk of upgrading IE6 after versions 7, 8 and 9 have been used by the public for years without problems is minimal (as is the risk of switching to Firefox or Chrome). You certainly don’t need the extensive testing required for software specially written for your own company. (And okay, if you’re the Civil Service, you also need to think very carefully about the security implications of upgrading – but doing nothing exposes you to the security implications of not upgrading.)
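To make “prioritise by risk” concrete, here’s a minimal sketch of the sort of back-of-envelope sum I mean. The changes, numbers and weightings below are invented for illustration – they aren’t from any real project:

```python
# Illustrative only: rank candidate changes by a crude risk score
# (likelihood of something breaking x impact if it does), then spend
# the most testing effort at the top of the list. All figures invented.

changes = [
    # (change, likelihood 1-5, impact 1-5)
    ("Upgrade IE6 to a browser the public has used for years", 1, 3),
    ("Apply a routine vendor security patch", 2, 4),
    ("Deploy bespoke case-management software written for us", 4, 5),
]

for name, likelihood, impact in sorted(changes, key=lambda c: -(c[1] * c[2])):
    print(f"risk {likelihood * impact:2d}: {name}")
```

On numbers like these, the bespoke system earns the thorough acceptance testing; the long-proven browser upgrade doesn’t.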
There is also a strange obsession that any change to IT entails expensive training costs. This is sometimes true – I, for instance, would be hesitant to drop an Ubuntu-based workplace straight into the controversial Unity desktop (Ubuntu only got away with it because their user base tends to be tech-savvy) – but most of the time this mentality assumes workers can’t cope with even the simplest intuitive change. I’ve said before that public knowledge of IT could and should be better, but that doesn’t mean ordinary office workers are all IT-illiterate idiots. The equally controversial ribbon that came with Microsoft Office 2007 was a big change from earlier versions, but you’ll struggle to find a workplace that rushed into Office 2007 without training and found its workers couldn’t cope.
Then there’s the problem of workplaces locking themselves into outdated software – and this is a particular problem with IE6. Many workplace applications were written specifically to run through Internet Explorer 6, making an upgrade impossible without a fundamental rewrite of all these applications.[1] This was an easy mistake to make in the early noughties, when IE6 looked set to be Grand High Lord of the Internet forever, but one of the commonest complaints I’ve heard from software developers is that even when IE6 was on the decline and they warned customers of the dangers of locking themselves in further, companies were still insisting that applications be written to run through IE6 because that’s what they’d always used.
Finally, I can’t help thinking that there’s a mindset that slow and unreliable systems are something normal. When I was last in a government building, I was regularly screaming and cursing that something as simple as checking the price of a train ticket took me five times as long as on my (relatively low-spec) computer at home, but this didn’t seem to be considered a problem. When managers are downplaying the negative impact of out-of-date software in their workplace this much, the chance of doing something about it slips even further out of reach.
In a way, software testing has a lot in common with health and safety. Good health and safety is all about identifying the risks and concentrating your efforts accordingly, so that you can carry on doing what you’re doing safely (so frequent accidents such as slips, trips and falls, and serious risks such as road accidents, get more attention than the chance of getting a papercut at your desk). Lazy health and safety – the sort which gives the business a bad name – involves overblown risk assessments of the most trivial dangers, to the point where the only practical option remaining is not to do it at all, which is why you get schools cancelling school trips for daft reasons. The same principle applies to software testing: good testing helps you achieve what you want safely; bad testing stops you doing it completely. And like silly health and safety decisions preventing children playing outside, the risks of not upgrading can often be far greater than the paranoid risks used as justification for not doing it.
It’s perhaps unfair to blame project managers for being risk-averse. There is no shortage of botched IT projects out there, so it’s understandable why people would choose to play it safe and stick with what they know, however inefficient it may be. But the paperwork around upgrading is far more complicated than it needs to be, and if we’d focused more on what really matters and less on hypothetical scenarios that don’t, we could have enjoyed Microsoft’s cake much sooner.
[1] Having said that, you can install a modern version of Firefox/Chrome/Opera/Safari alongside IE6, so that you can access the internet on a modern browser whilst still having use of your IE6-specific applications. But given the lack of adoption of this easy solution, I can only assume that companies who mindlessly run everything through IE6 are the same people who obsess over overblown acceptance testing and training costs whenever anyone considers using a new product.
-
Security should be everyone’s responsibility
There are two main enemies to security: convenience, and inconvenience. Better public education of the risks would make things safer.
"But I only wanted to check my Facebook."
(Photo: 48states, Wikipedia)
Now, in case you lost track of the plot somewhere around episode 4,605 of the Leveson Inquiry, one of the latest developments is a claim that hacking extended to e-mails. At the moment, unlike phone hacking, this has not yet been proven or admitted to. But, quite frankly, it would come as no surprise if it turns out to be true. Like voicemail, the security surrounding personal e-mails has been notoriously lax, and practically an open invitation for hackers to pry into private matters.
In the olden days of workplace and university e-mails, your e-mails would typically be managed on a local server, which was great until you went home and had no e-mail access. This changed with the coming of Hotmail, Mailcity and many other web-based e-mail services that allowed anyone to read their e-mails anywhere in the world. The snag: this also allowed anyone in the world to read your e-mail, if they could find a way round the password protection. And that was scarily easy: even if your intended victims hadn’t been silly enough to set their passwords to, say, the names of their favourite pets, it was often a simple matter to use basic personal information, like a mother’s maiden name, to reset the password on the Forgotten Password page. Worse, it was (and still is) quite normal practice to store every e-mail you have ever sent and received on a server, ready for a hacker to pore over a lifetime of indiscretions. And in case you think this is just paranoid speculation: it’s happened, and it’s been nasty.[1]
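To put some rough numbers on “scarily easy” – the pool sizes below are ballpark guesses for illustration, nothing more – compare how many guesses a snooper needs for a typical security question against even a modest password:

```python
import math

# Back-of-envelope comparison of guessing difficulty, measured in bits.
# The candidate-pool sizes are illustrative guesses, not measured data.
pools = [
    ("favourite pet name", 1_000),     # plausible pet names to try
    ("mother's maiden name", 10_000),  # plausible surnames to try
    ("8-char password", 62 ** 8),      # mixed-case letters and digits
]

for label, pool in pools:
    print(f"{label:22s} ~{math.log2(pool):5.1f} bits")
```

A Forgotten Password page that falls back on questions like these quietly replaces the password’s security with the question’s.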
In defence of Joe Public, it’s not easy to protect yourself when big IT companies routinely prioritise convenience over security, or – worse still – offer insecure products as standard when safer solutions already exist. When broadband first became popular, the “broadband modems” supplied by most ISPs offered virtually no protection from the outside world, even though routers with built-in firewalls were available at the time. (Windows Firewall and other firewalls built into computers aren’t enough; it only takes one rogue program to switch them off and your protection’s gone.) Routers only became standard when wi-fi became popular, but this introduced the equally bad problem of unencrypted wi-fi; that was the default, and configuring encryption yourself was a nightmare. Internet suppliers have, thankfully, caught up with this and now routinely supply pre-configured encrypted routers, but even now new problems are emerging. Thanks to Facebook, we are being encouraged to put all of our personal information in semi-public view, even though this can be used by fraudsters to impersonate us. Meanwhile, smartphone suppliers make it so easy to put so much personal information on your latest gadget that stolen smartphones are going like hot cakes – not because of the handset, but because of all the data you can use for identity fraud.
Large businesses, however, often make the opposite mistake to domestic users. They heavily lock down what users can do on the system, bog their computers down with bloated security software, refuse to consider any new software or upgrade of existing software without an overblown, laborious “impact analysis” (meaning in practice that everything new becomes cost-prohibitive), and sometimes even prevent staff from encrypting data because it’s not in line with the security policy. This Fort Knox-style mentality is just as dangerous, because it gives staff a choice: either work at a snail’s pace on inefficient systems, or take short cuts such as bypassing security features or sending confidential documents to their home computers. I can’t help thinking that no-one would have copied poorly-encrypted data to two CDs that got lost in the internal mail had suitable data transfer or encryption software been made available.
[1] Okay, the tabloid e-mail intrusion went a bit further than this. It wasn’t just cracking webmail passwords, it was outright hacking of people’s own computers. But I’ll bet it began with easy opportunist snooping first and went on to more determined hacking once they realised how much information people were leaving around and how profitable the scheme was.
-
How to win attention and annoy people
Search Engine Optimisation is big business in IT. It’s just a pity it’s become so intrusive.
It used to be this simple.
(Photo from SMBSEO.com)
Can I have your attention please? I apologise in advance, but I am about to abuse my position as a software tester. No, I’m not going to sell confidential client information to Russian spies or anything like that, but I am nonetheless going to misuse this blog to further my personal interests outside of my job. All right. Are you ready? Let’s do a countdown and get this over with. 5 … 4 … 3 … 2 … 1 …
Actually, you needn’t click there if you don’t want to. I’m not too fussed either way. For those who didn’t bother clicking, that was a link to my web site on play writing, which is what I do in my alternate life. I don’t care too much whether you view it – seriously, there can’t be that many people with interests in both software testing and theatre in the vicinity of Durham – but that’s not the purpose of the link. The purpose of the link is for Google and other search engines to know it’s there. Because the more links Google finds to your page, the higher it gets up the page rankings.
It used to be so much simpler. In the olden days, if you wanted some builders in Woking, you looked under “Builders” in the Yellow Pages. Builders and other businesses paid for advertising space, with more money for a bigger advert, and unless you traded as Aaron A. Aardvark or Zzacharias Z. Zzyzz, there wasn’t any real way of gaming the system. This all changed when the internet came along. The early search engines gave the top entry to whichever page used the search term in its text and keywords the most often. This was a reasonable idea – after all, if you’re looking for a web page about Yorkshire, you probably don’t want a food menu from a pub in Dorset that happens to include Yorkshire puddings in its Sunday roast – but it inevitably resulted in every builder in Woking entering keywords of BUILDERS BUILDERS BUILDERS BUILDERS WOKING WOKING WOKING TRUSTWORTHY RELIABLE QUALITY etc. etc.
So when a couple of researchers at Stanford University came up with the idea of “PageRank”, which instead considered how many websites link to yours (and how prominent the linking pages are), Google became the overnight success we all know about. But anyone hoping for an end to the search engine wars will be disappointed. I confess I find chasing pageviews on this blog and my own site addictive, but I have better things to do than put links on as many sites as possible. If, however, your business depends on web visibility, there’s a lot more at stake. And this is why Search Engine Optimisation (SEO) is such big business.
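For the curious, the core idea fits in a few lines. This is a toy, textbook power-iteration version of PageRank over a made-up three-page web – the page names are mine, and the real Google is vastly more sophisticated:

```python
# Toy PageRank: a page's score depends on how many pages link to it,
# weighted by how important (and how generous with links) those pages are.
# Hypothetical three-page web: page -> pages it links to.
links = {
    "builder-a": ["directory"],
    "builder-b": ["directory", "builder-a"],
    "directory": ["builder-a", "builder-b"],
}

DAMPING = 0.85  # standard damping factor from the original paper
rank = {page: 1.0 / len(links) for page in links}

for _ in range(50):  # iterate until the scores settle
    rank = {
        page: (1 - DAMPING) / len(links)
        + DAMPING * sum(rank[p] / len(links[p]) for p in links if page in links[p])
        for page in links
    }

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

The directory page, with links coming in from both builders, floats to the top – and that, in a nutshell, is what link spammers try to game.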
Now, it wouldn’t be fair to portray SEO companies as the bogeymen. Plenty of SEO techniques, such as placing appropriate links on other websites, are considered perfectly legitimate. I have absolutely no problem with my Google searches being made relevant to what I’m looking for. If I’m looking for builders in Woking, I’m quite happy for SEO companies to ensure that no-one I might be interested in gets overlooked. The problem is that once market forces come into play, a “relevant web experience” often means bombarding users with whatever gets money out of them. As soon as Google started judging importance by links from other sites, attention turned to those other sites – and the lengths some sites went to was astonishing. Blogs and open wikis used to get plagued with irrelevant links (with reasons for the link frequently no better than “check out this cool site”). Many platforms, including Wordpress and Wikipedia, now use the “nofollow” tag to stop this practice paying off, but whether this actually deters link spammers is anyone’s guess.
There is also a big business in linkfarms: sites that serve no function other than trying to push up another page’s Google place. Sites that get caught by Google are, in effect, disqualified and put to the bottom of the list. One high-profile casualty in 2006 was BMW. Three years earlier, a company called SearchKing was rumbled and penalised for blatantly gaming the system, and promptly responded by suing Google. They got nowhere, but it says something about how much some people consider their Google rank an entitlement they’ve bought. Lately, Google appears to have gone to war with WebPosition Gold for sending automated queries to probe Google’s rankings (and, one might suspect, find the loopholes). But the question remains: how much of this practice goes undetected?
Then there’s the practice of drawing people to your site who were looking for something else. I got a surprising number of visitors to my blog entry about software patents who were looking for pictures of the Montgolfier brothers. That was purely by accident, but there is a growing suspicion this sort of thing is being exploited on purpose. BMW was found to have redirected users to a site with far fewer of the keywords the user searched for. It was claimed by Private Eye that journalists are encouraged to put popular search phrases into articles in order to increase web traffic, and therefore advertising revenue. There’s no knowing where this will end.
Is there a solution to this? I honestly don’t know. I’ve previously argued you could solve the software patent problem by scrapping patents, but you can’t exactly solve this problem by scrapping search engines. It’s all very well telling Google to try harder, but they are already in a fight to stay one step ahead of the link spammers. I’m almost tempted to suggest a return to an internet version of the Yellow Pages, where people looking for adverts can go to a web page where prominence is once more governed by how much you pay for advertising cyberspace – but as paid adverts are an even bigger pain in the backside, I can’t see the public buying into this idea.
-
The Ghost of Vistas Past
Damage to consumer confidence can haunt you for a very long time. Windows Vista is the classic case.
In case you’ve been locked up in a wardrobe for the last two months, Windows 8 is on the way. At the launch a few weeks ago, Microsoft demonstrated how the next version of their operating system is designed to work on tablets. The fact that Microsoft is focusing on tablets is interesting, because it shows how high the stakes are. For over a decade, bar a few niche markets (Macs for high-end users and graphic designers, Linux for the tech-savvy), Microsoft has been the undisputed king of desktop PCs, and none of Microsoft’s competitors are anywhere near taking their crown.
The problem is: they don’t have to be. The computing market is moving on. Many things that used to be done on a Windows XP machine can now be done on a smartphone or a tablet, and consequently, many desktop PC users are switching to these devices. And so far, both tablets and smartphones are dominated by Apple and Android. The nightmare scenario is that Android makes the leap from tablets to the desktop and undercuts Microsoft’s safest market. Little wonder Microsoft wants Windows 8 established on touchscreen computers so badly.
It should not have been this way. Tablet devices such as the iPad are really just laptops with touchscreens instead of keyboards, so Windows ought to have had an easy transition from one to the other. Instead, tablets are being treated as oversized mobile phones, and Apple and Android, both miles ahead of poor old Windows Phone 7 in market share, got there first. The thing is, Windows Phone 7 actually got a fairly good reception on its launch, and Windows 7 on the desktop didn’t do too badly either, so what is going wrong in the touchscreen market?
The answer, I suspect, is lack of consumer confidence. It came to a head with Windows Vista, and Microsoft never truly recovered from it. Now, it would be easy to say that it’s all Microsoft’s fault for taking customers for granted and making no effort to get their stuff working properly. I don’t think it’s that simple. Microsoft is not run by IT-illiterate pen-pushers, and they are quite capable of producing popular and reliable devices – the success of the Xbox and Kinect speaks for itself. So where did they go wrong with Vista? Without an open test and development plan, it is hard to know what they were thinking, but my theory is that it wasn’t that they didn’t see the need to test; they just underestimated the work that needed to be done. For what it’s worth, I think the key mistakes were:
- Lack of attention to hardware compatibility. In the days of Windows XP, Microsoft could get away with expecting manufacturers to get their hardware compatible with Windows (on pain of going out of business). When Windows Vista came along, suddenly Microsoft had to do it the other way round, and evidently didn’t realise the amount of work involved.
- Too much trust in the upgrade facility from Windows XP. Upgrading an operating system has never been that reliable, and whilst there was no harm in providing the facility for those people who understood and accepted the risks, it was a big mismanagement of expectations to present this to Joe Public as the quick and easy way of moving from XP to Vista.
- Underestimating the implications of a five-year gap between releases. Between the release of XP in 2001 and the release of Vista in 2006, we saw the widespread adoption of home broadband, wi-fi, CD writing, digital cameras, MP3 music, online financial transactions and – unfortunately – a whole load of security threats abusing these technologies. All of these were accommodated in Windows XP with one sticking plaster after another. Incorporating all of these into a consolidated modern system was inevitably going to take a long time to get right.
- Over-dependence on high-spec systems. Microsoft products had been criticised before for getting more bloated as computers got faster, but Windows Vista took this to a whole new level, with even new computers pre-installed with it struggling to meet the system demands. Some performance testing on lower-spec machines should have set alarm bells ringing much sooner.
- Lack of caution with digital rights management. It’s not fair to blame DRM on Microsoft completely, because they were leaned on by the big film companies (who, let’s be fair, had their own sources of revenue to worry about). But when you introduce a feature that’s designed to restrict what users can do rather than enhance it, the last thing you want is to end up also stopping them doing perfectly legitimate things. DRM was always going to be controversial, but giving the impression that faults elsewhere in the system were a price worth paying was really asking for trouble.
I could be wrong; for all I know, it was a different set of mistakes. But there was little doubt about the result: Windows Vista needlessly reinforced Microsoft’s reputation as the maker of the unreliable software we all love to hate. Windows stayed king of the desktop PC for one reason and one reason only: most people considered switching to the alternatives too much work or too expensive. And in the smartphone and tablet market, where Microsoft doesn’t dominate, that’s not good enough. The moral of the story is that even after you correct your mistakes (as Microsoft largely did with Windows 7), the damage to your reputation can haunt you for a very long time.
Microsoft will survive somehow. We saw with the Xbox that Microsoft can still compete, and we saw with the Kinect that Microsoft can still innovate. Microsoft has been doing better in the server market than it used to. Even if the apocalyptic predictions of the demise of the desktop PC come true, Microsoft has deep enough pockets to hold out until they find a new role in the IT market. But when we are even contemplating this of a company that once wowed the world with Windows 95, something has gone seriously wrong, and the rest of the world needs to learn lessons from this.
-
All hail the Ocelot
Linux and open source software isn’t for everyone. But it’s a good way to learn how software is developed and tested.
As well as preying on rodents and resting in trees, ocelots are surprisingly skilled in optimising recently-overhauled desktop environments.
(Photo: Danleo, Wikimedia Commons)
Yesterday (October 13th) was an exciting day for many reasons. It marked the first anniversary of the completion of the rescue of the 33 Chilean miners. Fans of classic 80s movies saw the return of Ghostbusters to the big screen. It was also the day to celebrate 65 years since the adoption of the constitution of the French Fourth Republic. All of these fascinating events, however, paled into insignificance against the most eagerly anticipated event of all: the release of Ubuntu 11.10, codenamed Oneiric Ocelot.
For those of you who don’t know what's so Oneiric about an Ocelot, I should explain what all the excitement is about. Ubuntu is a Linux-based operating system, which works as an alternative to Windows, and this is their latest six-monthly upgrade. (If you want to know why you’d choose to name an operating system after a South American wildcat, this page should explain.) Like most Linux distributions, it’s free – and not just free to use (like Adobe Flash Player or Microsoft Word Viewer is). It’s free for anyone to copy, modify and redistribute, as long as any derivative you produce is also free to modify. Only a small number of Linux users actually modify software this way, but the fact this is possible has a huge influence on how Linux is developed. Windows fans argue Linux is just a mish-mash of cobbled-together software written in backrooms, whilst Linux fans argue that the open collaborative way Linux is developed is actually better than Microsoft’s work behind closed doors. Anyway, the arguments could go on for years, but this is a blog about software testing – anyone who wants to continue on this subject can read why Windows is better than Linux or why Linux is better than Windows.
From a software tester’s point of view, however, there is a big advantage to Linux: you can learn a lot about software development and testing. All of Ubuntu’s Alpha and Beta releases are publicly available to download and try out, and they are a valuable lesson in just how much work is needed to test and stabilise software. The Alpha 1 release typically comes out after just 1½ months of development with all the major changes already in place – but expect the flashy new features to crash the system the moment you sneeze at it.[1] It is only over the next three months that later alpha releases transform the bug-ridden mess into something reasonably stable. The alpha releases are also the stage where features get pulled – they can be features that looked good on paper but don’t work in practice, or simply features that got a hostile reception from early adopters.
When you reach the Beta releases, you’ll probably come across a system that looks all polished and ready to go. It isn’t. It may load up fine, and all the programs you fancy using may fire up and appear to work, but it’s only when you use them in earnest that you run into the annoying bugs that haven’t been picked up yet – bugs that can still add up and render the system unfit for purpose. This sort of thing, I suspect, is a trap many projects fall into: impatient testers or managers try out beta-quality software, watch it run smoothly on face value, go ahead and deploy it, and learn its shortcomings the hard way.
Then there’s the open bug tracking system. Anyone who finds a bug – whether in an alpha, beta or stable release – can report it. But if you want your bug report to be taken seriously, you have to do it properly. Simply writing “Firefox didn’t work” is useless. If, however, you state exactly what Firefox is doing wrong, what you were doing when it happened, whether the error is reproducible or just intermittent, what version you were using, and anything else that might help developers pin down the bug, you will be far more likely to get it fixed. If the bug you’ve found has already been reported, you can follow the bug report to see how it was handled and whether a fix is on the way.
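To illustrate the difference – this is an invented report for the sake of the example, not a real Launchpad entry – compare the two:

```
Useless: "Firefox didn't work."

Useful:  Firefox 7.0.1 on Ubuntu 11.10 Beta 2 (64-bit) crashes on startup
         whenever the profile has a master password set. Reproducible every
         time; a freshly created profile starts fine. Steps: 1) set a master
         password, 2) close Firefox, 3) reopen Firefox. Crash log attached.
```

The second version tells a developer what broke, where, and how to make it break again – which is most of the battle.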
Following open-source projects doesn’t teach you everything. The kind of testing you can observe – the unstructured error reporting from users as and when they come across bugs – is useful, but much of the work done by paid software testers, in both open- and closed-source projects, is systematic testing designed to track down bugs and make the system fit for purpose. There are many other concepts in software testing, such as testing models, reviews, and static/dynamic analysis, that generally go unnoticed by users of alpha and beta releases. But it’s still a good way to try out the world of software testing, and if you find it interesting, perhaps you can make it your full-time job.
Anyway, Precise Pangolin Alpha 1 comes out on 12th December 2011. I can hardly wait.
[1] Oh, and if you are thinking of trying out an alpha release, you probably want to do it on a spare computer. In theory, an alpha release of an operating system can sit quite happily alongside a partition of Windows or Linux that you use for work, but as test releases by their very nature are liable to do disastrous things you didn’t expect, you probably want to keep them out of harm’s way.
-
Rest in peace, Steve Jobs
When you're an advocate of Microsoft/Apple/Linux, it's tempting to do nothing but pick faults with the two competitors. I have had a go at Apple for their patent lawsuits against Android smartphones. But that should not distract us from what Apple achieved under Steve Jobs's leadership. Technology is not just about creating something new - anyone, for instance, could have created a miniaturised computer capable of playing MP3 files - it's also about recognising what people want. There is no shortage of inventions out there that failed to take off simply because people saw no point in switching from what they were using before. But Steve Jobs had an extraordinary talent for identifying what would grab people's interest, and for selling those ideas to the public.
In the years when the world subscribed to all things Microsoft, Apple kept a niche with the iMac: tightly integrated software and hardware, praised for its reliability, and still the number one choice for artists and graphic designers. With the iPod, iPhone and iPad, Apple pioneered products that were previously unheard of to the everyday public. Google's Android has since taken a significant chunk of the smartphone and tablet market, and Microsoft's Windows Phone 7 can't be dismissed just yet, but no-one can take the title from Jobs as the man who introduced these products in the first place.
We will never know what future innovations Jobs may have brought to Apple, but one thing is certain: the loss of Steve Jobs yesterday is a huge loss to both Apple and the world.