March 22, 2007
Successful Applications Had Better Be Good Enough

One of the most frustrating aspects of software development is that - no matter how great a job we do - there are no guarantees that our software will get used. I've seen some pretty crappy applications propping up global enterprises, while elsewhere some jolly fine code got chucked in the bin because there was no uptake from the users.
It seems there's no accounting for taste, and we'd do well to remind ourselves that no matter how good or useful or stable or scalable our software is, it's only a small part of the overall picture.
Blind luck, I suspect, plays a very significant role in the success of our software after we've delivered it. Like forest fires - yes, I'm harking back to that old chestnut - we can never know with any degree of certainty whether a small fire will quickly burn out and singe a few trees, or suddenly spread like - well, like fire - through the entire forest.
This presents us with an extra dimension when we're managing portfolios of projects or systems: success. After our first release, we need to monitor not only how well development is going, but also how well-received each release has been.
There's not much point in being the best band in the world if nobody's listening. And there are all sorts of risks in being the worst band in the world with everybody listening. In a funny kind of way, it's probably better in the long run to deliver a good quality application that nobody uses than a poor quality application that everybody ends up relying on.
Heavily-used bridges need to be sturdy
If you saw 10,000 people walking across the same bridge, you'd probably ask yourself if the structure is sturdy enough to hold their weight. If I found I had an application being used by 10,000 people, I'd ask myself how sturdy that code is... (I'd also be asking myself how much it will cost to maintain.)
I've seen plenty of software projects become victims of their own success. Users jump on the first release of the software and high expectations are raised for the next release. In this situation, you'd better hope that the code is in good shape.
Sadly, since 90%+ of projects deliver substandard code, and since user take-up is largely down to chance, the odds are stacked against the beleaguered IT manager. Typically, he ends up with a portfolio of legacy systems that are riddled with bugs, cost a fortune to maintain, hold the business back by making change too slow and expensive, but that would cause the business to collapse if they were switched off.
Considering all that, I think we take the decision to roll out applications to the business far too lightly. Once working software has been delivered, the next questions should be: "How would this thing perform if we rolled it out to N users?" "How much would it cost to maintain?" "How much downtime might we expect, and what would the impact of that be?" "How much new functionality could we expect from the next release of the software?" And so on.
Politically, that's a very tough sell when your boss has just spent $1 million on developing the solution. But sometimes shelving it is the most rational decision. Better to waste $1 million today than to waste another $10 million trying in vain to make it work.