July 27, 2005


More On Metrics

Of late, I've been on a metrics mission. Time was when I truly believed that measurement was the key to achieving higher-quality software at lower cost and in tighter timescales. Having worked in more mature software development organisations, which were no strangers to metrics, I got the same feeling there as I do now practicing test-driven development: it boosts your confidence to have clear, objective indicators of what you're doing right and where you need to improve.

Under a straightforward measurement regime, I suspect most developers would improve much more than if they just stuck a finger in the air and said "I think I'll go this way". Think about it - how fast would you improve as a sprinter if nobody ever timed you? Understanding your capability is one half of the battle in increasing it.

But, of late, metrics have been falling out of favour with the movers and shakers in software best practice. It's certainly true that there are "lies, damn lies and statistics", and that metrics at best raise more questions than they answer. But the questions they raise are simply ignored on many projects. I've seen software of very low quality released into businesses, but since nobody knew for sure what the quality was, it became a matter of opinion - your word against theirs. If you can demonstrate that the number of bugs per thousand lines of code is ten times higher than the average for similar kinds of project, and if you can demonstrate the medium-term and long-term effects of those bugs on productivity and total cost of ownership, managers might be less inclined to inflict the software on the business.

Measurement is rare in software development. That perennial litmus test of IT trends, jobserve.com, reveals that few if any organisations are measuring themselves. I don't see any evidence that things in that area have improved much since the mid-nineties, and that doesn't bode well for the profession as a whole.

The impact of metrics can be profound, and most metrics are relatively easy to collect. There are pitfalls, of course, but they pale in comparison to the consequences of not measuring at all. In other disciplines, we're used to measurement, and welcome the feedback it brings, even if it needs to be taken with a pinch of salt occasionally. If Project X costs $40 per line of code and Project Y costs $45/LOC, that doesn't tell us much. But if X costs $40/LOC and Y costs $400/LOC, then surely there are questions that need to be asked?

Typically, if you go to the IT director and say that you think there's a problem on a project, the response is for him to stick his fingers in his ears and say "la, la, la! I'm not listening". Managers respond better when good evidence is presented. Shocking, I know. If they trust your judgement enough to let you develop business-critical systems for them, why won't they trust your instincts when you think something's amiss? The sad truth is that they didn't hire you because they trust you. Well, probably not, anyway. It'll take more than your word to get the message across, and this is where metrics can be most powerful.

And, of course, there's always the possibility that your instincts are wrong. If I get it right 51% of the time, I'm doing very well. I might instinctively believe that better unit test code coverage will improve quality, but if 70% of the defects actually come from the requirements, I might be barking up the wrong tree. Maybe doing UI walkthroughs would be a bigger help. Or perhaps driving development with automated acceptance tests would offer more value. We need our instincts, and we should never be slaves to metrics, but ideally we should use a combination of the two, balancing our innate intelligence - which can be highly subjective - against objective measurements that make the whole process just that little bit more scientific.

Certainly any attempt at improving development capability - or any capability - should start with testable definitions of the improvements we want to achieve. Otherwise, we're just guessing at the effects of the "improvements" we make, and we end up with organisations adopting RUP or eXtreme Programming on simple blind faith that it will make things "better".

Having stayed away from metrics for some time, I'm now coming round to the conclusion that they are crucial to the whole business of "better, faster, cheaper". The response from my clients - particularly at senior management level - reaffirms this belief. After much talk about why agile would be "better", or why modeling is "good", in the face of the fair degree of skepticism that's common among people who haven't tried these things before, I've been surprised at how quickly they change their minds when presented with a handful of simple measures that illustrate the real impact of best practices.

Take this suite of measures designed for a project dashboard:


    * Function Points/Iteration
    * $/Line of Code
    * Defects/Thousand Lines of Code
    * Design quality (function of maintainability, coupling and cohesion)
    * % $ spent on new features vs. change requests vs. bug fixes per iteration


Using this suite of metrics, code-and-fix approaches become much easier to spot. Typically, as the project progresses, the amount of time and money spent fixing bugs increases and productivity (FP/Iteration) falls. At the same time the design quality also decreases, which further hurts productivity and costs. It's not uncommon to see projects end up spending all their time fixing bugs, with the delivery of new features dropping to zero. You might want to add other measures, like the hours worked by each developer per iteration, to see what effect longer hours have on productivity and quality. Again, the typical trend is to produce less when you work more. Most project managers just won't believe you if you say that overtime won't help you hit the deadline. Metrics can help establish the reality, and that's what they're there for!
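To make that concrete, here's a rough sketch in Python of how the dashboard measures might be calculated iteration by iteration, and how a code-and-fix trend might be flagged. The field names, the crude trend check and every number in it are my own illustrative assumptions, not figures from any real project:

    # Illustrative sketch only: field names, the trend check and all figures
    # are assumptions for the sake of the example, not real project data.
    from dataclasses import dataclass

    @dataclass
    class Iteration:
        function_points: int    # new function points delivered
        cost: float             # total spend for the iteration ($)
        lines_of_code: int      # lines of code added or changed
        defects: int            # defects found against this iteration's work
        bug_fix_cost: float     # portion of spend that went on bug fixes ($)

    def dashboard(it: Iteration) -> dict:
        """Derive the dashboard measures for a single iteration."""
        return {
            "fp_per_iteration": it.function_points,
            "cost_per_loc": it.cost / it.lines_of_code,
            "defects_per_kloc": it.defects / (it.lines_of_code / 1000.0),
            "pct_spend_on_bug_fixes": 100.0 * it.bug_fix_cost / it.cost,
        }

    def looks_like_code_and_fix(history: list) -> bool:
        """Crude trend check: bug-fix spend rising while FP delivery falls."""
        boards = [dashboard(it) for it in history]
        return (boards[-1]["pct_spend_on_bug_fixes"] > boards[0]["pct_spend_on_bug_fixes"]
                and boards[-1]["fp_per_iteration"] < boards[0]["fp_per_iteration"])

    # Entirely made-up numbers, just to show the shape of the trend.
    history = [
        Iteration(30, 60000.0, 12000, 24, 6000.0),
        Iteration(22, 60000.0, 9000, 36, 18000.0),
        Iteration(12, 60000.0, 5000, 40, 33000.0),
    ]
    for number, it in enumerate(history, start=1):
        print(f"Iteration {number}:", dashboard(it))
    print("Code-and-fix trend?", looks_like_code_and_fix(history))

In practice you'd pull these figures out of your issue tracker, version control and timesheets rather than typing them in, but even this much is enough to show the bug-fix share of spend climbing while the function points delivered fall away.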

As a trend starts to appear (e.g., code-and-fix), managers and teams can act. You can spot code-and-fix long before the project is nearing the end and do something about it. Good developers will argue that, when the schedule is slipping, the best thing they can do is put more effort into preventing bugs from getting into the code. Taking the time to get it right is the best way to save time on a project. The less time you spend fixing bugs, the more time you can spend delivering new features or satisfying important change requests. Since avoiding a bug usually takes much less effort than fixing it, every bug avoided can save enough time to fix a whole bunch of bugs later on. But this is an alien concept to a lot of managers. They just won't believe you when you say that the best way to speed up is to take your time.
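As a back-of-envelope illustration - and the figures here are purely assumed for the sake of argument, not taken from any study - suppose a defect caught early costs an hour to put right, while the same defect found in live use costs ten:

    # Back-of-envelope only: every figure here is an assumption for illustration.
    hours_to_prevent_a_defect = 1.0        # e.g. caught by a unit test or review
    hours_to_fix_an_escaped_defect = 10.0  # e.g. found weeks later, in live use

    defects_prevented = 20
    hours_invested = defects_prevented * hours_to_prevent_a_defect
    hours_avoided = defects_prevented * hours_to_fix_an_escaped_defect

    print(f"{hours_invested:.0f} hours spent on prevention avoids roughly "
          f"{hours_avoided:.0f} hours of fixing later - a "
          f"{hours_avoided / hours_invested:.0f}x payback.")

On those (assumed) ratios, the time "lost" to careful, disciplined work comes back many times over before the project ends - which is exactly the trade the schedule-squeezed manager refuses to believe in.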

Going back to my well-worn golf analogy: the best way to speed up your game is to take your time over your shots. Frantically hacking away at the ball isn't going to help. I think the saying is:

More speed. Less haste.

Managers are equally incredulous when you tell them that hiring more developers won't improve productivity, or that overtime won't actually make much difference to the delivery date. I imagine you've had these arguments many, many times - and probably lost more often than not. The result can be a death march project which delivers rubbish software for a very high price, and saps your energy and enthusiasm for the job. The downward spiral of poor quality leading to lower productivity leading to longer hours and lower morale, leading to yet more avoidable mistakes and so on, is like the death roll of a crocodile - seemingly inescapable. And yet some people actually wrestle with crocodiles for a living. Maybe they know something we don't? Death marches are equally formidable, but I'm finding metrics a crucial weapon in defeating them.
Posted on July 27, 2005