April 11, 2009
Being Skeptical About Metrics != Dismissing All Metrics
There's a growing and vocal contingent within the software development community who believe that all software metrics are bad.
Curiously, they don't believe that metrics are bad when their doctor tells them that their blood pressure is abnormally high, or when their union representative tells them that their salary is below the national average for their job description.
But attach any kind of number to anything related to the business of writing software and it's a different story. Metrics, to some, are the Devil's work and must be exorcised from software development.
It's true that metrics can be, and often are, abused in management. Decades of cooked books, skewed studies and notoriously optimistic "government statistics" have conditioned many of us to mistrust numbers. And rightly so. Whenever anyone presents you with a measurement, it's wise to be skeptical and to question what it might mean and how it was collected. But dismissing all measurements as a rule is not skepticism. It's just as much an act of blind faith as believing every statistic you see.
Software development without metrics is woolly and highly subjective. It's like the Olympics without stopwatches, or cookery without a set of scales. Metrics debunkers sometimes paint the opposite extreme as the reality of metrics - namely, total reliance on arbitrary numbers, even when they fly in the face of our subjective experience.
I would not advocate blindly accepting what your metrics tell you. But I would definitely not advocate ignoring the numbers, either. If Max Planck had thought like that, we'd never have ended up with quantum mechanics, which - measured in terms of the accuracy of its predictions - is our most successful science. Sometimes the reality is what the numbers say it is, and not what our eyes or our instincts tell us.
And sometimes, especially in very complex matters, simple measurements hide a more subtle truth, and our instincts serve us best - provided we've had the time to develop them to the point where they can be trusted. Estimating is a great example: the first instinct of a senior programmer is just as likely to be right as - if not more likely than - any number derived through some kind of empirical COCOMO nonsense.
But that doesn't apply to every aspect of software development. If a coverage tool tells you that 50% of your code isn't executed during a test run then you know that 50% of your code isn't being tested at all. If a static analysis tool tells you that one of your methods has 500 lines of code and a cyclomatic complexity of 80 then you know it will be practically untestable and very difficult to change. If JDepend tells you that you have a package full of concrete classes that is very heavily depended upon then you know that making changes to code in that package is likely to have a bigger knock-on effect because of the lack of substitutability.
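Metrics like these aren't magic; they're simple counts. As a rough illustration of the kind of measurement a static analysis tool makes, here's a minimal sketch of McCabe cyclomatic complexity in Python - one plus the number of decision points in the code. This is an illustrative approximation, not how any particular tool implements it; the choice of which AST nodes count as decision points varies between tools.

```python
import ast


def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + the number of decision points.

    Counts if/elif, loops, exception handlers, conditional expressions,
    and each extra operand in an `and`/`or` chain as a decision point.
    """
    tree = ast.parse(source)
    complexity = 1
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.ExceptHandler, ast.IfExp)):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # `a and b and c` has two short-circuit branch points
            complexity += len(node.values) - 1
    return complexity


snippet = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            pass
    return "positive"
"""

print(cyclomatic_complexity(snippet))  # → 6
```

A method scoring 80 on this kind of count has, in effect, dozens of independent paths through it - which is exactly why the tool's warning about testability is worth taking seriously.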
Sure. Absolutely. I won't deny that metrics throw up false positives and can sometimes send us on wild goose chases. But I'll tell you this: I have never seen a project team deliver high-quality software without monitoring quality using some kind of measurements. I suspect that's because testable goals tend to be easier to aim for.
By all means, be a metrics skeptic. Question every number you see. If a metric tells you something about reality, go look at reality and use your hard-earned judgement to see if you agree with the metric.
But dismiss all software metrics out of hand at your peril. Because I know that software teams who don't measure quality tend not to deliver very good software. Successful development teams use metrics. Of course, there are plenty of teams who don't measure anything and who think they're successful. There are plenty of people who think they have psychic powers, too.