September 11, 2014
Code Metrics, Continuous Inspection & What We Should Tell The Boss

So, just a bit of time during the lunch break to jot down thoughts about a debate we had this morning while running a client workshop on Continuous Inspection. (Yes, that's a thing.)
ContInsp - as I have just decided to call it, because "CI" is already taken - is the practice of frequently monitoring the code quality of our software, usually through a combination of techniques like pair programming, static analysis and more rigorous code reviews, to give us early warning about problems that have recently been introduced. Code quality problems are bugs of a non-functional nature, and bugs have associated costs. Like functional bugs, code quality bugs tend to cost exponentially more to fix the longer we leave them in the code (because the code continues to grow around them, making them harder to fix.) Code smells that end up in the software therefore have a tendency to still be there years later, impeding the progress of future developers and multiplying the cost of change. For this reason, we find that it's better to catch these problems and deal with them earlier - the sooner the better. Hence, in the spirit of Extreme Programming, we turn the code inspections dial up to "eleven" and do them early and as often as we can afford to. Just as it helps with continuous testing and continuous integration, automation makes ContInsp more affordable and more viable.
A decent ContInsp set-up might incorporate a static code analysis tool into the build cycle, effectively adding a small suite of non-functional tests to the Continuous Integration regimen. If, say, someone checks in code that adds too many branches to a method, a red flag goes up. Some teams will even have it set up so that the build fails until the problem's fixed.
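To make that concrete, here's a minimal sketch of the kind of check such a tool might perform, using Python's standard `ast` module. The list of branch-introducing node types and the threshold of 3 are illustrative assumptions, not anyone's real rule set:

```python
import ast

# Node types that introduce a branch; a rough proxy for cyclomatic complexity.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def branch_count(source, func_name):
    """Count branch points in the named function's body."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name == func_name:
            return sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
    raise ValueError(f"no function named {func_name}")

MAX_BRANCHES = 3  # hypothetical quality-gate threshold

code = """
def classify(x):
    if x < 0:
        return "negative"
    for d in range(2, x):
        if x % d == 0:
            return "composite"
    return "prime-ish"
"""

count = branch_count(code, "classify")
print(count)                  # 3: one if, one for, one nested if
print(count <= MAX_BRANCHES)  # True - this check-in would pass the gate
```

A CI server would run a check like this against every commit and fail the build (or just raise the red flag) when the count exceeds the threshold.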
Anyhoo, much discussion of ContInsp, and code analysis, and metrics and what we should measure/monitor. But then the question was raised "What do we tell the boss?"
It's an important question. It seems, from my own experience, to be a potentially costly mistake to report code metrics to managers. Either they don't really understand them - in which case, we may as well be reporting eigenvalues for all the good it will do - or they don't care. Or, even worse, they do care...
Many a time I've seen code metrics used as a stick with which to beat development teams. Code quality deteriorates, the managers beat the team with the stick, code quality deteriorates even more, so they get a bigger stick. And so on.
Really, this kind of information is only practically useful for people who are in a position to make the needles move on the dials. That, folks, is just us.
So the intended audience for things like cyclomatic complexity, class coupling, and fan in/out/shake-it-all-about is people working on the code. Ideally, these people will know what to do to fix code quality problems - i.e., refactoring (a sadly rare skillset, even today) - and will be empowered to do so when they feel it's necessary (i.e., when the code smell is inhibiting change.)
My own experience has taught me never to have conversations with managers about either code quality or refactoring. To me, these issues are as fundamental to programming as FOR loops. And when did you last have a conversation with your boss about FOR loops?
But managers do have a stake in code quality. That is to say, they have a stake in the consequences of code quality or the lack thereof.
If code is hard to read, or takes hours to regression test, or has the consistency of spaghetti, or is riddled with chunks of copied-and-pasted logic, then the cost of changing that code will be higher than if it were readable, simple, modular and largely duplication-free. Managers might not understand the underlying causes, but they will feel the effects. Oh boy, will they feel them!
So we've been having a lively debate about this question: what do we tell the boss?
But perhaps that's the wrong question. Perhaps the question should really be: what should the boss tell us?
Because the really interesting data - for the boss and for us - is what impact our decisions have in the wider context. For example, what is the relative cost of adding or changing a line of code in the software, and how does it change as time goes on?
I've seen product teams brought to their knees by a high cost of change. In actual fact, I've seen multi-billion dollar corporations brought to their knees by the high cost of changing code in a core product.
Another important question might be "How long does it take us to deliver change, from conception to making that change available for actual use?"
Or "How frequently can we release the software, and how reliable is it when we do?"
Or "How much of our money are we spending fixing bugs vs. adding/changing features?"
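That last question lends itself to a crude measurement. As a sketch only - the commit log, the "fix:"/"feat:" message convention, and the use of lines changed as a proxy for spend are all hypothetical assumptions here - a team could tally change volume by commit type:

```python
from collections import Counter

# Hypothetical commit log: (message, lines changed). In practice this would
# come from `git log --numstat` or an issue tracker, not a hard-coded list.
commits = [
    ("fix: null check in invoice totals", 12),
    ("feat: add CSV export", 240),
    ("fix: rounding error in tax calc", 8),
    ("feat: support EU VAT rates", 310),
    ("fix: race in report generator", 45),
]

effort = Counter()
for message, lines in commits:
    kind = "bugfix" if message.startswith("fix") else "feature"
    effort[kind] += lines

total = sum(effort.values())
print(f"bug-fixing: {100 * effort['bugfix'] / total:.0f}% of change volume")
# → bug-fixing: 11% of change volume
```

Tracked release over release, a rising bug-fix share is exactly the kind of trend a boss can act on without ever needing to know what cyclomatic complexity is.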
All of these are questions about the medium-to-long-term viability of the software solution. I think this is stuff we need to know, particularly if we believe - as I strongly do - that most of the real value in software is added after the first release, and that iterating the design is absolutely necessary to solving the problem and reaping the rewards.
The most enlightened and effective teams monitor not just code quality, but the effects of code quality, and are able to connect the dots between the two.
Ah, well. Braindump done. So much for lunch. Off to grab a sandwich.