March 26, 2008
Test ASSURANCE Is The Real Goal

The problem with Test-driven Development is that it's got the word "test" in the title. It's actually not so much about testing your code as it is about using tests to specify what code you should write in the first place. But the presence of the dreaded "T" word confuses the issue, and hampers meaningful discussions or attempts to adopt TDD practices.
So, quite understandably, TDD advocates have been keen to ditch the "T" word to make the practices easier to promote. We now have Behaviour-driven Development, for example. And talking of examples, we also have Example-driven Development.
TDD, BDD, EDD, Use Case-driven Development and so on all fall under the more general banner of Scenario-driven Development. Ho hum. All good stuff.
I'm equally keen to disassociate myself from the term Test Coverage. Like "Test", the very mention of coverage sends waterfall traditionalists and code-and-fixers alike into a frenzy of objections and "ah, but"-isms.
The main objection, of course, is that - everybody sing along now - HIGH TEST COVERAGE DOESN'T NECESSARILY MEAN HIGH QUALITY CODE.
And they're absolutely right. Assert.IsTrue(true) isn't going to catch many bugs, is it?
As an indicator of quality, test coverage is about as useful as the number of police on the street is as an indicator of the level of crime. We also need to know how good the police are at catching criminals. Do they catch only 20% of murderers within 20 years of the crime? I wouldn't feel very safe if that were the case.
I'm not looking for high levels of coverage. That's not what the game's about, just as TDD isn't actually about testing - though testing is undeniably a part of it.
I'm interested in the level of test assurance. If I wanted to gauge the level of assurance our police force offers me, I might commit 10 crimes and see how many times I get caught, and how quickly after the fact.
They tell me, for example, that in my neighbourhood, less than 10% of burglary investigations result in the thief being caught and prosecuted - successfully or unsuccessfully. That's what I call low assurance*.
We can ask the same question about our tests: what are the chances that our tests will catch a particular kind of bug within a given timeframe? Mutation testing is one way of gauging test assurance. By deliberately introducing bugs into our code, we can see which ones slip through the net of our automated (and manual) tests and inspections.
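To make the idea concrete, here is a minimal sketch of mutation testing in Python. The function, the hand-made mutants and the tiny test suite are all invented for illustration (real tools such as PIT for Java or mutmut for Python generate mutants automatically); the point is simply that assurance is the proportion of deliberately introduced bugs our tests actually catch.

```python
# A toy illustration of mutation testing. Everything here is invented
# for the example - real mutation tools generate the mutants for you.

def price_with_discount(price, rate):
    """Original implementation: subtract a proportional discount."""
    return price - price * rate

# Deliberately introduced bugs ("mutants") - variants of the original.
mutants = {
    "flipped sign": lambda p, r: p + p * r,
    "ignores negative prices": lambda p, r: p - p * r if p > 0 else p,
}

def run_tests(fn):
    """Our automated suite, run against a candidate implementation.
    Returns True if every test passes (i.e. the mutant SURVIVED)."""
    try:
        assert fn(100, 0.10) == 90
        return True
    except AssertionError:
        return False

# Assurance: the proportion of mutants the suite kills (detects).
killed = sum(1 for mutant in mutants.values() if not run_tests(mutant))
print(f"killed {killed} of {len(mutants)} mutants")
```

A surviving mutant points straight at a gap in assurance: nothing in the suite exercises negative prices, so the "ignores negative prices" bug slips through the net even though the mutated line is, in coverage terms, fully covered.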
Coverage is a factor in assurance, and that fact isn't going to go away. Bugs that appear in code that is not tested - no matter how ineffectively - will not be caught. Full stop. At each stage of assurance (design, coding, UAT), they will almost certainly slip through to downstream activities, where they will cost exponentially more to fix.
So coverage is only meaningful in so much as zero coverage offers zero assurance. But assurance is the real point of it. And there's more to that than coverage. (But not less to it, let's be clear about that.)
So from now on I resolve to talk about test assurance just as many experienced TDD-ers now talk about being "behaviour-driven" or "example-driven".
I'm sure the code-and-fixers will find some other objection, though. There's just no end to their ingenuity...
* I have no idea what the testing equivalent of a Community Support Officer is. Ideas on a postcard, please