February 8, 2008

Learn TDD with Codemanship

Lack Of Test Coverage Guarantees Lack Of Test Assurance

The argument that often gets trotted out when I ask developers to increase their level of automated unit test coverage is that you can have buggy code even with 100% of it covered.

They're absolutely right, of course. But is that a reason to leave code uncovered by your unit tests? It sounds rather like those people who argue that, because science can't prove anything completely, there's no value in approaching something scientifically (alternative medicine, for example).

Granted, just because a line of code is executed by a unit test, that doesn't mean it is guaranteed to be bug-free. But I can tell you one thing for certain - if a line of code isn't executed by any unit test, then it absolutely isn't tested - and that's guaranteed.

I prefer these days to talk about test assurance rather than coverage. The more relevant question is not "how much of our code is executed by tests", but "if we introduce a bug into our code, how likely is it that our tests will detect it?"

If 50% of your code isn't executed by the tests at all, then any bug introduced into that untested half is guaranteed to slip through your unit tests, perhaps to be caught in system testing or - worse still - by users of the released software. Since bugs cost exponentially more to fix the later they're detected, it makes sense to have very high levels of test assurance.

So, yes - I take my hat off to you - coverage isn't a great indicator of test assurance. But lack of coverage guarantees lack of assurance, so I would still want to find out what your coverage is. I may complement that with other indicators of test assurance (like defect injection rates, or mutation testing), but I would not ignore it just because it isn't perfect, and I most certainly would not leave code uncovered just because coverage isn't the be-all and end-all.
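Mutation testing measures assurance directly by answering exactly the question above: if we introduce a bug, do the tests notice? Tools like PIT or mutmut automate this; the sketch below (hypothetical names, one hand-written mutant) shows the underlying idea:

```python
def largest(a, b):
    return a if a > b else b

def mutant_largest(a, b):
    # Mutation: the '>' operator flipped to '<', a deliberately
    # injected bug.
    return a if a < b else b

def suite_detects(func):
    """Run a tiny 'test suite' against func; return True if any
    test fails, i.e. the injected bug was detected."""
    try:
        assert func(3, 7) == 7
        assert func(7, 3) == 7
    except AssertionError:
        return True   # mutant killed - the tests give assurance here
    return False      # mutant survived - a gap in assurance
```

Here the suite passes on `largest` but fails on `mutant_largest`, so the mutant is "killed" - the proportion of mutants killed is a far better indicator of assurance than raw coverage.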

Posted 10 years, 5 months ago on February 8, 2008