December 6, 2008


More On Refactoring & Test Assurance

Picking up from my last blog post about refactoring and test assurance, I just wanted to break the problem down a little further to see if I can explain my position better.

Take an example system (by which I mean a network of behaviourally inter-dependent objects, bound by calls to each other's methods):

[Class diagram: method m() depends on method n() on class c, which in turn depends on method p() on class d]

If I were to refactor class d by extracting its method p() into a new class X, which methods would I need to retest to be sure I haven't broken anything?
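To make the refactoring concrete, here's a minimal sketch of Extract Class applied to d. The class and method names come from the example; the body of p() is invented purely for illustration:

```python
# Before: class D owns p() directly.
class D:
    def p(self, value):
        return value * 2  # illustrative behaviour only


# After Extract Class: p() moves to a new class X, and D delegates
# to it, so callers of d.p() see no change in behaviour.
class X:
    def p(self, value):
        return value * 2


class DRefactored:
    def __init__(self):
        self._x = X()

    def p(self, value):
        # Delegate to the extracted class.
        return self._x.p(value)
```

The refactoring is behaviour-preserving only if d.p() returns the same results before and after for every input its callers use.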

Let's start with an idealised version of our problem, where every method has complete test assurance. That means that any change to externally visible behaviour, no matter how small, will cause at least one test to fail and so be detected.

In the idealised version, I would only need to retest method n() on class c. With perfect test assurance, I can be totally confident that if no tests fail, the behaviour of n() is completely unchanged, and any methods that depend on n() cannot be affected.
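A hypothetical test for n() illustrates what that assurance means in practice. The dependency of n() on d.p() follows the example; the method bodies and expected value are invented:

```python
class D:
    def p(self, value):
        return value * 2  # illustrative behaviour only


class C:
    def __init__(self, d):
        self._d = d

    def n(self, value):
        # n() depends directly on d.p().
        return self._d.p(value) + 1


def test_n_preserves_behaviour():
    # Pinned to n()'s externally visible behaviour: if a refactoring
    # of d changes what p() returns, this test fails.
    assert C(D()).n(3) == 7
```

Because the test exercises n() through its real collaborator, a behaviour-changing refactoring of d.p() shows up here without retesting m() at all.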

But if we don't have perfect test assurance, then we cannot be totally sure that if our refactoring changed the externally visible behaviour of n(), it would be picked up by the tests. Which means there's a non-zero possibility that n() could be broken and that, in turn, so could m().

Perfect test assurance is not achievable in the real world. High test assurance, sure. But not perfect. It's never 100% certain that bugs will be detected. Which means that it's never 100% certain that bugs can't spread and impact indirectly dependent code as a result of a refactoring.

It's a question of risk. Directly dependent code is most likely to be affected. Running tests for directly dependent code - if those tests provide a good level of assurance - would catch most problems. But not all.

Running tests for code that depends on code that depends on what we refactored would catch most of the rest of the undetected problems that our first line of defence might let slip through. But, since we're now talking about two degrees of separation, this is probably going to be an exponentially larger amount of code and an exponentially bigger suite of tests.

And it still might not catch all of the problems. Going to three degrees of separation would offer what I would consider to be a very dependable level of test assurance, provided average assurance at each level is adequate.

And again, we're talking about an exponentially larger amount of code and an exponentially bigger suite of tests. Probably most of the code and most of the tests.

And, because we're talking non-linear here, that would probably apply just as readily to 8 million lines of code as to 8,000. Which is one great argument for keeping software as small as possible.

If you want to run fewer tests between each refactoring, then writing less code, managing your dependencies and writing more effective tests are key levers you can pull.





