December 6, 2008

Learn TDD with Codemanship

Should I Run All Tests After Every Refactoring?

When we run our suites of automated tests, we do it to reduce risk. Specifically, to reduce the risk that we may have broken some code somewhere without knowing it.

When we refactor one or more lines of code, we run the risk of breaking not only the code we refactored, but any code that directly or indirectly depends on it.

To reduce the risk of refactoring, then (and refactoring is all about reducing risk), we need to retest any code that could possibly have been broken.

A smart testing or refactoring framework would figure out which code that is, and then run only the tests required to verify that code still works.
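To make that concrete, here's a minimal sketch of what such a tool would have to do under the hood. The module names and the dependency graph are invented for illustration - this is not how any real framework works, just the core idea: invert the dependency graph and walk it to find everything that could have been broken.

```python
from collections import deque

def affected_modules(dependencies, changed):
    """Given a map of module -> modules it depends on, return the changed
    module plus everything that directly or indirectly depends on it."""
    # Invert the graph: module -> modules that depend on it.
    dependents = {}
    for module, deps in dependencies.items():
        for dep in deps:
            dependents.setdefault(dep, set()).add(module)
    # Breadth-first walk outwards from the changed module.
    affected = {changed}
    queue = deque([changed])
    while queue:
        for dependent in dependents.get(queue.popleft(), ()):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return affected

# Hypothetical code base: 'web' depends on 'services', which depends
# on 'domain'; 'reporting' also depends on 'domain'.
deps = {
    "web": {"services"},
    "services": {"domain"},
    "reporting": {"domain"},
    "domain": set(),
    "billing": set(),
}

# Refactoring 'domain' means retesting everything that reaches it...
print(sorted(affected_modules(deps, "domain")))
# ...but refactoring the isolated 'billing' needs only its own tests.
print(sorted(affected_modules(deps, "billing")))
```

The tests to run are then just the tests belonging to the affected modules - which is exactly the selection our tools don't make for us.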

Alas, tools like JUnit, NUnit, ReSharper, IntelliJ, Eclipse and their ilk aren't quite that smart.

And, having analysed dependencies in hundreds of code bases, I know that I can't be trusted to reliably identify dependent code - code that could be broken - without the aid of automated analysis.

So I choose to run all of my tests for all of the projects/packages that could possibly have been affected by the refactoring.

If your architecture has carefully and thoughtfully managed package dependencies (packages are cohesive and loosely coupled, you have avoided cyclic dependencies, and so on) then hopefully this won't mean you have to run thousands and thousands of tests for all of your packages.
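The cyclic dependencies part, at least, is mechanically checkable. Here's a small sketch - the package graphs are invented - of detecting a cycle with a depth-first search, which is all a dependency-analysis tool is really doing:

```python
def has_cycle(dependencies):
    """Detect whether a package dependency graph contains a cycle,
    using depth-first search with three colours."""
    WHITE, GREY, BLACK = 0, 1, 2  # unvisited, in progress, finished
    colour = {node: WHITE for node in dependencies}

    def visit(node):
        colour[node] = GREY
        for dep in dependencies.get(node, ()):
            if colour.get(dep) == GREY:   # back edge: we're inside a cycle
                return True
            if colour.get(dep) == WHITE and visit(dep):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and visit(n) for n in dependencies)

# A cleanly layered architecture has no cycles...
layered = {"web": {"domain"}, "domain": {"persistence"}, "persistence": set()}
print(has_cycle(layered))

# ...but let 'persistence' reach back up into 'domain' and you have one -
# now a change to either package can break the other.
tangled = {"web": {"domain"}, "domain": {"persistence"}, "persistence": {"domain"}}
print(has_cycle(tangled))
```

A cycle means every package in the loop is effectively one big package for testing purposes: touch any of it, retest all of it.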

Sadly, though, most package architectures I've seen have been pretty darn sucky, with inter-package dependencies running willy-nilly. So a small refactoring in your domain package might mean having to retest the entire solution.

And if your tests run slow - and, would you believe it, folk who create sucky package architectures also have a tendency to write slow unit tests - then you have a bit of a practical problem when it comes to refactoring.

This can create a vicious circle: you can't afford the time it takes to run the tests, so you refactor less (or refactor less safely), and refactoring less means your dependencies are probably going to get worse over time, compounding the risk and escalating the problem.

My advice is that you have to take a view on this. Yes, refactoring is going to be slow and it's going to cost you time. But how much time are you wasting dealing with the consequences of not doing enough refactoring, and/or not refactoring safely?

I've usually found that the least painful medium-to-long term option is to suck it up and run the tests. By all means, focus your initial efforts on refactorings that will speed up the tests. You've probably got them reading stuff out of files and connecting to databases, haven't you? A few judiciously placed test doubles - mocks, stubs, fakes and so on - could have your tests flying. And just how much of your application logic relies on database access? Naughty architect! In your bed! You should be separating those concerns so that, as much as possible, logic happens in plain objects and persistence is a totally independent set of operations.
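That separation is what makes the test doubles possible in the first place. Here's a minimal sketch - every name in it (InvoiceCalculator, OrderRepository and friends) is invented for illustration - of domain logic kept in a plain object, with persistence hidden behind a seam we can substitute in tests:

```python
class InvoiceCalculator:
    """Plain-object domain logic: no database, no files, nothing slow.
    It asks a repository for its data instead of fetching it itself."""
    def __init__(self, order_repository):
        self.orders = order_repository

    def total_owed(self, customer_id):
        return sum(o["amount"] for o in self.orders.find_by_customer(customer_id))


class FakeOrderRepository:
    """In-memory test double standing in for the real database-backed
    repository, so the test runs in microseconds instead of seconds."""
    def __init__(self, orders):
        self._orders = orders

    def find_by_customer(self, customer_id):
        return [o for o in self._orders if o["customer_id"] == customer_id]


# The test wires the logic to the fake - no connection string in sight.
fake = FakeOrderRepository([
    {"customer_id": 1, "amount": 100.0},
    {"customer_id": 1, "amount": 50.0},
    {"customer_id": 2, "amount": 999.0},
])
calculator = InvoiceCalculator(fake)
print(calculator.total_owed(1))
```

In production you'd hand InvoiceCalculator a repository that really does talk to the database; the domain logic neither knows nor cares.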

Anyway, the basic message here is that you can probably afford to spend more time running tests and devote more effort to refactoring than you think, because - just like the rest of us - you've greatly underestimated the cost you're already paying for skimping on these things.

If you're doing it right, over time your refactored tests will run quicker, and your unpicked package architecture will make it easier to select smaller subsets of your tests. Which means shorter test cycles, which means more refactorings, which means even faster tests and smaller test subsets. Etc etc. But you'll get none of this if you shy away from taking that initial hit.

And what will we do while we're waiting for our tests to run? Er - how about thinking about design, analysing the code, planning our next refactoring? Maybe forcing you to keep your hands off the code for a few minutes between each refactoring will turn out to be a good thing.

Or you could take up knitting, of course...
