August 31, 2016

...Learn TDD with Codemanship

Slow Running/Unmaintainable Automated Tests? Don't Ditch Them. Optimise Them.

It's easy for development teams to underestimate the effort they need to invest in optimising their automated tests for both maintainability and performance.

If your test code is difficult to change, your software is difficult to change. And if your tests take a long time to run - it's not unheard of for teams to have test suites that take hours - then you won't run them very often. Slow-running tests can be a blocker to Continuous Delivery, because Continuous Delivery requires Continuous Testing to be confident that the software is always shippable.

It's very tempting, when your tests are slow and/or difficult to change, to delete the source of your pain. But the reality is that these unpleasant side effects pale in comparison to the effects of not having automated tests.

We know from decades of experience that bugs are more likely to appear in code that is tested less well and less often. Ditching your automated tests opens the floodgates, and I've seen code bases rapidly deteriorate many times after teams throw away their test suites. I would much rather have a slow, clunky suite of tests to run overnight than no automated tests at all. The alternative is wilful ignorance, which doesn't make the bugs go away, regrettably.

Don't give in to this temptation. You'll end up jumping out of the frying pan and into the fire.

Instead, look at how the test code could be refactored to better accommodate change. In particular, focus on where the test code is directly coupled to the classes (or services, or UIs) under test. Time after time, I see massive amounts of duplicated interaction code. Refactoring this duplication so that interactions with the classes under test happen in one place can dramatically reduce the cost of change.
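Here's a minimal sketch of what that can look like, using JUnit and a made-up Basket class (the names are purely for illustration). Every test reaches the class under test through one creation method, so if Basket's constructor or add() method changes, only that one method needs to change - not every test:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class BasketTotalTests {

    // All interaction with the class under test goes through this one
    // method, so a change to Basket's API means a change in one place
    private Basket basketContaining(double... prices) {
        Basket basket = new Basket();
        for (double price : prices) {
            basket.add(new Item(price));
        }
        return basket;
    }

    @Test
    public void totalIsSumOfItemPrices() {
        assertEquals(35.0, basketContaining(10.0, 25.0).total(), 0.001);
    }

    @Test
    public void totalOfEmptyBasketIsZero() {
        assertEquals(0.0, basketContaining().total(), 0.001);
    }
}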

And if you think you have too many individual tests, parameterized tests are a criminally under-utilised tool for consolidating multiple test cases into a single test method. You can buy yourself quite staggering levels of test assurance with surprisingly little test code.
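For example, with JUnit 4's Parameterized runner, a whole table of test cases can drive a single test method (DiscountCalculator here is an invented example class):

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class DiscountTests {

    // Each row is a test case: spend, expected discount rate
    @Parameters(name = "spend of {0} earns a discount of {1}")
    public static Collection<Object[]> testCases() {
        return Arrays.asList(new Object[][] {
            { 50.0,  0.0  },
            { 100.0, 0.05 },
            { 500.0, 0.1  }
        });
    }

    private final double spend;
    private final double expectedDiscount;

    public DiscountTests(double spend, double expectedDiscount) {
        this.spend = spend;
        this.expectedDiscount = expectedDiscount;
    }

    // One test method covers every row of data above
    @Test
    public void discountRateDependsOnSpend() {
        assertEquals(expectedDiscount, new DiscountCalculator().rateFor(spend), 0.001);
    }
}

Adding another test case is then just a matter of adding another row of data.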

When tests run slow, that's usually down to external dependencies. System tests, for example, tend to bring all of the end-to-end architecture into play as they're executed: databases, files, web services, and so on.

A clean separation of concerns in your test code can help bring the right amount of horsepower to bear on the logic of your software. A system test that checks a calculation is done correctly using data from a web service doesn't really need that web service in order to do the check. Indeed, it doesn't need to be a system test at all. A fast-running unit test for the module that does that work will be just spiffy.

Fetching the data and doing the calculation with it are two separate concerns. Have a test for one that gets test data from a stub. And another test - a single test - that checks that the implementation of that stub's interface fetches the data correctly.
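Here's a sketch of that separation (the class and interface names are invented for illustration): the calculation depends on an abstraction, the fast-running test plugs in a stub, and only the real implementation of that interface would need a single integration test against the web service:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class AverageTemperatureTests {

    // The calculation depends on this abstraction, not on the web service
    interface TemperatureFeed {
        double[] readings();
    }

    // The calculation logic, separated from the concern of fetching data
    static class AverageTemperature {
        private final TemperatureFeed feed;

        AverageTemperature(TemperatureFeed feed) {
            this.feed = feed;
        }

        double calculate() {
            double[] readings = feed.readings();
            double total = 0;
            for (double reading : readings) {
                total += reading;
            }
            return readings.length == 0 ? 0 : total / readings.length;
        }
    }

    // Fast-running unit test: a stub supplies canned data, no web service required
    @Test
    public void averagesReadingsFromTheFeed() {
        TemperatureFeed stubFeed = () -> new double[] { 10.0, 20.0, 30.0 };
        assertEquals(20.0, new AverageTemperature(stubFeed).calculate(), 0.001);
    }

    // A single, separate integration test would check that the real
    // TemperatureFeed implementation fetches its data from the web service correctly
}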

We tend to find that interactions with external dependencies form a small minority of our software's logic, and should therefore only require a small minority of the tests. If the design of the software doesn't allow separation of concerns (stubbing, mocking, dummies), refactor it until it does. And, again, the mantra "Don't Repeat Yourself" can dramatically reduce the amount of integration code that needs to be tested. You don't need database connection code for every single entity class.
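One way to get there - a sketch, not a prescription - is a single shared piece of database plumbing that every entity's persistence code goes through, so that one class is the only thing needing integration tests against a real database:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.function.Function;

// One shared piece of connection-handling code, instead of the same
// plumbing repeated in every entity class. Only this needs integration testing.
public class Database {

    private final String url;

    public Database(String url) {
        this.url = url;
    }

    // Opens a connection, runs the work, and guarantees the connection is closed
    public <T> T withConnection(Function<Connection, T> work) throws SQLException {
        try (Connection connection = DriverManager.getConnection(url)) {
            return work.apply(connection);
        }
    }
}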

Of course, for a large test suite, this can take a long time. And too many teams - well, their managers, really - balk at the investment they need to make, choosing instead to either live with slow and unmaintainable tests for the lifetime of the software (which is typically a far greater cost), or worse still, to ditch their automated tests.

Because teams who report "we deleted our tests, and the going got easier" are really only reporting temporary relief.






