December 18, 2014
A Simple "Trick" For Getting More Bang From Your Unit Test Bucks

I'm finishing off my year with a little bit of R&D.
My personal project is to carve out a practical progression from vanilla TDD - good old-fashioned one-unit-test-per-test-case triangulation, generalising the test code as I go - up to more heavyweight kinds of unit testing that can potentially offer much higher assurance than TDD alone gives us.
Typically, if I have two or more unit tests that are essentially the same test, but with different inputs and expected results, I'd refactor them into a single parameterised test, with unique test data for each of the original tests.
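To make that concrete, here's a minimal sketch of the kind of refactoring I mean, using JUnitParams. The `applyDiscount` method and the test data are hypothetical examples of mine, not from any real code base.

```java
import static org.junit.Assert.assertEquals;

import junitparams.JUnitParamsRunner;
import junitparams.Parameters;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(JUnitParamsRunner.class)
public class DiscountTests {

    // Hypothetical method under test, inlined to keep the sketch self-contained
    static double applyDiscount(double price, double percent) {
        return price - (price * percent / 100.0);
    }

    // Three near-identical one-case tests collapse into a single
    // parameterised test, with one row of test data per original test
    @Test
    @Parameters({
        "100.0, 0.0, 100.0",
        "100.0, 10.0, 90.0",
        "80.0, 25.0, 60.0"
    })
    public void appliesPercentageDiscount(double price, double percent, double expected) {
        assertEquals(expected, applyDiscount(price, percent), 0.001);
    }
}
```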
So far, so normal.
But I could go further and turn these parameterised tests into something that a tool like, say, JCheck could exploit to test against a potentially massive number of unique test data sets, randomly generated by the tool.
And in playing with this, I've discovered a useful little "trick" you can do in JUnit that allows us to have our cake and eat it - standard parameterised tests using JUnitParams when we need them, and the same tests run with JCheck when we want that.
Just add the annotations for both frameworks as desired/required, and then control which set is applied by sub-classing the test fixture and using @RunWith() on each subclass to specify whether the tests will run as JUnitParams tests or as JCheck tests.
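A rough sketch of that arrangement, under some assumptions: the fixture and method names are hypothetical, the JUnitParams side uses that library's documented annotations, and I'm assuming JCheck's runner class is `org.jcheck.JCheck` and that it fills test method parameters with random values while ignoring annotations it doesn't recognise - check the JCheck docs before leaning on that.

```java
import static org.junit.Assert.assertTrue;

import junitparams.JUnitParamsRunner;
import junitparams.Parameters;
import org.junit.Test;
import org.junit.runner.RunWith;

// Base fixture: carries the annotations for both frameworks,
// but deliberately names no runner of its own
public class SquareTests {

    @Test
    @Parameters({"0", "1", "-3", "46340"}) // read by JUnitParams; assumed ignored by JCheck
    public void squareIsNeverNegative(int input) {
        assertTrue((long) input * input >= 0);
    }
}

// Run the same tests with the hand-picked data sets...
@RunWith(JUnitParamsRunner.class)
class SquareTestsWithJUnitParams extends SquareTests {}

// ...or with randomly generated ones (JCheck runner class name assumed)
@RunWith(org.jcheck.JCheck.class)
class SquareTestsWithJCheck extends SquareTests {}
```

In real code each public test class would live in its own file; the point is only that the subclasses contain nothing but a @RunWith() annotation, so the same test logic serves both frameworks.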
Simples. But potentially very powerfuls.
Two things to note before I sign off: firstly, it becomes necessary to generalise our test assertions so that expected results are calculated rather than given as a parameter of each test case. This comes at a price - duplication of implementation code in particular - but I believe, when we need to go this extra mile or three, it can be a price worth paying. Some pitfalls with calculated expected results, though, are worth avoiding if you can. Most importantly, there's a risk in using the same algorithm as the implementation: if the implementation's design is wrong, then using the same logic to calculate the expected answer is folly. Try to find a different way of computing what the answers should be.
The second thing is that it's practically possible to get from vanilla tests to parameterised tests to JCheck tests in small, safe steps (i.e., refactoring). Don't be tempted to jump into the deep end and start rewriting the tests in one big, nasty chunk. It's quite a dance, especially with some of the "features" of JUnitParams and the way we feed it non-primitive parameter values like objects and collections, but it can be done with a bit of fancy refactoring footwork, I'm finding.