February 10, 2013

...Learn TDD with Codemanship

Parameterising Unit Tests Without Framework Support

Pairing with my apprentice-to-be, Will, on Friday highlighted a problem with unit testing frameworks: not all of them offer built-in support for parameterised tests.

This isn't a major stumbling block to parameterising our tests. Back in the days when most frameworks didn't support them, we just used parameterised methods and called them from our tests (e.g., by looping through a list of test case data).
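Suppose, for example, we'd started out with a fixture full of near-identical tests. A minimal sketch in Python's unittest (assuming a fib() function in a hypothetical fibonacci module under test) might look like this:

import unittest

from fibonacci import fib  # hypothetical module and function under test


class FibonacciTests(unittest.TestCase):

    # One test method per example - the duplication soon adds up
    def test_first_fibonacci_number_is_zero(self):
        self.assertEqual(0, fib(0))

    def test_second_fibonacci_number_is_one(self):
        self.assertEqual(1, fib(1))

    def test_third_fibonacci_number_is_one(self):
        self.assertEqual(1, fib(2))

    def test_fourth_fibonacci_number_is_two(self):
        self.assertEqual(2, fib(3))


if __name__ == '__main__':
    unittest.main()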



If we wanted to refactor the test fixture above into a single parameterised test without using built-in framework support, we would simply extract the body of one of the tests into its own method and introduce parameters for the sequence index and expected result.
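A sketch of what that refactoring might look like - the extracted method (I've called it check_fibonacci_number(), a name of my own choosing) takes the sequence index and the expected result:

import unittest

from fibonacci import fib  # hypothetical module and function under test


class FibonacciTests(unittest.TestCase):

    def test_fibonacci_sequence(self):
        self.check_fibonacci_number(0, expected=0)
        self.check_fibonacci_number(1, expected=1)
        self.check_fibonacci_number(2, expected=1)
        self.check_fibonacci_number(3, expected=2)
        self.check_fibonacci_number(4, expected=3)

    # The extracted test body, parameterised by sequence index and expected result.
    # The name doesn't start with "test_", so unittest won't run it on its own.
    def check_fibonacci_number(self, index, expected):
        self.assertEqual(expected, fib(index))


if __name__ == '__main__':
    unittest.main()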



It's as easy as that, really. Well, almost.

What happens if one of our test cases fails? With built-in support for parameterised tests, the framework would report which test case failed. But here, we'd only get a report of the assertion that failed, and we'd have to work backwards from that to deduce which test case it was. And if multiple test cases expect the same result (e.g., the second and third Fibonacci numbers should both be 1), or if we're asserting that some condition is true rather than comparing expected and actual outcomes, it may be ambiguous which test case actually failed.

So we can add a little extra information when an assertion fails to make it clear exactly which test case we're talking about:
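For example, unittest's assertEqual() accepts an optional msg argument, so the check method from the sketch above could name the failing case in its failure message:

    def check_fibonacci_number(self, index, expected):
        # The msg argument identifies the failing test case in the report
        self.assertEqual(expected, fib(index),
                         msg="fib(%d) should be %d" % (index, expected))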



Another drawback with hand-rolled parameterised tests is that, with most unit test frameworks, when an assertion fails, the test stops executing. So if we wrap up the execution of multiple test cases in one test method and the first case fails, we'll get no results for the subsequent cases.

To overcome this, we need to go further. One solution would be, instead of calling assert() functions, to remember the result of each check and keep a rolling score. If all the cases come up green, then we call pass() at the end. If any come up red, we call fail() and report all the failing test cases when we do. At this point, of course, we'd be edging closer and closer to writing our own parameterised unit testing framework.
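A rough sketch of that rolling-score idea, again using the hypothetical fib() function; unittest has no explicit pass(), so a test that never calls fail() simply passes:

    def test_fibonacci_sequence(self):
        cases = [(0, 0), (1, 1), (2, 1), (3, 2), (4, 3), (5, 5)]
        failures = []
        for index, expected in cases:
            actual = fib(index)
            # Record the failure instead of asserting, so later cases still run
            if actual != expected:
                failures.append("fib(%d): expected %d, got %d"
                                % (index, expected, actual))
        # Report every failing case in one go
        if failures:
            self.fail("%d failing case(s): %s"
                      % (len(failures), "; ".join(failures)))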

Fortunately, in JUnit, this isn't necessary. But programmers working with xUnit implementations that don't support parameterised tests may have to do it. My advice is, if you do, then consider adding it to the framework, too.

Lastly, Will has raised a good question: at what point would we consider parameterising our tests?

When we paired, we did it the old-fashioned way (working in Python) and then - as an exercise - I asked him to parameterise the tests after we'd completed the TDD exercise.

In the real world, I might have done it much earlier, when I could see the duplication that was emerging, and knowing that there'd be several more similar tests coming up.

You need to make a judgement call on whether to expend that effort or live with a bit of duplication. The more duplication there is, the easier that judgement call gets, but also the harder the refactoring gets. My tendency is to refactor early - often when I have just 2-3 examples. Look ahead and ask "how many more examples might there be that follow this pattern?"






