September 21, 2013

...Learn TDD with Codemanship

Duplicated Test Code & High Coupling

Just a quick post about the relationship between duplication in unit test code and high coupling between unit tests and implementation classes.

Let's consider the example of FizzBuzz:

Here, I've "forgotten" to refactor the test code to remove some obvious duplication. In doing so, I've duplicated the knowledge of how to create a FizzBuzzer and get it to fizzBuzz integers for me in multiple places.
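The tests appeared as an image in the original post; here's a sketch of what the duplicated version might look like. The class name FizzBuzzer, the method names, and the plain-Java test harness (rather than JUnit) are all my guesses, chosen so the example is self-contained:

```java
// Minimal FizzBuzzer so the tests below compile (assumed API).
class FizzBuzzer {
    String fizzBuzz(int number) {
        if (number % 15 == 0) return "FizzBuzz";
        if (number % 3 == 0) return "Fizz";
        if (number % 5 == 0) return "Buzz";
        return Integer.toString(number);
    }
}

public class DuplicatedFizzBuzzTests {

    // Every test repeats the knowledge of how to create a FizzBuzzer
    // and ask it to fizzBuzz an integer. Change that API and all four
    // tests break.
    static void multiplesOfThreeAreFizz() {
        FizzBuzzer fizzBuzzer = new FizzBuzzer();
        assertEquals("Fizz", fizzBuzzer.fizzBuzz(3));
    }

    static void multiplesOfFiveAreBuzz() {
        FizzBuzzer fizzBuzzer = new FizzBuzzer();
        assertEquals("Buzz", fizzBuzzer.fizzBuzz(5));
    }

    static void multiplesOfThreeAndFiveAreFizzBuzz() {
        FizzBuzzer fizzBuzzer = new FizzBuzzer();
        assertEquals("FizzBuzz", fizzBuzzer.fizzBuzz(15));
    }

    static void otherNumbersAreReturnedAsStrings() {
        FizzBuzzer fizzBuzzer = new FizzBuzzer();
        assertEquals("4", fizzBuzzer.fizzBuzz(4));
    }

    static void assertEquals(String expected, String actual) {
        if (!expected.equals(actual))
            throw new AssertionError("expected " + expected + ", got " + actual);
    }

    public static void main(String[] args) {
        multiplesOfThreeAreFizz();
        multiplesOfFiveAreBuzz();
        multiplesOfThreeAndFiveAreFizzBuzz();
        otherNumbersAreReturnedAsStrings();
        System.out.println("All tests passed");
    }
}
```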

This means that if the way we use the FizzBuzzer changes, multiple tests will break.

If I refactor the duplication away (and we'll touch on different strategies for that later), then I can isolate that knowledge so that it only needs to change in one place. E.g.
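The refactored version was also shown as an image; one possible shape for it is sketched below, with the knowledge of how to create a FizzBuzzer and invoke it extracted into a single helper method. Names are my guesses, and it's plain Java rather than JUnit so it stands alone:

```java
// Minimal FizzBuzzer so the tests below compile (assumed API).
class FizzBuzzer {
    String fizzBuzz(int number) {
        if (number % 15 == 0) return "FizzBuzz";
        if (number % 3 == 0) return "Fizz";
        if (number % 5 == 0) return "Buzz";
        return Integer.toString(number);
    }
}

public class RefactoredFizzBuzzTests {

    // The knowledge of how to create a FizzBuzzer and get it to
    // fizzBuzz an integer now lives in exactly one place.
    static String fizzBuzz(int number) {
        return new FizzBuzzer().fizzBuzz(number);
    }

    static void multiplesOfThreeAreFizz() {
        assertEquals("Fizz", fizzBuzz(3));
    }

    static void multiplesOfFiveAreBuzz() {
        assertEquals("Buzz", fizzBuzz(5));
    }

    static void multiplesOfThreeAndFiveAreFizzBuzz() {
        assertEquals("FizzBuzz", fizzBuzz(15));
    }

    static void otherNumbersAreReturnedAsStrings() {
        assertEquals("4", fizzBuzz(4));
    }

    static void assertEquals(String expected, String actual) {
        if (!expected.equals(actual))
            throw new AssertionError("expected " + expected + ", got " + actual);
    }

    public static void main(String[] args) {
        multiplesOfThreeAreFizz();
        multiplesOfFiveAreBuzz();
        multiplesOfThreeAndFiveAreFizzBuzz();
        otherNumbersAreReturnedAsStrings();
        System.out.println("All tests passed");
    }
}
```

Now, if the FizzBuzzer's constructor or method signature changes, only the helper needs to change.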

No doubt, some of you will rightly be thinking "I'd use a parameterised test", but the effect is exactly the same: less duplication means less duplicated coupling.

On a side note, how many parameterised tests would you use in this example?

I'd use three. Answers as to why on a postcard, please.

As an addendum, a short discussion about granularity and "level" of tests:

Some people say that they "don't test classes" but instead "test behaviour". In languages like Java it's not possible to test behaviour without knowing where that behaviour's coming from. So you end up writing code to access said behaviours, whether you like it or not.

Other folk say "I only write acceptance tests" and everything's tested either at the system boundary or just below it. The consequences of doing this are apparent in thousands of projects. When tests fail, you have to go through the call stack and figure out what went wrong and where.

There's actually no way of completely decoupling tests from the things they are testing. All programming languages have mechanisms for packaging code into discrete units (e.g., methods and classes), and to access those features we must know where they are. There's no magic we can apply that makes tests completely implementation-independent.

So I still strongly recommend the rule of thumb that a unit test should have only one reason to fail - it's the reason I haven't needed to use a debugger in years. It also has implications for the "composability" of your software, being very closely related to the Single Responsibility Principle. That, plus a well-developed refactoring muscle, tends to lead to tests that are very useful and loosely coupled to the implementation design.


Kevlin Henney made a good point on That Twitter about the number of parameterised unit tests he would use in this example.

I've done FizzBuzz so many times that I now tend not to treat numbers divisible by both 3 and 5 as a special case. I'm possibly being swayed by my own implementation:
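The implementation itself was an image in the original post; the kind of implementation being described - one where "FizzBuzz" falls out of concatenation rather than being a special case - might look something like this sketch (not the original code):

```java
public class FizzBuzzer {

    public String fizzBuzz(int number) {
        // No special case for multiples of both 3 and 5: "FizzBuzz"
        // emerges from the concatenation, with Fizz before Buzz.
        String result = "";
        if (number % 3 == 0) result += "Fizz";
        if (number % 5 == 0) result += "Buzz";
        return result.isEmpty() ? Integer.toString(number) : result;
    }

    public static void main(String[] args) {
        FizzBuzzer fizzBuzzer = new FizzBuzzer();
        for (int i = 1; i <= 15; i++)
            System.out.println(fizzBuzzer.fizzBuzz(i));
    }
}
```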

But summing this up in 3 parameterised tests - implying 3 variants of behaviour with no order of precedence - would be potentially misleading, so for completeness I might now plump for an extra parameterised test for numbers divisible by both 3 and 5, to illustrate explicitly that the Fizz comes before the Buzz in those cases.

Explicitly, I would have the following parameterised tests, then:

1. Integers that are divisible by 3 should be substituted with "Fizz"

2. Integers divisible by 5 should be substituted with "Buzz"

3. Integers divisible by both 3 and 5 should be substituted with "FizzBuzz"

4. Otherwise, just return the integer as a string

(My original 3 were: divisible by 3 starts with "Fizz", divisible by 5 ends with "Buzz", otherwise integer as string.)
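Those four parameterised tests could be sketched as data-driven loops - plain Java standing in for a JUnit parameterised runner here, with my own guesses at names, inputs, and the underlying implementation:

```java
// Assumed implementation: concatenation, no special case for 15.
class FizzBuzzer {
    String fizzBuzz(int number) {
        String result = "";
        if (number % 3 == 0) result += "Fizz";
        if (number % 5 == 0) result += "Buzz";
        return result.isEmpty() ? Integer.toString(number) : result;
    }
}

public class ParameterisedFizzBuzzTests {

    static final FizzBuzzer fizzBuzzer = new FizzBuzzer();

    // One parameterised case: {input, expected}.
    static void check(int input, String expected) {
        String actual = fizzBuzzer.fizzBuzz(input);
        if (!actual.equals(expected))
            throw new AssertionError(input + ": expected " + expected + ", got " + actual);
    }

    // 1. Integers divisible by 3 are substituted with "Fizz"
    static void integersDivisibleByThree() {
        for (int n : new int[] {3, 6, 9, 12}) check(n, "Fizz");
    }

    // 2. Integers divisible by 5 are substituted with "Buzz"
    static void integersDivisibleByFive() {
        for (int n : new int[] {5, 10, 20, 25}) check(n, "Buzz");
    }

    // 3. Integers divisible by 3 and 5 are substituted with "FizzBuzz"
    //    (makes the Fizz-before-Buzz precedence explicit)
    static void integersDivisibleByThreeAndFive() {
        for (int n : new int[] {15, 30, 45}) check(n, "FizzBuzz");
    }

    // 4. Otherwise the integer is returned as a string
    static void otherIntegers() {
        for (int n : new int[] {1, 2, 4, 7}) check(n, Integer.toString(n));
    }

    public static void main(String[] args) {
        integersDivisibleByThree();
        integersDivisibleByFive();
        integersDivisibleByThreeAndFive();
        otherIntegers();
        System.out.println("All tests passed");
    }
}
```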

Posted on September 21, 2013