January 1, 2015

...Learn TDD with Codemanship

Polymorphic Testing & A Pattern For Backwards Compatibility Checking

Good morning, and Alka Seltzer!

To kick off the new year, I'm thinking about advanced unit testing tools and techniques, which I've spent the last couple of months developing a training workshop on.

In particular, this morning I'm thinking about applications of polymorphic testing, which a client of mine has been experimenting with to help them better ensure backwards compatibility on new releases of their libraries.

I'm very lucky to have been introduced to this early in my career. Having seen many, many teams stung by backwards compatibility gaffes in the libraries they use - and extrapolating the cost of millions of developers having to change their code to make it work again - it surprises me that it's not more common.

Polymorphic testing very simply means writing tests that bind to abstractions - essentially, dependency injection where the dependency being injected is the object under test.

There are a number of reasons we might want to do this; e.g. testing that subclasses satisfy the contracts of their super-types (Liskov Substitution), testing that implementations satisfy the contracts implied by their interfaces (e.g., 3rd party device drivers), and so on.

Among the applications of polymorphic testing, though, backwards compatibility is usually overlooked. Indeed, let's face it: backwards compatibility is usually overlooked anyway.

Implementing a polymorphic test using something like JUnit or NUnit is pretty straightforward.
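The original code snippet didn't survive the trip to this page, so here's a hedged reconstruction in plain Java. The names - Animal, makeSound(), createAnimal() - are my assumptions, and I've used a plain method with an AssertionError rather than a JUnit @Test method so the sketch stands alone:

```java
// Hypothetical reconstruction of the abstract polymorphic test.
// The test binds only to the Animal abstraction; concrete subclasses
// inject the implementation under test via the createAnimal() factory.
interface Animal {
    String makeSound();
}

abstract class AnimalTest {
    // Dependency injection point: subclasses supply the object under test.
    protected abstract Animal createAnimal();

    // What sound the injected implementation is contracted to make.
    protected abstract String expectedSound();

    // In JUnit this would be an @Test method on the abstract class,
    // inherited and run by every concrete subclass.
    public void testMakesExpectedSound() {
        Animal animal = createAnimal();
        if (!expectedSound().equals(animal.makeSound())) {
            throw new AssertionError("Expected " + expectedSound()
                    + " but got " + animal.makeSound());
        }
    }
}
```

The key design point is that the test method itself never names a concrete class; everything concrete comes in through the factory method.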



In this example, I've written an abstract test class that allows me to create concrete test subclasses that inject the correct implementation of Animal into the test. e.g., for a Tiger:
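A hedged sketch of what that concrete Tiger test might have looked like (the original screenshot is lost; Tiger and its "Roar" are guesses, and the assumed Animal/AnimalTest pair is repeated here so the snippet compiles on its own):

```java
// Assumed abstraction and abstract test, repeated for self-containment.
interface Animal {
    String makeSound();
}

abstract class AnimalTest {
    protected abstract Animal createAnimal();
    protected abstract String expectedSound();

    public void testMakesExpectedSound() {
        Animal animal = createAnimal();
        if (!expectedSound().equals(animal.makeSound()))
            throw new AssertionError("got " + animal.makeSound());
    }
}

// A concrete implementation of the abstraction...
class Tiger implements Animal {
    public String makeSound() { return "Roar"; }
}

// ...and the concrete test subclass that injects it as the object
// under test. All the actual test logic is inherited.
class TigerTest extends AnimalTest {
    protected Animal createAnimal() { return new Tiger(); }
    protected String expectedSound() { return "Roar"; }
}
```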



Or for a Snake:
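And, in the same hedged spirit, a sketch of the Snake version (again, the "Hiss" and all the names are assumptions, with the assumed abstraction repeated for self-containment):

```java
// Assumed abstraction and abstract test, repeated for self-containment.
interface Animal {
    String makeSound();
}

abstract class AnimalTest {
    protected abstract Animal createAnimal();
    protected abstract String expectedSound();

    public void testMakesExpectedSound() {
        Animal animal = createAnimal();
        if (!expectedSound().equals(animal.makeSound()))
            throw new AssertionError("got " + animal.makeSound());
    }
}

class Snake implements Animal {
    public String makeSound() { return "Hiss"; }
}

// Same inherited test, different injected implementation.
class SnakeTest extends AnimalTest {
    protected Animal createAnimal() { return new Snake(); }
    protected String expectedSound() { return "Hiss"; }
}
```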



Now, a very useful thing framework developers can do - which, of course, almost none of them do - is to ship these kinds of polymorphic tests with their interfaces, so that when we implement those interfaces in our own code (e.g., a JEE interface) we can test our implementations against the base contracts the framework expects. Regardless of what your implementation specifically does, you can assure yourself that it correctly plays the role of a "thing that implements that interface".

OS developers could have saved themselves a lot of heartache if they'd done this for device drivers decades ago. But that's a different rant for another day...

It ought to be standard practice to write polymorphic tests for abstract types under these sorts of circumstances (i.e., when you're expecting other developers to extend or implement them).

Another thing that ought to be standard practice is to write polymorphic tests - tests where the objects being tested are dependency-injected into the test code - for APIs.

Here there's a bit more that goes on under the hood, but the basic thinking is that it should be possible to build and run one version of the tests against a later version of the implementation code.

There are a number of different ways of doing this, and a short blog post can't really cover the fine detail, but here's a simple pattern to get you started.

In your build script - you have a build script, right? - have an optional target for getting the previous version of the tests from your VCS and building those against the latest version of the implementation code.

Mind truly boggled? It's a bit of sleight of hand, really. In your build script, you check out and merge two versions of the code: the latest version of the source code, and the previous release version of the tests (the test code that corresponds to the last release of your software).

It's for exactly this reason that I tell people to keep their source and test code cleanly separate. Just in case you were wondering.

So, in your backwards compatibility testing build, you check out the latest from SRC, and the last release version from TEST and merge it all into a single project to be built and tested.
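That checkout-and-merge step might be sketched like this in a shell-based build, assuming Git, a src/test split at the repository root, and a tag marking the last release. Every path, tag and tool name here is an assumption; your VCS and build tool will dictate the details:

```shell
#!/bin/sh
# Backwards-compatibility build: newest source, last release's tests.
# LAST_RELEASE is assumed to be a tag marking the previous release.
LAST_RELEASE=v1.2.0
STAGING=compat-build

rm -rf "$STAGING"
mkdir -p "$STAGING"

# Latest implementation code from the current branch...
git archive HEAD src | tar -x -C "$STAGING"

# ...merged with the test code as it was at the last release.
git archive "$LAST_RELEASE" test | tar -x -C "$STAGING"

# Build and run only the API-level tests against the merged tree.
# (The build tool and the test selection mechanism are project-specific.)
cd "$STAGING" && ./build.sh api-tests
```

If the compile step fails here, you've broken syntactic compatibility; if the tests fail, you've broken semantic compatibility.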

If your old test code won't compile against the new source code, then your new code has fallen at the first hurdle.

If it does compile, then you should be able to run the unit tests - but only the unit tests written against the public API. Ongoing refactoring, and a preference for using internal and private classes wherever possible, mean that the internals of your source code may well have changed - hopefully for the better. This is fine, just as long as no change is visible to old client code looking in from the outside.

And for that reason, I add an addendum to my "keep your source and test code separate" mantra: keep API-level tests and internal unit tests separate too, so that we can easily distinguish them and run them separately in a build for just these purposes.

Now, the astute among you will ponder "why do these backwards compatibility tests need to be polymorphic if we're just compiling them against new versions of the code?" Good thinking, Batman!

Hark back to our OO design principles: the Open-Closed Principle (OCP) encourages us to extend our software from one release to the next not by modifying classes, but by extending them. This is generally a good idea, because it assures binary compatibility between releases. But syntactic coupling is only one of our concerns when it comes to backwards compatibility. We must also concern ourselves with semantic coupling, which brings us back to Liskov Substitution. These new subclasses must satisfy the contracts of their super-classes.
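To make that concrete, here's a hedged Java sketch of the idea: a later release extends the assumed Tiger from the earlier examples with a new subclass rather than modifying it, and the old polymorphic test - binding only to the abstraction - is pointed at the new class to check the contract still holds. All names beyond Animal/Tiger are invented for illustration:

```java
// Assumed abstraction and release-1 polymorphic test, repeated
// for self-containment.
interface Animal {
    String makeSound();
}

abstract class AnimalTest {
    protected abstract Animal createAnimal();
    protected abstract String expectedSound();

    public void testMakesExpectedSound() {
        Animal animal = createAnimal();
        if (!expectedSound().equals(animal.makeSound()))
            throw new AssertionError("got " + animal.makeSound());
    }
}

// Release 1 implementation.
class Tiger implements Animal {
    public String makeSound() { return "Roar"; }
}

// Release 2 extends rather than modifies (Open-Closed), and must keep
// the super-class contract (Liskov Substitution): same sound, new extras.
class SiberianTiger extends Tiger {
    public boolean likesSnow() { return true; }
}

// The release-1 style of test, injected with the release-2 class,
// should still pass unchanged.
class SiberianTigerCompatibilityTest extends AnimalTest {
    protected Animal createAnimal() { return new SiberianTiger(); }
    protected String expectedSound() { return "Roar"; }
}
```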

And it is the case that well-designed libraries tend to mostly expose abstractions so that, from one version to the next, client code shouldn't need to be recompiled and re-tested at all.

So why not rely on the basic kind of polymorphic testing illustrated above? It's all a question of living in the real world. In the real world, we don't always stick doggedly to the principles of Open-Closed and Liskov Substitution. We might wish to, of course. We might try. But generally we don't.

So, this is as much an aide-mémoire as anything else. The pattern is that all tests for public types should be polymorphic, and kept separate from the other unit tests - including the concrete implementations of those polymorphic tests, which, as they rely on internal details, really belong with whatever version of the code is current.

I encourage you to make this distinction clear, both in the style of unit test and in the organisation of the test code. It should be possible to build and test these polymorphic API tests against not just the code that was current when they were first written (including test implementations), but against any subsequent versions of the code, and in particular release versions.

Think of it as an extension of Liskov Substitution: an instance of any class can be substituted with an instance of any later version of that class.

So, in some cases our API-level tests will be testing new implementations of types from previous versions. And in some cases, they will be testing modifications of previous versions. And, from the tests' external perspective, it should be impossible to tell which is which.

Polymorphic testing gifts us the "flex points" to allow that to happen.


So there you have it: polymorphic testing plus a bit of build script chicanery can allow us to run old tests on new code, just as long as those tests bind strictly to public types.




