April 15, 2016


Compositional Coverage

A while back, I blogged about how the real goal of OO design principles is composability of software - the ability to wire together different implementations of the same abstractions to make our code do different stuff (or the same stuff, differently).

I threw in an example of an Application that could be composed of different combinations of database, external information service, GUI and reporting output.
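To make that concrete, here's a minimal sketch of the kind of design I mean, assuming Java and plain constructor injection (the interface and class names are illustrative, not taken from the original diagram):

// Four abstractions the Application is composed from (illustrative names)
interface Database  { void save(String record); }
interface StockData { double getPrice(String stock); }
interface View      { void display(double price); }
interface Output    { void write(String report); }

// The Application depends only on the abstractions; concrete implementations
// are injected through the constructor, so any combination can be wired in
class Application {

    private final Database database;
    private final StockData stockData;
    private final View view;
    private final Output output;

    Application(Database database, StockData stockData, View view, Output output) {
        this.database = database;
        this.stockData = stockData;
        this.view = view;
        this.output = output;
    }

    double calculateTradePrice(String stock, int quantity) {
        double price = stockData.getPrice(stock) * quantity;
        database.save(stock + "," + quantity + "," + price);
        view.display(price);
        output.write("Trade: " + stock + " x " + quantity + " = " + price);
        return price;
    }
}

With, say, three implementations of each abstraction (an Oracle, MySQL or Mongo Database; Reuters, Bloomberg or Yahoo Stock Data; and so on), the composition root can wire up any combination without the Application knowing or caring.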



This example design offers us 81 unique possible combinations of Database, Stock Data, View and Output for our application - three implementations of each of the four abstractions gives 3 x 3 x 3 x 3 = 81. e.g., a Web GUI with an Oracle database, getting stock data from Reuters and writing reports to Excel files.

A few people who discussed the post with me had concerns, though. Typically, in software, more combinations means more ways for our software to be wrong. And they're quite right. How do we assure ourselves that every one of the possible combinations of components will work as a complete whole?

A way to get that assurance would be to test all of the combinations. Laborious, potentially. Who wants to write 81 integration tests? Not me, that's for sure.

Thankfully, parameterised testing, with an extra combinatorial twist, can come to the rescue. Here's a simple "smoke test" for our theoretical design above:
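Something along these lines, assuming JUnit 4's Parameterized runner and the illustrative types sketched above (the concrete implementation names are made up for the example):

import static org.junit.Assert.assertTrue;

import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class ApplicationSmokeTest {

    private final Database database;
    private final StockData stockData;
    private final View view;
    private final Output output;

    public ApplicationSmokeTest(Database database, StockData stockData, View view, Output output) {
        this.database = database;
        this.stockData = stockData;
        this.view = view;
        this.output = output;
    }

    @Parameters
    public static Collection<Object[]> combinations() {
        // AllCombinations (sketched below) returns one {Database, StockData, View, Output}
        // array per combination - 81 in total for three candidates in each slot
        return AllCombinations.of(
            new Object[] { new OracleDatabase(), new MySqlDatabase(), new MongoDatabase() },
            new Object[] { new ReutersStockData(), new BloombergStockData(), new YahooStockData() },
            new Object[] { new WebView(), new DesktopView(), new MobileView() },
            new Object[] { new ExcelOutput(), new PdfOutput(), new CsvOutput() });
    }

    @Test
    public void turnTheKeyInTheIgnition() {
        // Bolt the components together and check the headline calculation still works
        Application app = new Application(database, stockData, view, output);
        assertTrue(app.calculateTradePrice("XYZ", 100) > 0);
    }
}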



This parameterised test accepts one implementation of each kind of component as a parameter, which it plugs into the Application through the constructor. I then use a testing utility I knocked up to generate the 81 possible combinations (the code for which can be found here - provided with no warranty, as it was just a spike).
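That utility boils down to a Cartesian product over the candidate implementations for each "slot". A rough sketch of the idea (not the actual spike - see the linked code for that):

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Rough sketch of a combination-generating utility: recursively builds the
// Cartesian product of the candidate implementations for each component slot
class AllCombinations {

    static Collection<Object[]> of(Object[]... slots) {
        List<Object[]> combinations = new ArrayList<Object[]>();
        collect(slots, 0, new Object[slots.length], combinations);
        return combinations;
    }

    private static void collect(Object[][] slots, int slot, Object[] current, List<Object[]> results) {
        if (slot == slots.length) {
            results.add(current.clone()); // one complete combination
            return;
        }
        for (Object candidate : slots[slot]) {
            current[slot] = candidate;
            collect(slots, slot + 1, current, results);
        }
    }
}

Four slots with three candidates each gives 3 x 3 x 3 x 3 = 81 combinations, and the Parameterized runner turns each one into a separate test run.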

When I run the test, it checks the trade price calculation using every combination of components. Think of it like that final test we might do for a car after we've checked that all the individual components work correctly - when we bolt them all together and turn the key in the ignition, does it go?




The term I'm using for the proportion of the possible combinations of components we've tested is compositional coverage. In this example, I've achieved 100% compositional coverage, as every possible combination is tested.

Of course, this is a dummy example. The components don't really do anything. But I've simulated the possible cost of integration tests by building in a time delay, to illustrate that these ain't your usual fast-running unit tests. In our testing pyramid, these kinds of tests would be near the top, just below acceptance and system tests. We wouldn't run them after, say, every refactoring, because they'd be too slow. But we might run them a few times a day.

More complex architectures may generate thousands of possible combinations of components, and lead to integration tests (or "composition tests") that take hours to run. In these situations, we could probably buy ourselves pretty decent compositional coverage by doing pairwise combinations (and, yes, the testing utility can do that, too).

Changing that test to use pairwise combinations reduces the number of tests run to just 9.
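The nine comes from a standard result in pairwise testing: for four components with three candidates each, nine rows are enough to cover every pairing of any two choices. One way to see it is the classic orthogonal-array construction (again just an illustration - the utility's own algorithm may differ):

import java.util.ArrayList;
import java.util.List;

// Sketch of an all-pairs selection for four slots with three candidates each.
// Row (i, j, (i+j) mod 3, (i+2j) mod 3) for i, j in 0..2 gives 9 rows in which
// every pair of columns takes every pair of values at least once.
class PairwiseCombinations {

    static List<int[]> indices() {
        List<int[]> rows = new ArrayList<int[]>();
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++) {
                rows.add(new int[] { i, j, (i + j) % 3, (i + 2 * j) % 3 });
            }
        }
        return rows;
    }
}

Each row picks an index into the candidate lists for Database, Stock Data, View and Output, so the test still exercises every pairing of component choices while running just nine compositions instead of 81.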





