March 1, 2016
Test-driving "Invisible Work"

Before I finally go to bed, just a few words that didn't quite fit into a tweet about invisible work in test-driven development.
If you've been on my TDD workshop, you'll have experienced how we can drive out a software design directly from customer tests: identify the work that needs to get done (the outcomes in the "Then" clause of a BDD test, for example), then assign responsibility for each piece of work to the module - or object, or function, or microservice, or whatever - that has the data required to do it.
It's deceptively easy. List the jobs that need doing, assign each job to wherever the data it needs lives, then figure out how the modules doing the work will need to interact to coordinate it end-to-end.
Not so easy, though, is when some of the work isn't visible from the outside. Binary search is a classic case in point. Stuff has to happen in a binary search that has no direct external signature. The search results are the same. The time taken may be much shorter, but there are many potentially efficient search algorithms that might satisfy our performance requirement. So, yes, we can write a test that fails if searching takes too long, but that doesn't tell us how our design should perform the search. It doesn't tell us to take the middle item, check if it matches, and - if it doesn't - discard the half that's too low or too high and repeat that process until we find a match (or run out of items to check).
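To make the point concrete, here's a minimal Python sketch of that halving process. The function name and signature are my own illustration; the tests only ever see the result and the running time - everything inside the loop is the "invisible work":

```python
def binary_search(items, target):
    """Search a sorted list for target, returning its index or -1.

    The external contract is just "find the item fast" - the halving
    below is internal work no customer test directly specifies.
    """
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2      # take the middle item
        if items[mid] == target:     # check if it matches
            return mid
        if items[mid] < target:      # discard the half that's too low...
            low = mid + 1
        else:                        # ...or the half that's too high
            high = mid - 1
    return -1                        # ran out of items to check
```

A linear scan would pass exactly the same functional tests; only a performance test could tell them apart, and even that wouldn't dictate this particular design.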
Likewise with my implementation of a pairwise combination test case generator. Yes, all possible pairs of parameter values must be covered by the test cases. Yes, the number of test cases should be orders of magnitude fewer than a complete set of combinations of all parameter values. Yes, it should run fast. But none of that tells me very much about what work needs to be done internally in order to achieve it. It doesn't tell me, for example, that I need to "audition" randomly-generated test cases to find the ones that offer the best coverage of value pairs. It doesn't tell me I need to generate a list of all possible value pairs, ticking each pair off as it gets covered by a test case. There's a whole bunch of work, as it turns out, that I had to figure out, with very few clues from the external perspective. There are all sorts of ways of achieving the same result, and they're all internal, "invisible" details. They may manifest themselves as speed, as memory footprint, as LOC, and other non-functional features - but that's just the issue; they're non-functional, and we therefore don't have a direct path from the tests to some code that does some work.
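For the curious, the "audition" idea can be sketched like this. To be clear, this is my own minimal illustration of a greedy approach - the function name, the candidates-per-round figure, and the fallback step are all assumptions, not the actual implementation described above:

```python
import itertools
import random

def pairwise_cases(params, candidates_per_round=50, seed=42):
    """Greedily build test cases until every pair of parameter values
    is covered. params maps parameter name -> list of possible values.

    Each round "auditions" randomly-generated candidate cases and keeps
    the one that ticks off the most still-uncovered value pairs.
    """
    names = list(params)
    # generate the list of all possible value pairs to be ticked off
    uncovered = set()
    for a, b in itertools.combinations(names, 2):
        for va in params[a]:
            for vb in params[b]:
                uncovered.add(((a, va), (b, vb)))

    rng = random.Random(seed)
    cases = []
    while uncovered:
        best_case, best_newly = None, set()
        for _ in range(candidates_per_round):
            case = {n: rng.choice(params[n]) for n in names}
            newly = {p for p in uncovered
                     if all(case[n] == v for (n, v) in p)}
            if len(newly) > len(best_newly):
                best_case, best_newly = case, newly
        if not best_newly:
            # unlucky round: build a case straight from one uncovered pair
            (n1, v1), (n2, v2) = next(iter(uncovered))
            best_case = {n: rng.choice(params[n]) for n in names}
            best_case[n1], best_case[n2] = v1, v2
            best_newly = {p for p in uncovered
                          if all(best_case[n] == v for (n, v) in p)}
        cases.append(best_case)
        uncovered -= best_newly   # tick off the pairs this case covers
    return cases
```

None of that internal machinery - the pair list, the auditioning, the greedy selection - is visible in the tests, which only check coverage, case count, and speed.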
Think on that. Nighty night.
Posted on March 1, 2016