January 6, 2012


TDD & Binary Search - Not As Easy As I'd Thought

So I was noodling about test-driving an implementation of Binary Search (to see how a divide-and-conquer algorithm could be TDD'd), and something interesting came up.

My usual starting point is the simplest failing test I can think of - that is, the one that would be easiest to get passing. In this case, I started with the case where the item I'm searching for isn't in the list:
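Something like this, in Java. (A sketch rather than the real thing: the names - BinarySearch, find() - and the JUnit plumbing are my own, with find() returning the index of the key, or -1 if it isn't there.)

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class BinarySearchTests {

        @Test
        public void searchReturnsMinusOneWhenKeyNotFound() {
            BinarySearch search = new BinarySearch();
            assertEquals(-1, search.find(new int[] {1, 3, 5}, 4));
        }
    }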



Triangulation begins by very simply returning the value it expects: -1
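Which, in code, is about as dumb as it sounds (sketch again):

    public class BinarySearch {

        public int find(int[] array, int key) {
            return -1; // hard-coded: the simplest thing that passes the first test
        }
    }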



So far, so easy.

Then I looked for the next nearest failing test. I figured that would be an array of only one element, where that element is the item we're looking for:
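Something like:

    @Test
    public void searchFindsKeyInSingleElementArray() {
        BinarySearch search = new BinarySearch();
        assertEquals(0, search.find(new int[] {7}, 7));
    }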



And pass it thus:
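(Give or take the details:)

    public int find(int[] array, int key) {
        if (array.length > 0 && array[0] == key)
            return 0; // just enough logic for the one-element case
        return -1;
    }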



The next nearest failing test I could think of was an array of 2 numbers. The first number couldn't be the key value we're searching for, because our code would already pass that test, so I make it the second item in the array:
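Something along these lines:

    @Test
    public void searchFindsKeyWhenItIsSecondOfTwoElements() {
        assertEquals(1, find(new int[] {3, 7}, 7));
    }

    private int find(int[] array, int key) {
        // localises knowledge of how to interact with BinarySearch
        return new BinarySearch().find(array, key);
    }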



(With a little bit of clean-up to localise the knowledge of how to interact with our BinarySearch object, of course. I'm a good Boy Scout.)

Now here's where I hit a snag. The simplest way I could instinctively think of to pass this test was to use a loop. This follows Uncle Bob's "Transformation Priority Premise", sort of. I went from a hard-coded value, to looking at a variable and an IF, to using a loop. So, to my TDD instincts, this feels right:
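That is, a plain linear scan (my sketch of it):

    public int find(int[] array, int key) {
        for (int i = 0; i < array.length; i++) {
            if (array[i] == key)
                return i;
        }
        return -1;
    }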



It's simple. It passes the tests.

And that's just it. It will now pass all of my tests. Provided the array is sorted and contains unique integers (my pre-condition), any key value will either be correctly found in any array, or correctly reported as not found. No more failing tests.

But I ain't done yet. This obviously isn't Binary Search. If I created an array of, say, 1,000,000 integers, it might have to do 1,000,000 comparisons in the worst case. In terms of algorithmic complexity, it's O(N). I want it to be O(log2(N)).

But where TDD's concerned, I'm done. No more failing tests. Surely?

I've come up against problems like this before, and what I will usually do when I realise that an implementation is going to be too inefficient is start writing tests about the performance or efficiency we require.

As folk on That Twitter were pointing out (over and over again - curse you Twitter for having such a short memory!), we can actually make assertions about algorithmic complexity by finding a way to expose the number of comparisons our search does.

For example, how many comparisons should it need to do to discover that some key value isn't included in an arbitrary array of 10 integers? It should be no more than 4.

A simple brutish way of exposing this information could be to add a feature to BinarySearch that tells us how many comparisons it did for the last search:
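Along these lines - the getComparisonCount() name is my invention:

    public class BinarySearch {

        private int comparisons;

        public int find(int[] array, int key) {
            comparisons = 0;
            for (int i = 0; i < array.length; i++) {
                comparisons++; // count every key comparison made
                if (array[i] == key)
                    return i;
            }
            return -1;
        }

        public int getComparisonCount() {
            return comparisons;
        }
    }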



And we can test it like this:
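(Added to the same test class - and note that this assertion fails against the linear scan above, which is the point:)

    @Test
    public void searchMissesInTenElementsInNoMoreThanFourComparisons() {
        // needs: import static org.junit.Assert.assertTrue;
        BinarySearch search = new BinarySearch();
        search.find(new int[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, 99);
        assertTrue(search.getComparisonCount() <= 4);
    }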



We could, of course, do something more sophisticated using mock objects that wouldn't require this slightly unsightly addition to the BinarySearch interface. But with integer arrays, we're a bit stuffed, frankly. So ugly and brutish it is. (I actually used a test spy injected into the constructor that collected the comparison count when I first did it, but then thought "oh, sod it - it's just integers!"). Mocks present another problem. If the things we're comparing are mock objects, there's some considerable hill-climbing to do to set up a sorted collection of unique comparable mock objects. Fiddly, and I'm not sure it's worth the effort.

I can now write tests that force me to optimise my search implementation until it's properly O(log2(N)). But these new tests don't give me any real clues as to what that final algorithm could look like, just that it has to be that efficient. So, while it's a step towards test-driven algorithm design, it's not test-driven algorithm design. The tests don't tell me what code I need to write.
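For the record, the kind of implementation I was aiming for looks something like this (again, a sketch) - but nothing in the tests spells that shape out:

    public int find(int[] array, int key) {
        comparisons = 0;
        int low = 0;
        int high = array.length - 1;
        while (low <= high) {
            int mid = (low + high) >>> 1; // unsigned shift avoids overflow on huge arrays
            comparisons++; // one three-way key comparison per halving
            if (array[mid] == key)
                return mid;
            if (array[mid] < key)
                low = mid + 1;
            else
                high = mid - 1;
        }
        return -1;
    }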

Now, this is an artificial exercise. I knew I wanted to end up with Binary Search. If I know the answer I want to get, technically it's not "design", and not really TDD. But that wasn't my goal. I wanted to figure out how I could have test-driven Binary Search had I not known that I was going to end up with that specific algorithm.

Is TDD a useful tool for algorithm design? To be honest, I've come away from this little exercise with my doubts that it is, in this case.

Some algorithms wear their hearts on their sleeves, where the internal design of the algorithm is a close match for its externally-visible behaviour - e.g., an algorithm to convert integers into Roman numerals (or Fahrenheit into Celsius, etc.).

Binary Search, on the other hand, is a secretive little bugger. You can't tell it's Binary Search just by what it does, and therefore it's hard to drive its design from the outside.

Some argue - rightly, I think - that the O(log2(N)) requirement is a feature of Binary Search. I'm inclined to agree, but it doesn't point towards that algorithm, any more than "I want to pay less than X% income tax" points towards moving to Switzerland. The requirement and the implementation are largely orthogonal.

Now I took a different tack doing it again this morning, and made a little progress towards understanding how, had I been tasked with implementing an efficient search algorithm, I might have let a different sequence of tests lead me towards Binary Search, or some other divide-and-conquer algorithm. So I haven't lost all hope that TDD can help in designing new algorithms. And certainly, adding tests that fail if our algorithm isn't performing as we need it to can help constrain us, steering us away from inefficient algorithms early in the process.

It's been a fascinating process, and as I noodle with other kinds of well-known algorithms I'm fully expecting now to come up against other challenges. And while it can be frustrating, I take that as a sign that I'm learning something.

My hope is that having 250 smart and passionate programmers working on algorithms and the cross-over with craftsmanship and Clean Code will throw up all sorts of interesting problems, and potentially new solutions. To quote Simon Pegg in Star Trek: "It's exciting!"

Finally, as an aside, here's another approach I've used many times to constrain test-driven designs with non-functional requirements, which is very simple and equally brutish, but allows us to think in terms of things a user will experience (like "what's the maximum time this search can take?"):
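A sketch - the array size, the number of searches and the time limit are plucked out of the air purely to illustrate:

    @Test
    public void millionSearchesOfAMillionElementsTakeUnderASecond() {
        int[] array = new int[1000000];
        for (int i = 0; i < array.length; i++)
            array[i] = i; // sorted, unique - satisfies the pre-condition

        BinarySearch search = new BinarySearch();
        long start = System.currentTimeMillis();
        for (int run = 0; run < 1000000; run++)
            search.find(array, -1); // worst case: key not in the array

        long elapsed = System.currentTimeMillis() - start;
        assertTrue("Searches took " + elapsed + "ms", elapsed < 1000);
    }

The linear scan hasn't a hope of passing that; the binary chop sails through.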





I've also used similar techniques to write tests that fail if a certain memory footprint is exceeded, and I'm sure it could be applied to other kinds of performance requirements. IF YOU HAVE THEM - I should add. (Don't go making them up and optimising willy-nilly!)









