April 2, 2015


"It Works" Should Mean "It Works In the Real World"

In software development, what we call "testing" usually refers to the act of checking whether what we created conforms to the specification for what we were supposed to build.

To illustrate, let's project that idea onto some other kind of functional product: motor cars.

A car designer comes up with a blueprint for a car. The specification is detailed, setting out the exact body shape and proportions, right down to what kind of audio player it should have.

The car testers come in and inspect a prototype, checking off all the items in the specification. Yes, it does indeed have those proportions. Yes, it does have the MP3 player with iPod docking that the designer specified.

They do not start the car. They do not take it for a test drive. They do not stick it in a wind tunnel. If it conforms to the specification, then the car is deemed roadworthy and mass manufacturing begins.

This would, of course, be very silly. Dangerous, in fact.

Whether or not the implementation of the car conforms to the specification is only a meaningful question if we can be confident that the specified design is one that will actually work.

And yet, software teams routinely make the leap from "it satisfies the spec" to "okay, ship it" without checking that the design actually works. (And these are the mature teams. Many more teams don't even have a specification to test from. Shipping for them is a complete act of faith and/or indifference.)

Shipping software then becomes, in itself, an act of testing to see if the design works. Which is why software has a tendency to become useful only in its second or third release, if it ever does at all.

This is wrongheaded of us. If we were making cars - frankly, if we were making cakes - this would not be tolerated. When I buy a car, or a cake, I can be reasonably assured that somebody somewhere drove it or tasted it before mass manufacture went ahead.

What seduces us is this notion that software development doesn't have a "manufacturing phase". We ship unfinished products because we can change them later. Customers have come to expect it. So we use our customers as unpaid testers to figure out what it is we should have built. Indeed, not only are they unpaid testers, many of them are paying for the privilege of being guinea pigs for our untried solutions.

This should not be acceptable. When a team rolls out a solution across thousands of users in an organisation, or millions of users on the World Wide Web, those users should be able to expect that it was tested not just for conformance to a spec, but to find out whether the design actually works in the real world.

But it's vanishingly rare that teams bother to pay their customers this basic courtesy. And so we waste our users' time with software that may or may not work, making it their responsibility to find out.

My way of doing it requires that teams continuously test their software by deploying each version into an environment similar to a motor vehicle test track. If we're creating, say, a sales automation solution, we create a simulated sales office environment - e.g., a fake call centre with a couple of trained operators, real telephones and mock customers triggering business scenarios gleaned from real life - and put the software through its paces on the kinds of situations that it should be able to handle. Or, rather, that the users should be able to handle using the software.
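
By way of illustration, here's a minimal sketch of how the business scenarios driving one of those simulated call-centre sessions might be catalogued and dealt out to the mock customers. The scenarios, roles and function names here are invented for the example; in practice they'd be gleaned from observing the real sales office.

    import random

    # Business scenarios gleaned from real sales calls (invented examples here).
    # Each one describes a situation the operators and the software must handle together.
    SCENARIOS = [
        {"caller": "existing customer", "goal": "upgrade current package",
         "complication": "billing address has changed"},
        {"caller": "new prospect", "goal": "get a quote for 200 units",
         "complication": "needs delivery before month end"},
        {"caller": "existing customer", "goal": "cancel an order",
         "complication": "order has already been part-shipped"},
    ]

    def deal_scenarios(session_length, seed=None):
        """Deal a randomised run of scenarios for one simulated session,
        so each test-track run exercises a realistic mix of situations."""
        rng = random.Random(seed)
        return [rng.choice(SCENARIOS) for _ in range(session_length)]

    if __name__ == "__main__":
        for call_number, scenario in enumerate(deal_scenarios(5, seed=42), start=1):
            print(f"Call {call_number}: {scenario['caller']} wants to "
                  f"{scenario['goal']} ({scenario['complication']})")

The point is that what gets exercised is decided by situations observed in real life, not by items on the design specification.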

As with cars, this test track should be set up to mimic the environment in which the software is going to be used as faithfully as we can manage. If we were test-driving a car intended to be used in towns and cities, we might create a test track that simulates high streets, pedestrian crossings, school runs, and so on.

We can stretch these testing environments, testing our solutions to destruction. What happens if our product gets a mention on primetime TV and suddenly the phones are ringing off the hook? Is the software scalable enough to handle 500 sales enquiries a minute? We can create targeted test rigs to answer those kinds of questions, again driven by real-world scenarios, as opposed to a design specification.
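
As a rough sketch of what such a targeted rig might look like, here's a minimal load generator that fires simulated enquiries at the system at the rate in that scenario and reports latencies and failures. The handle_enquiry function is a hypothetical stand-in for the deployed solution's real entry point, and the numbers are illustrative only.

    import concurrent.futures
    import random
    import time

    TARGET_RATE_PER_MINUTE = 500   # the scenario: a primetime TV mention floods the phones
    TEST_DURATION_SECONDS = 30

    def handle_enquiry(enquiry_id):
        """Hypothetical stand-in for the deployed solution's entry point.
        A real rig would call the actual system, e.g. over HTTP."""
        time.sleep(random.uniform(0.01, 0.05))  # simulate some processing time
        return enquiry_id

    def timed_call(enquiry_id):
        started = time.time()
        try:
            handle_enquiry(enquiry_id)
            return time.time() - started, None
        except Exception as error:
            return time.time() - started, error

    def run_rig():
        pause = 60.0 / TARGET_RATE_PER_MINUTE   # pacing to hit the target arrival rate
        deadline = time.time() + TEST_DURATION_SECONDS
        with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
            futures = []
            enquiry_id = 0
            while time.time() < deadline:
                futures.append(pool.submit(timed_call, enquiry_id))
                enquiry_id += 1
                time.sleep(pause)
            results = [future.result() for future in futures]
        latencies = sorted(latency for latency, error in results if error is None)
        failures = len(results) - len(latencies)
        print("enquiries sent:  %d" % len(results))
        print("failures:        %d" % failures)
        print("95th percentile: %.3fs" % latencies[int(len(latencies) * 0.95)])

    if __name__ == "__main__":
        run_rig()

Whether the rig drives the real system over the phone lines, an API or a user interface, the important thing is that the load profile comes from a believable real-world situation rather than an arbitrary number in a spec.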

There are hard commercial imperatives for this kind of testing, too. Iterating designs by releasing software is an expensive and risky business. How many great ideas have been killed by releasing them to a wide audience too early? We obsess over the opportunity cost of shipping too late, but we tend to be blind to the "embarrassment cost" of shipping products that aren't good enough yet. If anything, we err on the side of shipping too soon, and should probably learn a bit more patience.

By the time we ship software, we ought to be at least confident that it will work 99% of the time. Too often, we ship software that doesn't work at all.

Yes, it passes all the acceptance tests. But it's time for us to move on from that to a more credible definition of "it works"; one that means "it will work in the real world". As a software user, I would be grateful for it.


