March 4, 2006


Waterfall Is BAD

If you don't like swimming upstream, perhaps a more "agile" approach is for you...

It wasn't that long ago that the books told us that, to build software, we should capture the complete requirements, do a complete design, implement it completely in code, test it completely, and then release it to the end users - who will no doubt be joyous upon receipt of said software.

We call this "waterfall development" because it resembles a cascade of work outputs being passed on from one phase of development to the next. If we take the term literally for a moment, we can get a picture of how waterfall development works, and what its implications are.

Imagine a sequence of waterfalls. At the top, a requirements analyst speaks directly to the end users about what they want from the software. The analyst writes it all down, seals the requirements specification in a bottle, and launches the bottle down the first waterfall to the designers waiting below.

The designers open the bottle and read the requirements specification, and create a design that they think will satisfy it. They then seal the requirements and the design specifications into the bottle and launch it down the next waterfall to the developers.

The developers read the requirements and the design specifications, and implement them using the chosen technologies. They then add their finished software to the bottle and release it down the next waterfall to the testers.

The testers read the requirements and design specifications and create tests to check that the software satisfies them. They perform their tests, and when the software is deemed to be "good enough", the testers seal it back into the bottle (possibly with a user manual) and release it down the final - very high - waterfall to the end users.

The waterfall approach is fundamentally flawed. If a problem is spotted downstream in the requirements, or the design, or the code, it's a considerable climb back upstream to fix it. The further downstream the problem gets before it's spotted, the bigger the climb back up.

If a tester spots a coding error, then they have to climb back up through change control, release management and re-testing to get the bug fixed. If they spot a design flaw, then they have to climb back up through an extra layer of change control - for the code and the design. If the problem is down to the original requirements, then it may well involve going back to the analysts, who will have to go back to the end users to find out what they really wanted. If the problem isn't spotted until the software's been released to the end users - well, that's a very big climb indeed.

But waterfall planning simply doesn't allow for mistakes. By scheduling testing "at the end" - just before the software is supposed to be released - managers ignore the consequences of what the testers might find. A common symptom of this is "Almost Done" Syndrome (ADS). If the project plan says that testing starts in the last 15% of the schedule, then project managers will say that they are "85% done" when testing starts. The business now has expectations of an imminent release. Like Pavlov's dogs, they start to salivate!

But the team are only "85% done" if testing doesn't throw up any significant problems. And since no testing has been done yet, nobody has any idea what problems there might be.

The key drawback of waterfall development is lack of feedback. Before testing starts, nobody has any idea if the developers have built the right thing, or if they've built it right. It's like playing golf with your eyes closed: you have no idea how many holes you've actually finished, what your score is, or when the game will be over.

Typically, testing throws up all sorts of bugs, and the choice is either to fix the ones that would otherwise make a release impossible, or to ignore them and release code that doesn't work. Teams that gallantly choose to fix the bugs face a miserable few months of fixing, testing, more fixing and more testing. The long climb upstream after each testing cycle can end up taking most of the time, and the whole project slows down to a painful crawl - a phenomenon known as "thrashing" (all effort and no progress).

If they choose not to fix many of the bugs, these will come back to haunt them in the next release and beyond. Time spent fixing these bugs - while more waterfall development introduces all-new bugs - is time that could have been spent delivering valuable new features. In most respects, waterfall development is a great way to build brand new legacy systems.
Posted on March 4, 2006