June 21, 2006

Bug-free Part II - Building The Right Thing

Last month I posted some suggestions on techniques to help you deliver (almost) bug-free code, given a clear specification. And quite rightly, some people emailed me and exclaimed that they have never, ever been lucky enough to get a clear set of requirements in the first place.

Having covered techniques for building it right, today I'm going to suggest 5 more techniques to help you build the right thing.

First, though, let's get something absolutely straight - you will NEVER, EVER pin the requirements down 100% first time. It is not a failure to have to make changes to a system because of user feedback. It is inevitable. And if you want to make valuable software, then it is highly desirable to seek that kind of feedback. Stop beating yourself up over it. And stop beating your customers up over it. And stop letting your customers beat you up over it. Software development is a learning process. Learning requires feedback. Sometimes the only feedback worth acting on is from people using the delivered software. I don't know about you, but when I test drive a car, I test drive the finished product - not a sketch or a scale model or a written specification. I want to sit in the driver's seat and feel the car as I drive it. Don't deny your users or yourself the same critical opportunity!

The first technique I'm going to recommend for building the right thing is:

Evolutionary Design - deliver in small increments, and get feedback from the users with every delivery. Adapt the design based on that feedback. Keep what works, change what doesn't, and add a bit more functionality with every small release. The critical variables in evolutionary design are:

a. Length of iterations
b. Frequency of iterations
c. Quality of feedback

Experience has taught me that it's better to have shorter and more frequent iterations than to have more time in each iteration to try and get things right. The fastest way to the optimum solution is through feedback, not through more thorough analysis, design or planning.

Testable Specifications - a good spec is a refutable spec. It must be obvious when the implementation doesn't satisfy the specification. If the spec is wrong, then we can adapt. If the code is wrong, we can fix it. If the spec is woolly, we can't be 100% sure what needs adapting or fixing. The most direct route to testable specifications is to write tests and use them as specifications. These tests should be executable if at all possible - 1. because then we can execute them quickly and consistently many, many times, and 2. because executable tests must be written in some kind of executable language, and are therefore totally unambiguous, just like our code.
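
To make that concrete, here's a minimal sketch of what a specification-as-test might look like, using JUnit and the video library example that crops up later in this post. The VideoLibrary class, its methods and the titles are all invented for illustration, and the tiny implementation is included only so the spec compiles and runs.

    import org.junit.Test;
    import static org.junit.Assert.*;
    import java.util.HashSet;
    import java.util.Set;

    // Illustrative sketch - the class and method names here are invented for this example.
    public class VideoLibrarySpec {

        // Specification: a title donated to the library can then be found by name.
        @Test
        public void donatedTitleIsAvailableForRental() {
            VideoLibrary library = new VideoLibrary();
            library.donate("The Abyss");
            assertTrue(library.isAvailable("The Abyss"));
        }

        // Specification: a title the library has never held is reported as unavailable.
        @Test
        public void unknownTitleIsNotAvailable() {
            VideoLibrary library = new VideoLibrary();
            assertFalse(library.isAvailable("Howard The Duck"));
        }

        // Minimal implementation included only so the spec compiles and runs.
        static class VideoLibrary {
            private final Set<String> titles = new HashSet<String>();
            void donate(String title) { titles.add(title); }
            boolean isAvailable(String title) { return titles.contains(title); }
        }
    }

Either the code satisfies these specifications or it doesn't - there's no room for argument, which is exactly the point.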

Scenarios - are a vital tool for exploring with our customers how the software will actually be used. People - myself included - struggle to comprehend subtle and sophisticated abstract specifications, but we have less trouble understanding concrete examples. Automated tests are one example of using scenarios to help us define what the software should (and shouldn't) do. They do have the disadvantage of requiring the customer to work with an executable language, so in practice the customer is essentially pair programming the tests with us. This is why the majority of customers on Agile projects don't write the actual test scripts themselves. There are other, friendlier ways of exploring scenarios. If you're building a video library, for example (and hey, aren't we all?), you could go a long way with a selection of videos, some boxes and various other domain-specific paraphernalia to "act out" the scenarios. However you do it, keep it concrete and get everyone involved.
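
To show what that concreteness might look like once a scenario is eventually automated, here's a sketch of one such scenario as a JUnit test. The VideoLibrary class, the titles and the rental rule are all invented for illustration - they're not from any real project - and the little implementation at the bottom exists only so the scenario runs.

    import org.junit.Test;
    import static org.junit.Assert.*;
    import java.util.HashSet;
    import java.util.Set;

    // Illustrative sketch - names and rules invented for this example.
    public class RentalScenarioTest {

        // Scenario: a member rents the last copy of a title, so the next
        // member to ask for it is told it's out on loan.
        @Test
        public void lastCopyRentedMeansTitleIsOutOnLoan() {
            VideoLibrary library = new VideoLibrary();
            library.donate("Jaws");                    // the library holds one copy of Jaws

            library.rent("Jaws");                      // the first member rents it

            assertFalse(library.isAvailable("Jaws"));  // the second member is out of luck
        }

        // Minimal implementation included only so the scenario compiles and runs.
        static class VideoLibrary {
            private final Set<String> availableTitles = new HashSet<String>();
            void donate(String title) { availableTitles.add(title); }
            void rent(String title) { availableTitles.remove(title); }
            boolean isAvailable(String title) { return availableTitles.contains(title); }
        }
    }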

UI Prototypes - are essential in joint application design. Two techniques I find especially helpful are UI storyboards (the UI equivalent of scenarios) and screenflows (the UI equivalent of state machines). Be very careful, though. This is UI design, not requirements analysis. Try not to jump straight in with drop-downs and combo(bongo) boxes before you've had a chance to explore the logic of the system. Which leads me to:
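
As a rough illustration of the screenflow idea, here's how one might be sketched as a simple state machine in code, long before any widgets get drawn. The screens, events and transitions below are made up for the video library example.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch - a screenflow as a plain state machine: screens are
    // states, user actions are events, and transitions say which screen comes next.
    public class CheckoutScreenflow {

        enum Screen { BROWSE_TITLES, TITLE_DETAILS, CONFIRM_RENTAL, RECEIPT }
        enum Event  { SELECT_TITLE, RENT, CONFIRM, BACK }

        private final Map<String, Screen> transitions = new HashMap<String, Screen>();
        private Screen current = Screen.BROWSE_TITLES;

        public CheckoutScreenflow() {
            allow(Screen.BROWSE_TITLES, Event.SELECT_TITLE, Screen.TITLE_DETAILS);
            allow(Screen.TITLE_DETAILS, Event.RENT,         Screen.CONFIRM_RENTAL);
            allow(Screen.TITLE_DETAILS, Event.BACK,         Screen.BROWSE_TITLES);
            allow(Screen.CONFIRM_RENTAL, Event.CONFIRM,     Screen.RECEIPT);
            allow(Screen.CONFIRM_RENTAL, Event.BACK,        Screen.TITLE_DETAILS);
        }

        private void allow(Screen from, Event event, Screen to) {
            transitions.put(from + "->" + event, to);
        }

        // Moves to the next screen, or complains if the event isn't allowed here -
        // exactly the kind of rule a storyboard walkthrough should flush out.
        public Screen handle(Event event) {
            Screen next = transitions.get(current + "->" + event);
            if (next == null) {
                throw new IllegalStateException(event + " not allowed on " + current);
            }
            current = next;
            return current;
        }

        public Screen currentScreen() { return current; }
    }

Walking a storyboard through a model like this tends to flush out missing transitions before anyone has argued about button placement.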

Essential Use Cases (scenarios) - are descriptions (or simulations, or test scripts) of usage scenarios that don't include any implementation details. They might state that "the sales person selects the video title from the list of available titles", rather than "the user clicks on the title in the ListBox and hits the 'OK' button". Your domain-specific dressing-up box might be invaluable here.
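
Here's a sketch of how that essential step might look as a test, with the technology kept out of it. The SalesDesk class and its methods are invented names, included with a minimal stand-in purely so the example runs.

    import org.junit.Test;
    import static org.junit.Assert.*;
    import java.util.Arrays;
    import java.util.List;

    // Illustrative sketch - names invented for this example.
    public class SelectTitleEssentialTest {

        // Essential form: "the sales person selects the video title from the
        // list of available titles" - no ListBoxes, no OK buttons.
        @Test
        public void salesPersonSelectsAnAvailableTitle() {
            SalesDesk desk = new SalesDesk(Arrays.asList("The Abyss", "Jaws"));

            desk.selectTitle("Jaws");

            assertEquals("Jaws", desk.selectedTitle());
        }

        // Minimal domain-level stand-in, with no UI technology in sight.
        static class SalesDesk {
            private final List<String> availableTitles;
            private String selected;

            SalesDesk(List<String> availableTitles) { this.availableTitles = availableTitles; }

            void selectTitle(String title) {
                if (!availableTitles.contains(title)) {
                    throw new IllegalArgumentException(title + " is not available");
                }
                selected = title;
            }

            String selectedTitle() { return selected; }
        }
    }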

I'm confident that if you follow those 5 suggestions, your software will be a closer fit to your customer's needs. They do require a close partnership with the customer, and that is unavoidable. Stop whining. It's a fact of life.

I can offer one final set of suggestions that go beyond what most texts cover:

Project Scorecard - if you've read Tom Gilb's book Competitive Engineering, you will be familiar with the idea of goal-driven software engineering. My experience is that projects with clear business goals tend to do better than those without. It's no secret that I share his outlook on this.

Model Office - armed with clear business goals, your next step might be to recreate key business scenarios in some kind of simulated environment. Seeing how your software works in a realistic business context is a fantastically powerful way of figuring out what really works (and what really doesn't). The problem with traditional software testing is that it's a bit like taking a car for a test drive on a purpose-built test track. On the track it might handle like a dream, but on a busy Saturday afternoon at the shops, when you're trying to park the thing in a space just a gnat's whisker longer than the vehicle, you might end up realising that this really isn't the car for you. Same goes for software testing. It might perform like a dream in its purpose-built test harness, but how does it handle on rough business terrain? This is where a Model Office can help. It has some fringe benefits, too. It can be used as a training tool, and also as an aid for requirements analysis. Let's face it, no amount of chitter-chatter and whiteboard shenanigans can come even close to communicating the complexities and subtleties of the real thing. Having a dedicated environment for studying the problem domain could prove very valuable. Finally, a Model Office makes a mean integration and end-to-end testing environment. Many business processes will involve multiple systems, and where better to see how they all fit together than in a replica of the live business environment?
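
As a very rough sketch of that last point, an end-to-end check run in a Model Office might push a whole business transaction through the same chain of systems the branch staff would use. The PointOfSale and StockSystem classes below are invented stand-ins, included only so the example compiles and runs - in a real Model Office they'd be the actual systems, configured as they are in the live business.

    import org.junit.Test;
    import static org.junit.Assert.*;
    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch - the systems and names are invented for this example.
    public class ModelOfficeEndToEndTest {

        // End-to-end: a rental rung up at the till should show up in the stock
        // system, just as it would across the counter of a real branch.
        @Test
        public void rentalAtTheTillUpdatesStockAcrossSystems() {
            StockSystem stock = new StockSystem();
            stock.receive("The Abyss", 2);

            PointOfSale till = new PointOfSale(stock);
            till.ringUpRental("The Abyss");

            assertEquals(1, stock.copiesOnShelf("The Abyss"));
        }

        // Stand-ins for the two systems in the replica branch.
        static class StockSystem {
            private final Map<String, Integer> shelf = new HashMap<String, Integer>();
            void receive(String title, int copies) { shelf.put(title, copies); }
            void rentOut(String title) { shelf.put(title, shelf.get(title) - 1); }
            int copiesOnShelf(String title) { return shelf.get(title); }
        }

        static class PointOfSale {
            private final StockSystem stock;
            PointOfSale(StockSystem stock) { this.stock = stock; }
            void ringUpRental(String title) { stock.rentOut(title); }
        }
    }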

Unlike the techniques for building it right, building the right thing will require customer and management buy-in. I wish I could say "just do it", but I can't. It's not going to be easy, but the rewards can be huge.