May 1, 2005


Agile Planning - Part II

In the first post, we compared the planning process to a round of golf, where each hole had a relative difficulty - EASY, MEDIUM or HARD. We assigned points to each level of difficulty - 1, 2 and 3 respectively - and used very simple calculations to tell us how many one-hour blocks (iterations) we would need to complete the game, based on how many iterations it had taken us to get as far as we had at some arbitrary point in the game.
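
To make the arithmetic concrete, here's a minimal sketch in Python - the hole ratings, point values and progress figures below are all invented for illustration:

    POINTS = {"EASY": 1, "MEDIUM": 2, "HARD": 3}

    # Suppose the course has 18 holes with these difficulty ratings:
    holes = ["EASY"] * 6 + ["MEDIUM"] * 8 + ["HARD"] * 4
    total_points = sum(POINTS[h] for h in holes)        # 6 + 16 + 12 = 34

    # After 3 one-hour iterations, suppose we've holed out for 6 points:
    points_scored, iterations_played = 6, 3
    velocity = points_scored / iterations_played        # 2 points per iteration

    iterations_left = (total_points - points_scored) / velocity
    print(f"Roughly {iterations_left:.0f} more iterations to finish")   # ~14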

The first question this system raises is "can we know how long the game will take before we start to play?", and the answer is emphatically NO! The same goes for software projects. Our estimates will improve the more iterations we go through. We need data about previous iterations to make better predictions about future iterations.

Of course, you might use data from a previous project, but that only works if all the factors are the same - same team, same technology, same kind of application, same architecture, same customer, and so on. Since no two projects are exactly alike, it stands to reason that data from previous projects is only loosely applicable to future ones. At best, we can get a ballpark figure for the length of the project. For example, an average web application built in .NET with a team of 4 developers might take roughly a year. It might take 2 years, or it might take 8 months. But it won't take 2 weeks, and it won't take 20 years. An order of magnitude is the best we can hope for in the early stages of any project.

Applying the golf analogy to software development requires us to identify the holes and their relative difficulty. In golf, the holes are the objectives we have to hit. The objectives in software projects are functionality, so the holes we have to get the ball into are features we have to deliver.

In many projects, these features are summarised as use cases - so in our examples we will be concerned with use cases and the relative difficulty in satisfying them.



Again, planning is not an exact science, so our estimates of the effort required to implement each use case will only be a ballpark EASY, MEDIUM or HARD. You must resist the temptation to estimate how long each use case will take (e.g., two weeks for three developers) because that figure will inevitably be wrong. More importantly, managers and customers hear deadlines when they hear dates or times. If you say "two weeks" they'll expect delivery in exactly two weeks' time. Don't give them the ammunition to tie you to unrealistic deadlines. A simple points system will help you avoid this.
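
For example, here's a sketch of turning those ratings into a points total - the use cases and their ratings are hypothetical:

    # Hypothetical use cases with ballpark difficulty ratings - note that
    # we total points, never hours or weeks.
    POINTS = {"EASY": 1, "MEDIUM": 2, "HARD": 3}

    use_cases = {
        "Register Customer": "EASY",
        "Browse Catalogue":  "EASY",
        "Place Order":       "MEDIUM",
        "Process Refund":    "HARD",
    }

    total = sum(POINTS[rating] for rating in use_cases.values())
    print(f"Total estimate: {total} points")   # 7 points - not "7 weeks"!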

How we plan software releases is exactly the same as how we plan our game of golf. At the start, the best we can hope for is an order of magnitude - any attempt to fix scope and set a deadline at this point is almost guaranteed to lead to disappointment and the perception of failure.

There is, however, one significant difference - in a game of golf, we have to play the holes in the prescribed order. I mentioned in the previous post that this game of golf was slightly strange, in that players win points for each hole they finish (i.e., the aim of the game is to score as many points as possible). Each hole is assigned a relative value that is independent of its difficulty and of the order in which the holes are completed.



Let's change the rules of our game: now the aim is to score as many points as possible in a fixed amount of time, and you can play the holes in any order you like. How would you play this version of the game? That's right, you'd play the highest scoring holes first.

This version of the game is much closer to your average software project. Not every feature is equally valuable to the customer, and quite often we have fixed deadlines to meet. The aim of the software development game is to deliver as much value as possible with as little effort as possible, so we might naturally tackle the features with the highest value first.



The key to effective planning is prioritisation - increasing the chances of delivering the most valuable features with the time and resources available. If, by the deadline, you've only had time to deliver 50% of the functionality, wouldn't you rather it was the 50% that made up 80% of the system's business value?
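
Here's a sketch of that effect, with invented feature values - sorting by business value first means that even if only half the features ship, most of the value does:

    # Invented feature values, illustrating the 50%-of-features /
    # 80%-of-value effect when the most valuable features go first.
    features = [
        ("Checkout",       50),
        ("Product search", 30),
        ("Wish lists",     12),
        ("Animated logo",   8),
    ]

    by_value = sorted(features, key=lambda f: f[1], reverse=True)
    delivered = by_value[: len(by_value) // 2]     # only half get done

    total_value = sum(v for _, v in features)              # 100
    delivered_value = sum(v for _, v in delivered)         # 80
    print(f"Delivered {delivered_value / total_value:.0%} of the value")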

As I mentioned in the last post, the planning process relies on objective measures of actual progress. In our version of golf, you don't get any points until the ball is in the hole. In software, a feature isn't complete until it's actually been delivered. How do we know for sure that a feature has actually been delivered? The answer is TESTING. When you say that you have completed 50% of the functionality, that means that 50% of the functionality has been delivered for testing and has passed the tests (and continues to do so as new features are delivered - creating a strong argument for regression testing with every delivery.)
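
As a minimal illustration using Python's unittest - the feature and function names here are hypothetical - "delivered" means tests like these pass, and keep passing with every subsequent delivery:

    import unittest

    # Hypothetical feature under test - names are illustrative only.
    def order_total(prices, discount=0.0):
        return sum(prices) * (1 - discount)

    class OrderTotalTests(unittest.TestCase):
        """Re-run with every delivery, so finished features stay finished."""

        def test_total_without_discount(self):
            self.assertEqual(order_total([10.0, 5.0]), 15.0)

        def test_total_with_discount(self):
            self.assertAlmostEqual(order_total([10.0, 10.0], discount=0.1), 18.0)

    if __name__ == "__main__":
        unittest.main()   # the feature only counts as delivered if this passes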

Have you ever worked on a project where the project manager claimed that 80% of the system was complete, but nobody had actually seen anything working? You can bet your bottom dollar the project won't be finished with just 20% more time and effort!

A key aspect of planning that customers and project managers have difficulty accepting is uncertainty - and yet there is uncertainty in every project, WITHOUT EXCEPTION.

We cannot know with 100% certainty exactly how much we will deliver in a fixed amount of time with fixed resources. Nobody, and I mean NOBODY, can. Odd, then, that we see so many fixed-price, fixed-scope, fixed-deadline projects. What's entirely predictable is that so many of these projects fail in some way - failing to deliver everything, to deliver on time, or to deliver within budget.

There are 4 variables in project planning:


    * Time
    * Scope
    * Cost
    * Quality


Actually, we can, and should, remove one of these variables - quality. Choosing to deliver lower quality software is the rocky road to ruin, and there's plenty of evidence to suggest that higher quality software actually costs less to deliver (up to a point). If you practise an effective policy of defect prevention - through test-driven development, for example - you may well find that the same features actually take a little less time to deliver in the long run.

So, that leaves us with 3 planning variables - time, scope and cost. Uncertainty in all projects means that we cannot know all 3 at the same time (until the project's actually over). If we fix the deadline, and fix the cost, we may have to reduce the scope if we later discover that we can't fit it all in. If we fix the scope and the cost, we have to accept that it may take longer to deliver than we planned. And so on.

Now let's eliminate one more variable - cost. If you have 3 developers working on a project, and you discover that they are going to take twice as long to deliver as planned, is the solution to hire 3 more developers? One of the earliest laws of software engineering - Brooks's Law - says that adding more developers to a late project only makes it later.



Every person you add to a project increases the lines of communication (and therefore the overhead) - in fact, the number of possible communication paths grows with the square of the team size. Developing software is an information business, and in information businesses communication is a very significant overhead. Every person you add means a jump in the amount of communication needed to keep everybody in the picture - doubly so because each new team member has more and more to catch up on the later they join. There is a measurably decreasing return for every person you add. Eventually, the productivity gains from adding more people become negligible, and in large teams adding people can actually become counterproductive, as new team members take up valuable time from existing team members - draining productivity out of the team. (Note also that inexperienced developers can have exactly the same effect on small teams.)
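
The growth is easy to quantify: between n people there are n(n-1)/2 possible lines of communication. A quick illustration:

    # Possible lines of communication in a team of n people: n(n-1)/2.
    def communication_paths(n: int) -> int:
        return n * (n - 1) // 2

    for n in (3, 6, 12, 24):
        print(f"{n:>2} people -> {communication_paths(n):>3} paths")
    #  3 people ->   3 paths
    #  6 people ->  15 paths
    # 12 people ->  66 paths
    # 24 people -> 276 paths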

So throwing money at a project whose schedule is slipping rarely helps.

That just leaves time and scope. In this day and age, being late to market (with a new piece of software or an improved business process/service) can be disastrous. Fixed deadlines will probably need to stay fixed. That leaves scope as your most realistic release planning variable. If the release is running behind schedule, the best option is to postpone the least valuable features until a later release. That's why prioritisation is the key to effective release planning. It's almost inevitable that something will get chopped, and you want to be sure it's not something vital.
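
As a sketch - backlog, values and velocity all invented - cutting scope to fit a fixed deadline might look like this:

    # Illustrative backlog: (feature, business value, estimate in points).
    backlog = [
        ("Checkout",       50, 3),
        ("Product search", 30, 2),
        ("Wish lists",     12, 2),
        ("Gift wrapping",   8, 1),
    ]

    velocity = 2                          # points per iteration, from past data
    iterations_left = 3
    budget = velocity * iterations_left   # 6 points before the deadline

    release, postponed, spent = [], [], 0
    for name, value, cost in sorted(backlog, key=lambda f: f[1], reverse=True):
        if spent + cost <= budget:
            release.append(name)
            spent += cost
        else:
            postponed.append(name)

    print("This release:", release)    # the most valuable features that fit
    print("Postponed:   ", postponed)  # the least valuable get chopped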

In the next post, we'll take a look at tasks and iteration planning and introduce a lightweight card system for the planning process...

Part I
Part III


Posted on May 1, 2005