May 2, 2005

...Learn TDD with Codemanship

Agile Planning - Part III

In the last two posts we've concerned ourselves with the longer-term picture of release planning. We saw that it is futile to attempt to plan tasks for a whole release - who will be doing what and when - in detail, because the plan is guaranteed to change many times. Such is the nature of any complex endeavour.

Instead, a release plan is made up of goals which are prioritised according to their relative business value. The aim of release planning is to deliver as much value as possible as quickly as possible. The analogy we used was a version of golf, where points are awarded for completing each hole and, in a strictly limited period of time, we have to score as many points as we can. We are allowed to play the holes in any order, so we can play the highest-scoring holes first to improve our chances of getting the highest overall score before the deadline.

Similarly, we should ask the customer to prioritise the goals of a software project. In the example, we chose use cases to represent these goals, and we tackled the use cases starting with the most valuable.

We also learned that, out of the 4 planning variables - Time, Cost, Scope and Quality - quality should never be sacrificed (especially as it's often a false economy) and adding more people to a late project usually makes it later, meaning that spending more money (since the biggest expense is people) won't usually bring a release any earlier. That leaves us just time and scope to play with, and the consequences of delaying make scope the most realistic planning variable on most projects.

In our golf game, we assumed that no points are awarded until you get the ball in the hole. Similarly, a software feature is not complete until it's provably been delivered. We objectively gauge progress through testing, and if a feature is not successfully delivered AND tested then it is 0% complete. Developers are notoriously optimistic when asked to gauge their progress, and relying on testing is the best way to avoid the "99% done" syndrome that plagues so many projects. In many cases, 99% of a working feature means 0% business value delivered, and since the aim of the game is to deliver business value, we must measure progress entirely in those terms.

Iteration Planning & Tasks

Release planning helps us play the long game more effectively, but in the short term - each week and every day - we need to co-ordinate the team to get actual work done. A golfer has to play shots to get the ball to the green and into the hole. He needs to plan each shot, even though the outcome might not take the ball where he expected it to go.

If our game is broken up into small blocks of time, a player may choose to plan the shots he will take in each block. Planning any further ahead would probably be a waste of effort because - even after just one shot - the entire plan may need to change (if, for example, his ball lands in the rough). At best, he can plan where he'd like to be by the end of the block of time. He may not get there, and the long-term plan may need to be adjusted, but without a short term plan of action he'll definitely get nowhere.

In software projects, tasks are the mechanism for taking us forward. Each iteration has a plan of action - planning in that level of detail any further ahead would be a similar waste of effort. So, while each release is made up of prioritised goals, each iteration is made up of prioritised tasks that - hopefully - take us some way towards meeting those goals.

Just as we estimated the relative difficulty of meeting project goals, we should estimate the relative effort required to complete each task. In eXtreme Programming, this is often measured in what are called Ideal Engineering Days. These are just gut-instinct estimates of how many days a task might take without any interruptions or other impediments. They are NOT a commitment to how long the task will actually take or when it will be complete. You might just as well estimate in slices of pizza or sea monkeys.

In XP, we like to use the simplest tools we can find, so I have often used nothing more than a system of coloured cards, stickers and labeled card boxes to plan and track projects.

In my simple system, green cards are used to plan the implementation of new functionality - in this example, use case scenarios. The diagram describes the kinds of simple, succinct information the card needs to capture. WARNING: these cards are not designed to capture the full requirements, they are merely a vehicle to plan the implementation of requirements. They are a reminder to go and speak to the person who knows what the full requirements are.

Every task has an associated acceptance test to let us know when it is complete. We use coloured stickers to track the progress of each task - RED meaning that the task is assigned and in progress, AMBER/YELLOW meaning that the developer believes the code is ready for acceptance testing, and GREEN meaning that the test(s) passed and the task is now complete. The person responsible for acceptance testing is the only person allowed to put a green sticker on a card. In most cases, this is the person who wrote or requested the card in the first place. For new functionality, this ought to be the customer. If it is not, then you're probably not getting requirements or feedback from the right person!
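For illustration only, the sticker progression can be read as a tiny state machine. The names here (`TaskCard`, `pick_up` and so on) and the `PermissionError` guard are my own sketch, not part of XP - the one rule it enforces is the one above: only the acceptance tester gets to apply the green sticker.

```python
class TaskCard:
    """One planning card; the sticker records its progress."""

    def __init__(self, title, acceptance_tester):
        self.title = title
        self.acceptance_tester = acceptance_tester  # ideally the customer
        self.sticker = None                         # no sticker yet

    def pick_up(self):
        self.sticker = "RED"    # assigned and in progress

    def submit_for_testing(self):
        self.sticker = "AMBER"  # developer believes the code is ready

    def accept(self, tester):
        # Only the person responsible for acceptance testing
        # is allowed to put a green sticker on a card.
        if tester != self.acceptance_tester:
            raise PermissionError("only the acceptance tester applies GREEN")
        self.sticker = "GREEN"  # tests passed; task complete
```

A developer calling `accept` on their own card raises an error - which is exactly the discipline the physical stickers rely on people to keep.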

As well as new functionality, we will also be called upon to make changes to existing functionality or to fix bugs. In our simple system, we use pink cards to capture such requests and they can be scheduled along with requests for new functionality, and estimated and tracked in the same way.

On top of new functionality and change requests, we will also need to schedule significant pieces of miscellaneous work that doesn't fall under either. We can use blue cards to capture these.

So the plan for each iteration is made up of green, pink and blue cards representing different kinds of task. To manage the iteration planning process, we will need 4 queues to place these tasks into, representing different stages of the process:

    * Pre-planned - tasks that might be scheduled in future iterations
    * Planned - tasks that are scheduled in the current iteration
    * Ready For Testing - tasks with YELLOW/AMBER stickers that need to be acceptance tested
    * Completed - tasks from the current iteration that have been given green stickers

Each task queue can be stored in a simple card box, clearly labeled so everybody knows what they're for.

The iteration planning process is very straightforward:

1. Just before the iteration planning session, take the cards from the Completed box and add up the total estimated effort on them. This tells you how much you will be able to schedule for the next iteration, using the principle of Yesterday's Weather - "What will the weather be like tomorrow?" "Probably like it was today".

For posterity, wrap these completed cards up with an elastic band, putting a white card on top with the iteration number (e.g., iteration #11) and the total estimated effort. Also, take any cards from the Planned box and put them back into the Pre-Planned box. Leave any cards awaiting testing where they are.
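Step 1 is just arithmetic over the cards in the Completed box. A sketch, assuming each card carries its estimate in ideal engineering days (or pizza slices - the unit doesn't matter, only that it's consistent):

```python
def yesterdays_weather(completed_estimates):
    """Capacity for the next iteration = total estimated effort
    actually completed in the iteration just finished."""
    return sum(completed_estimates)

# e.g. iteration #11 finished cards estimated at 3, 2 and 5 days,
# so we schedule no more than 10 days' worth of cards next time
capacity = yesterdays_weather([3, 2, 5])  # -> 10
```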

2. In the planning session - which should be attended by everybody involved - ask the customer to select cards from the Pre-Planned box that they would like to have tackled in this iteration. Make sure every card has an estimate of effort on it - if not done before the meeting, then ask the person/people volunteering to do that task to make an estimate (it doesn't matter if it's wrong - just as long as it's in the right ballpark).

3. The customer can select cards with estimates totalling no more than the total effort of the cards completed in the previous iteration. It is their job to select the most valuable tasks, just as it is their job to prioritise the goals of the release.

4. Place the selected cards into the Planned box. You will take tasks from this box in the next iteration.
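Steps 2 to 4 amount to the customer spending a fixed effort budget. A minimal sketch of that selection - the greedy in-order pass is my own simplification, assuming the customer has already ordered the cards from most to least valuable:

```python
def plan_iteration(pre_planned, capacity):
    """Move cards from Pre-Planned into Planned until the budget set
    by Yesterday's Weather is used up. Cards are (title, estimate)
    pairs, ordered by the customer from most to least valuable."""
    planned, remaining = [], []
    budget = capacity
    for title, estimate in pre_planned:
        if estimate <= budget:
            planned.append((title, estimate))
            budget -= estimate
        else:
            remaining.append((title, estimate))  # stays in Pre-Planned
    return planned, remaining
```

With a capacity of 10 and cards estimated at 4, 5, 3 and 1, the 3-day card gets skipped and the 1-day card squeezed in - the planned total never exceeds the capacity.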

5. Execute the release plan acceptance tests for all features you believe have been delivered, and update your measure of overall progress (and change the release plan if necessary). THIS IS IMPORTANT!!! If possible display the release plan publicly so people can't bury their heads in the sand about how the project's really going.

When someone picks up a task, they should place a RED sticker on it to make sure everybody knows that task has been assigned. They should immediately go and speak to the person who wrote the card and, at the very least, agree the acceptance tests. When they think they've done the work, they put a YELLOW/AMBER sticker on it and put the card in the Ready For Testing box. The person responsible for acceptance testing the card puts a GREEN sticker on it only if it passes the tests, and then places it into the Completed box to be counted at the end of the current iteration.

TIP: When we get lost in big pieces of work, we often lose focus. Try to ensure every task can be completed in one iteration. Tasks that might take longer should be broken down into subtasks.

The whole process may seem laughably simple compared to much more sophisticated systems of project planning, but it works with the uncertainty and complexity of software development - which many of the more established systems ignore to their detriment. I know from experience that the system produces much better results, measured in terms of business value delivered and customer satisfaction.

The hard part is not understanding the system, but persuading teams to use it. I'm afraid, as with many good agile practices, the proof of the pudding is in the eating. You have to try it to believe it can work. It's a leap of faith, but one that's well worth taking, in my opinion.

Part I
Part II
Posted 16 years, 4 months ago on May 2, 2005