April 30, 2005


Agile Planning - Part I

A lot of software development teams get their knickers in a twist about project planning when faced with considerable uncertainty. The natural instinct of some project managers is to try to plan everything in detail for the entire project - telling us exactly who should be doing what tasks and when, often months ahead.

The problem with these detailed, long-term plans is that we just don't know enough to plan that far ahead and get it right. On many projects, we find developers doing completely different work from that prescribed for them by the plan literally 24 hours after it was created or updated. Some project managers make the mistake of thinking that detail = control, and that mistake can easily sink a project as developers struggle to stick to the plan in the face of changing circumstances.

Arguably, long-term detailed project plans are a waste of the team's time. By the time you've created them, they're already out of date.

An agile approach to project planning embraces the reality of changing circumstances, and accepts that - while it's advisable to have long-term goals - it's only really possible to plan who will be doing what in the short term.

The approach described here is based on techniques we used on a recent project to develop Java-based delivery tracking systems. While it initially raised some eyebrows outside the team (many people have an issue with projects being run with boxes of coloured cards and stickers!), it proved very successful and eventually won over the less progressive managers - of whom there were few, I should add.

The approach we took to planning is based on practices from eXtreme Programming. In particular, we adopted a practice called The Planning Game and applied XP ideas to release planning (long term) and iteration planning (short term). We quickly came to appreciate how release plans were made up of headline goals (eg, key use cases or acceptance tests), while iterations were made up of tasks (eg, new functionality, change requests, bug fixes, documentation, build automation etc). We could not then, and still can't to this day, find a direct causal link between the execution of tasks in iterations and the achievement of goals in releases.

This is the biggest myth about planning - that plans are made up of tasks that lead you, like clockwork, to your deliverables and milestones. As we'll see, that mode of planning is like trying to plan your route to the shops by saying "first I shall stand up, then I shall put my left leg in front of my right leg until the foot hits the ground, then I shall regain my balance, then I shall...", and so on.

The best way I've found to describe release planning vs. iteration planning - Goals vs. Tasks - is to use the analogy of a game of golf. (Admittedly, it's a very odd game where you score points for completing each hole, but bear with me!)



Imagine this game of golf, where each hole has a relative difficulty - EASY, MEDIUM or HARD. We assign a weight to each level of difficulty that very roughly measures the effort required to complete a hole of that difficulty. The total difficulty of the course is then the sum of the relative difficulties of the individual holes. This is not an exact measure, but then planning - in software development as in golf - is not an exact science.
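To make that concrete, here's a minimal sketch in Java of how the course's total difficulty might be added up. The weights (EASY = 1, MEDIUM = 2, HARD = 3) and the mix of holes are numbers I've made up purely for illustration:

    // Hypothetical weights for each level of difficulty
    final int EASY = 1, MEDIUM = 2, HARD = 3;

    // An invented 18-hole course - the mix of difficulties is purely illustrative
    int[] holes = { EASY, MEDIUM, EASY, MEDIUM, EASY, MEDIUM, HARD, MEDIUM, EASY,
                    MEDIUM, EASY, HARD, MEDIUM, HARD, EASY, MEDIUM, HARD, EASY };

    // The total difficulty of the course is just the sum of the hole weights
    int totalDifficulty = 0;
    for (int i = 0; i < holes.length; i++) {
        totalDifficulty += holes[i];
    }
    // With these made-up numbers, totalDifficulty comes to 33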

Now, let's say that we play this game in blocks of one hour - taking a break between each block. Let's call these blocks "iterations". After 6 iterations - phew, that's a long game (let's assume the players are as bad at golf as I am) - we have managed to complete 11 holes.

A quick back-of-an-envelope calculation informs us that we probably need another 5 iterations to complete the course.
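Sticking with the made-up numbers from the sketch above - suppose the 11 holes completed so far happen to be worth 18 of the course's 33 difficulty points - the back-of-an-envelope sum might look something like this:

    // All of these figures are hypothetical, purely to illustrate the arithmetic
    int pointsCompleted = 18;    // difficulty points for the 11 holes played so far
    int totalPoints = 33;        // total difficulty of the whole course
    int iterationsPlayed = 6;

    // "Velocity": points completed per iteration, on average
    double velocity = (double) pointsCompleted / iterationsPlayed;
    double iterationsRemaining = (totalPoints - pointsCompleted) / velocity;

    // Round up, because you can't play half an iteration - roughly 5 to go
    System.out.println("Iterations remaining: " + (int) Math.ceil(iterationsRemaining));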

NOTE - A vital part of planning is being 100% sure that a goal has been achieved (or a task has been completed). In golf, this means getting the ball in the hole. Similarly, in software development, we must judge our progress by whether or not the ball is demonstrably in the hole. It's too easy to rely on developers' assessments of how complete features or tasks are. I've lost count of the number of times I've heard "it's 99% done" because the developer had written most of the code needed to implement some feature. This is like a golfer saying he's "99% done" when he gets the ball on the green. But putting can be a fiddly business. It may have taken only two shots to get 99% of the way to the hole, but it might take another two to actually get the ball in it. The best way to know if a software feature is complete is to test it. We'll see how acceptance tests can act as a more objective - and therefore more useful - measure of progress.
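To give a flavour of what "the ball demonstrably in the hole" might look like for a software feature, here's a minimal JUnit-style sketch. The DeliveryTracker class and its methods are hypothetical names invented for illustration - they're not taken from the actual project:

    import junit.framework.TestCase;

    // A hypothetical acceptance test: it either passes or it doesn't -
    // there's no such thing as "99% delivered"
    public class TrackParcelAcceptanceTest extends TestCase {

        public void testDeliveredParcelIsReportedAsDelivered() {
            DeliveryTracker tracker = new DeliveryTracker();   // hypothetical class under test
            tracker.registerParcel("PARCEL-001");
            tracker.recordDelivery("PARCEL-001");

            assertEquals("DELIVERED", tracker.statusOf("PARCEL-001"));
        }
    }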

In the next part, we'll look at how the golf analogy can be applied to software development, and discuss the role of testing in planning, prioritisation and the four project planning variables (and how there's really only one on most projects)...

Part II



