November 30, 2005
The Metrics Design Process

One of the key differentiators between traditional process improvement and agile process improvement is the way in which performance metrics are designed. More often than not, in traditional process-oriented SPI, metrics are selected and applied off the shelf. This is fine if:
a. The metrics are suitable for your SPI goals, and
b. The metrics actually work in the first place!
Work I've been doing in the last 9 months with a range of established metrics has drawn me to the conclusion that many of them are not up to scratch. This is, I suspect, because people started with what they could measure, and then tried to attach some kind of performance or quality goal to it later. That's why some metrics are solutions looking for problems.
In Agile SPI, while it helps to have a catalogue of proven metrics to hand - let's not reinvent the wheel if we can avoid it - we start with our goals. The design of our metrics is driven entirely by what we're trying to achieve.
The metrics design process is lengthier and more rigorous than some would choose to go, but there are very practical reasons why it needs to be this way. In self-organising teams, metrics can encourage certain kinds of behaviour, so you need to be very careful what you wish for.
Here's one way of doing it:
* First of all, agree a set of 5-6 goals for improvement. Your goals need to strike a balance between the 4 pillars of performance - time, scope, cost and quality. If all your goals are about meeting deadlines, then the risk - nay, the certainty - is that you will end up sacrificing other aspects of performance (cost, quality and scope) to achieve your goals for timeliness.
* Order your goals by importance, and start with the most important goal first.
* With those goals in mind, collect samples of project data to see what kind of things are available to be measured right now.
* For each goal, do the following:
i. Ask yourself: how will you know when you've achieved this goal? What state will your project or software be in (what is the post-condition of your goal)?
ii. Express, in terms of the things affected by your goal, what this outcome will be (e.g., increased story points per iteration) - this is your initial metric.
iii. Test your initial metric by exploring performance scenarios. The most obvious scenario to start with is the easiest way of satisfying the metric (for example, increase story point estimates). Think test-driven development: do the simplest thing you can think of to satisfy the metric. In other words, do exactly what the metric tells you to do. If the metric tells you to do the wrong thing, then there's something wrong with the design of your metric. These scenarios are often referred to as "gaming".
iv. Redesign your metric to avoid each gaming scenario. For example, if story estimates are going up, then factor this into the metric (e.g., story points per iteration / average points per story in iteration).
v. Every time you redesign the metric, ask yourself what the metric is telling you to do. If it's effectively the same as your original goal, then you can move on to the next metric. If not, iterate again until it is.
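Steps iii and iv can be sketched in code. This is a hypothetical illustration, not anything from the post itself: the function names, the story-point figures and the gaming scenario (doubling every estimate) are all assumptions, chosen to show how the redesigned metric cancels out estimate inflation.

```python
def raw_velocity(story_points):
    """Initial metric: total story points delivered in an iteration."""
    return sum(story_points)

def normalised_velocity(story_points):
    """Redesigned metric: points per iteration divided by the average
    points per story in that iteration."""
    return sum(story_points) / (sum(story_points) / len(story_points))

# Baseline iteration: five stories, estimated honestly.
honest = [3, 5, 2, 8, 3]

# Gaming scenario (step iii): same five stories, every estimate doubled.
inflated = [6, 10, 4, 16, 6]

print(raw_velocity(honest), raw_velocity(inflated))
# 21 vs 42 - the raw metric rewards inflating estimates

print(normalised_velocity(honest), normalised_velocity(inflated))
# 5.0 vs 5.0 - inflation cancels out in the redesigned metric
```

Note that the normalised metric reduces to the number of stories delivered, which is itself gameable (split every story in two), so step v's advice applies: keep iterating on the design until the metric tells you to do what the goal actually asks for.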
What you will undoubtedly find is that your understanding of the goals will become clearer as you go through this process. You will also find that metrics need to be testable, both because they need to be tested and because they must not be misunderstood.
It is quite likely that the sample data you collected won't cover every metric, and that you may need to start collecting some new kinds of data to implement your metrics. You need to consider the practicalities of this when you're designing the metrics. Make a note of any new piece of data and consult with whoever you need to in order to assess whether it's reasonable or practical to start collecting that information. If you can't get the data, you can't calculate the metric! You should also keep a running list of all the individual pieces of data you're going to need, and rough estimates of the cost or difficulty of collecting them. I use a spreadsheet with a row for each datum - against which I record the metrics it will be used in, its source, rough cost per collection, and whether or not it can be collected automatically (which is the ideal).
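That spreadsheet could just as easily be a small in-code catalogue. A minimal sketch, with illustrative field names and entries I've made up for the example:

```python
from dataclasses import dataclass

@dataclass
class Datum:
    """One row of the data catalogue: a piece of data a metric needs."""
    name: str
    used_in_metrics: list       # which metrics depend on this datum
    source: str                 # where it comes from
    cost_per_collection: str    # rough estimate, e.g. "low" or "30 min/iteration"
    automated: bool             # automatic collection is the ideal

catalogue = [
    Datum("story points delivered", ["velocity"], "iteration plan", "low", True),
    Datum("defects found in UAT", ["escaped defects"], "bug tracker", "low", True),
    Datum("time spent on rework", ["rework ratio"], "timesheets", "high", False),
]

# Flag anything that can't be collected automatically for follow-up.
manual = [d.name for d in catalogue if not d.automated]
print(manual)  # ['time spent on rework']
```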
If you intend to automate the reporting of metrics (and the collection of their underlying data), then you might want to consider formalising them using, for example, UML and the Object Constraint Language. To do this, you will need to create a model of the thing you are measuring - code, builds, tests, versions, iterations, requirements, and so on. This too will help you gain a deeper understanding of your goals, as well as of your development practices.
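To make the idea of "a model of the thing you are measuring" concrete, here is a hypothetical sketch using plain classes as a stand-in for the UML model (in OCL, the metric would be written as a query operation over the same model). The Iteration/Story structure and the figures are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Story:
    points: int
    done: bool

@dataclass
class Iteration:
    stories: list

    def stories_done(self):
        """Stories completed within this iteration."""
        return [s for s in self.stories if s.done]

    def velocity(self):
        # The metric is defined over the model itself, not over
        # loose spreadsheet columns - making it precise and automatable.
        return sum(s.points for s in self.stories_done())

it = Iteration([Story(5, True), Story(3, True), Story(8, False)])
print(it.velocity())  # 8
```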