May 22, 2006


Complex Capability

What's the difference between performance and capability? For years I didn't really distinguish between the two. If a team delivers X new features every week with Y quality at Z cost, then that's their capability, as far as I was concerned.

More recently I've been a little less sure that they're the same thing at all. If your pet lion doesn't kill you, does that make it a harmless pussy cat? It probably could kill you. It just hasn't (yet).

And if a team delivers poor performance, does that mean they're incapable of doing better? A core tenet of Agile Governance is that you only believe what you can see. It's not delivered when they say it's delivered. It's delivered when you can see it's delivered.

Can we see capability? We only have the team's actual performance to go on, after all. And in practical terms, who cares what a team could achieve if they never actually achieve it?

So, yes - in a logical sense, capability and performance aren't the same thing. Performance is the realization of capability. But it's a little like testing. You can only know what the software does for the tests you actually run. We can only know the real capability of a team when they actually achieve that level of performance - even if it's just for a brief moment.

Now that I come to think of it, maybe the relationship between capability and performance is much stronger than I thought. In Agile SPI I proposed the concept of a capability attractor. Performance of teams can vary quite dramatically and seemingly randomly over time (e.g., over a series of iterations), but if we plot key aspects of performance (productivity, quality, cost) for a large number of iterations, we notice they cluster together around a point in performance space.
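To make that a little more concrete, here's a minimal sketch in Python of what I mean, assuming we've recorded a (productivity, quality, cost) triple for each iteration (the numbers are made up). The centroid is just the simplest possible stand-in for the attractor the points cluster around:

```python
import numpy as np

# Hypothetical per-iteration measurements: (productivity, quality, cost),
# e.g. features per week, fraction of acceptance tests passing, spend in $k.
iterations = np.array([
    [4.0, 0.82, 21.0],
    [6.0, 0.90, 18.5],
    [3.5, 0.75, 24.0],
    [5.5, 0.88, 19.0],
    [5.0, 0.85, 20.0],
])

# The simplest stand-in for the attractor: the point the data clusters around.
attractor = iterations.mean(axis=0)

# How tightly performance clusters around it in performance space.
spread = np.linalg.norm(iterations - attractor, axis=1).mean()

print("attractor (productivity, quality, cost):", attractor)
print("mean distance from attractor:", spread)
```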

This complex relationship between performance data points and the capability attractor is what underpins Agile SPI. Over time, as the team gets better at delivering higher quality software in less time for less money, the attractor moves across performance space.
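One rough way to watch the attractor move, rather than just assert that it does, is to recompute it over a sliding window of recent iterations. Again a sketch with invented numbers, using the same (productivity, quality, cost) triples assumed above:

```python
import numpy as np

def moving_attractor(iterations, window=3):
    """Centroid of the last `window` iterations at each point in time -
    a crude way of watching the attractor drift across performance space."""
    points = np.asarray(iterations, dtype=float)
    return np.array([
        points[max(0, i - window + 1): i + 1].mean(axis=0)
        for i in range(len(points))
    ])

# (productivity, quality, cost) per iteration, improving over time.
history = [
    [3.0, 0.70, 25.0],
    [3.5, 0.72, 24.0],
    [4.0, 0.78, 22.0],
    [5.0, 0.84, 20.0],
    [5.5, 0.88, 19.0],
    [6.0, 0.91, 18.0],
]

for step, centre in enumerate(moving_attractor(history)):
    print(step, centre.round(2))
```

If the team really is improving, the later centroids should sit at higher productivity and quality, and lower cost, than the earlier ones.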

But is the attractor the team's actual capability? I don't think so. I think it's more subtle than that. The data points define a surface in performance space, and the team's actual performance is confined within that surface. I think the surface somehow describes the team's capability. At some point on the surface, productivity is highest, but what are quality and cost when that is the case? And at some other point on the surface, costs are lowest. And at another point, quality is best. Maybe those points are much closer together than we might expect...
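A quick-and-dirty way to poke at that question with data: pick out the recorded iteration where each dimension was at its best and see how far apart those points sit. A sketch, assuming the same (productivity, quality, cost) triples and glossing over the very real question of how you'd normalise measures on different scales:

```python
import numpy as np

# (productivity, quality, cost) per iteration - higher, higher, lower is better.
points = np.array([
    [4.0, 0.82, 21.0],
    [6.0, 0.90, 18.5],
    [3.5, 0.75, 24.0],
    [5.5, 0.88, 19.0],
    [5.0, 0.85, 20.0],
])

best_productivity = points[points[:, 0].argmax()]
best_quality = points[points[:, 1].argmax()]
lowest_cost = points[points[:, 2].argmin()]

print("best productivity:", best_productivity)
print("best quality:     ", best_quality)
print("lowest cost:      ", lowest_cost)

# How close together are the 'best' corners of the capability surface?
print("distance between productivity and quality optima:",
      np.linalg.norm(best_productivity - best_quality))
```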

If you've bought a pair of speakers recently, you might have glanced at the documentation describing their performance characteristics. You might be interested in frequency vs. volume vs. signal-to-noise ratio. It's the same kind of thing. If we take measurements in small increments, sweeping across a wide range of audio frequencies at different volume settings, we can build a 3D surface within which the equipment's performance is confined. It describes the speakers' capability.
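If that measurement process were automated, the sweep might look something like the sketch below. measure_snr here is entirely made up, a stand-in for whatever test rig actually takes the reading, but the shape of the loop is the point: small increments across two inputs, one recorded output, and the resulting grid is the surface.

```python
import numpy as np

def measure_snr(frequency_hz, volume_db):
    """Stand-in for a real measurement rig: a made-up model that returns
    a signal-to-noise ratio for one tone at one volume setting."""
    rolloff = np.exp(-((np.log10(frequency_hz) - 3.0) ** 2))  # best near 1 kHz
    headroom = 1.0 - (volume_db / 120.0) ** 4                 # worse when driven hard
    return 90.0 * rolloff * headroom

frequencies = np.logspace(np.log10(20), np.log10(20000), 50)  # 20 Hz - 20 kHz
volumes = np.linspace(40, 110, 15)                            # volume settings in dB

# Sweep in small increments: frequency x volume -> SNR.
surface = np.array([[measure_snr(f, v) for v in volumes] for f in frequencies])

print(surface.shape)          # (50, 15) grid describing the speakers' capability
print(round(surface.max(), 1))
```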

Development team performance is much more complex and unpredictable than a pair of speakers, of course. If I play the same tone at the same volume, the signal-to-noise ratio will stay pretty much the same each time. (Okay, over time the equipment will wear down and the surface in performance space will deform, which is a good way of predicting equipment failure in simple machinery.) But even in very complex systems, patterns will emerge, and an overall complex capability will take shape out of the data noise.
Posted on May 22, 2006