December 17, 2007

Project Quality Charter - Setting Out To Deliver Quality

I'm one of those people who reckon that you really do get what you measure. Which is why we have to be very careful what we wish for.

Take something as innocuous as unit test coverage. Teams that set out to achieve measurably high coverage - 90% or more - tend to do so with surprising ease, whereas teams that don't check their coverage tend to achieve far more modest levels - often below 35%. Even teams who claim to be doing "test-driven development", but who don't routinely measure their coverage, tend to fall far short of 90%.
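Closing that loop can be as simple as a little gate in the build. Here's a minimal sketch, assuming a Cobertura-style XML report whose root element carries a line-rate attribute; the report path and the 90% target are illustrative, and whatever coverage tool your team uses would do just as well:

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;

    public class CoverageGate {
        public static void main(String[] args) throws Exception {
            // Illustrative location - point this at wherever your
            // coverage tool writes its XML summary.
            File report = new File(args.length > 0 ? args[0] : "coverage.xml");

            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            // Don't go to the network for the report's DTD.
            factory.setFeature(
                "http://apache.org/xml/features/nonvalidating/load-external-dtd",
                false);
            Document doc = factory.newDocumentBuilder().parse(report);

            // Cobertura records overall line coverage as a 0..1 ratio
            // in the line-rate attribute of the root element.
            double lineRate = Double.parseDouble(
                    doc.getDocumentElement().getAttribute("line-rate"));
            double target = 0.90; // the agreed target

            System.out.printf("Line coverage: %.1f%% (target %.0f%%)%n",
                    lineRate * 100, target * 100);
            if (lineRate < target) {
                System.err.println("Coverage is below the agreed target.");
                System.exit(1); // non-zero exit fails the build
            }
        }
    }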

Teams that routinely measure method length tend not to write long methods. Teams that routinely monitor coupling and cohesion tend to deliver loosely coupled and highly cohesive modules and packages. And so on.
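Method length is a particularly cheap one to monitor. The sketch below is a deliberately crude heuristic - a regex and some brace counting rather than a real parser, and the 10-line target is just an example - but even something this rough keeps the feedback coming:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.regex.Pattern;

    public class MethodLengthCheck {
        // Heuristic: a line that looks like a method signature opening a block.
        private static final Pattern METHOD_START = Pattern.compile(
                "^\\s*(public|protected|private|static).*\\(.*\\).*\\{\\s*$");
        private static final int TARGET = 10; // illustrative target

        public static void main(String[] args) throws IOException {
            Path srcRoot = Paths.get(args.length > 0 ? args[0] : "src");
            try (var paths = Files.walk(srcRoot)) {
                paths.filter(p -> p.toString().endsWith(".java"))
                     .forEach(MethodLengthCheck::check);
            }
        }

        private static void check(Path file) {
            try {
                List<String> lines = Files.readAllLines(file);
                for (int i = 0; i < lines.size(); i++) {
                    if (!METHOD_START.matcher(lines.get(i)).matches()) continue;
                    // Count lines until the opening brace is balanced again.
                    int depth = 0, length = 0;
                    boolean opened = false;
                    for (int j = i; j < lines.size(); j++) {
                        for (char c : lines.get(j).toCharArray()) {
                            if (c == '{') { depth++; opened = true; }
                            if (c == '}') depth--;
                        }
                        length++;
                        if (opened && depth == 0) break;
                    }
                    if (length > TARGET)
                        System.out.printf("%s:%d method is %d lines (target <= %d)%n",
                                file, i + 1, length, TARGET);
                }
            } catch (IOException e) {
                System.err.println("Could not read " + file);
            }
        }
    }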

I've observed two factors that seem to make a difference in this respect:

1. Setting out to achieve some testable goal
2. Routinely monitoring progress towards that goal

It's not magic or rocket science (or voodoo, even). You just take aim, give it your best shot, and check the outcome regularly, then steer your efforts accordingly.

Of course, it can all go horrifically wrong, especially when indicators are badly designed, or when the feedback is poorly applied (e.g., with a big stick). But I've found over the years that teams who set clear goals and monitor output regularly (and humanely) tend to achieve much more than teams who just "feel their way through it" (like the teams who thought they were doing TDD, but weren't).

When you're starting a new development, you have an opportunity to set the bar high and to aim for a level of quality that the vast majority of teams never even come close to. Not because you're better than all those teams. But simply because you set out to do it. It seems intent is half the battle.

For this to happen, everyone on the project has to sign up to the goals, agree to the measures, and play their part in achieving them. That includes developers, architects, analysts, managers, testers, UI experts, infrastructure and operations experts, and - yes - the business stakeholders. A tall order.

What I recommend these days is the creation of a binding project quality charter. This is a document that sets clear quality goals, with metrics and targets for each. It also lays out the strategies that could be applied to help the team achieve those goals (e.g., test-driven development, pair programming, UI storyboards, automated user acceptance testing, visual modeling, root cause analysis, performance engineering techniques, usability testing, guided inspections, etc.), and the strategies that should kick in if quality falls below agreed acceptable levels.

The charter could be broken up into a handful of quality perspectives (e.g., maintainability, usability, scalability, correctness, robustness, and so on), and dashboards could be created for each perspective with input from the experts and key stakeholders in each one (e.g., the UI designer and the customer for the usability dashboard).
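To make that concrete, here's a sketch of a charter rendered as a simple per-perspective dashboard, with the agreed "kick in" strategy attached to each target. Every perspective, metric, threshold, and strategy below is made up for illustration - the real charter is whatever your stakeholders actually sign up to:

    import java.util.List;
    import java.util.Map;

    public class CharterDashboard {
        // A target pairs an acceptable level with the strategy that
        // kicks in when the measurement falls below it.
        record Target(String metric, double acceptable, String kickInStrategy) {}

        public static void main(String[] args) {
            Map<String, List<Target>> charter = Map.of(
                "Maintainability", List.of(
                    new Target("Unit test coverage (%)", 90,
                            "Halt new features; write the missing tests"),
                    new Target("Methods within 10-line target (%)", 85,
                            "Refactoring sessions + pair programming")),
                "Usability", List.of(
                    new Target("Usability test tasks completed (%)", 80,
                            "Revisit UI storyboards with the customer")));

            // Measurements would come from the team's tools and test
            // sessions; hard-coded here just to show the feedback loop.
            Map<String, Double> measured = Map.of(
                "Unit test coverage (%)", 92.0,
                "Methods within 10-line target (%)", 71.0,
                "Usability test tasks completed (%)", 84.0);

            charter.forEach((perspective, targets) -> {
                System.out.println("== " + perspective + " ==");
                for (Target t : targets) {
                    double value = measured.get(t.metric());
                    boolean ok = value >= t.acceptable();
                    System.out.printf("  %-40s %6.1f  %s%n", t.metric(), value,
                            ok ? "OK" : "BELOW TARGET -> " + t.kickInStrategy());
                }
            });
        }
    }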

Once the draft charter has been agreed by the project stakeholders, you print off one copy and get them all to sign it. This is now a binding agreement. To change the charter, you will need to get agreement from all stakeholders. This turns the usual picture on its head: on most software projects, teams have to justify pursuing quality when other priorities get in the way (e.g., schedule). With a binding charter, teams must justify choosing not to pursue quality.

How do you like them onions?

