April 16, 2005
Software Process Improvement - Refactoring The Way We Work

I had a very productive meeting yesterday with a potential client, where we discussed in some depth the process of improving software development processes.
It struck me that there's little or no difference between the process of improving business processes and software process improvement - though it is odd how we tend not to see it that way. Perhaps we feel there's something special about software development, but I'm not sure why.
The software process improvement (SPI) process is quite straightforward, and certainly no different from business process improvement.
The aim of SPI is to make teams "better" at delivering software, just as the aim of CRM is to make organisations "better" at delivering customer service. The first step in SPI is to agree on a clear definition of what we mean by "better". Do we mean faster? Do we mean cheaper? Do we mean fewer bugs after release? Do we mean a better architecture? Before we do anything, we need to identify what it is we actually want to be "better" at.
I'm a great fan of test-driven development. I find it incredibly helpful to start out with a testable definition of what's required, and some way of knowing when I've achieved it. Testable goals are the key to doing only what's required to meet those goals, and are a good thing for all sorts of other reasons. I've seen dozens of SPI projects lose their way because nobody clearly understood what they were trying to improve, and nobody knew for sure when improvements had been achieved or what the value or extent of those improvements was. Many of those efforts degenerated into fairly pointless exercises in adopting processes for their own sake. Just as TDD helps you to avoid writing code that never needed to be written, testable goals for SPI help you to avoid making "improvements" that never needed to be made.
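To make the TDD analogy concrete, here's a minimal (and entirely invented) illustration of the test-first style I mean - the tests for a leap year rule are written before the code, and the code is only "done" when they pass:

```python
import unittest

# Invented example: in TDD, the tests below come first and fail,
# then we write just enough code to make them pass.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTests(unittest.TestCase):
    def test_ordinary_leap_year(self):
        self.assertTrue(leap_year(1996))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_fourth_century_is_leap(self):
        self.assertTrue(leap_year(2000))
```

The point isn't the leap year logic; it's that "done" has a testable definition before any code exists - exactly what SPI goals need.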
So, when we're thinking about what we mean by "better", it's a very good idea to design the tests we'll use to let us know when we've actually achieved real improvements. Process improvement tests come in the form of performance measures, so designing your performance measures is a critical step in SPI. Be warned, though, that performance measurement is something of a black art. It's very easy to measure the wrong things, and even easier to design measures that encourage developers to do the wrong things - especially if they are rewarded for meeting the targets you set. There's a saying in business strategy - "you get what you measure" - which I like to paraphrase as "be careful what you wish for". You may, for example, measure time to delivery at the expense of software quality. Indeed, many development teams are rewarded for meeting tight deadlines. Few are rewarded for meeting realistic deadlines with higher-quality software.

When designing your measures and setting targets, you must balance the needs of the short, medium and long term, and take into consideration a wide range of perspectives - your own, the users', IT management's, the project sponsors', the vendors', the public's at large, and so on. Designing good measures is the key to making SPI work. It's also the part of SPI that most often gets overlooked or botched.
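By way of illustration only - the measure names, baselines and targets below are all invented - here's one way to express "better" as a small set of balanced, testable measures, so that hitting a delivery target at the expense of quality doesn't count as an improvement:

```python
# Invented measures: balance speed (lead time) against quality
# (defects, coverage), so no single measure can be gamed in isolation.
measures = {
    "lead_time_days":        {"target": 30, "lower_is_better": True},
    "defects_per_release":   {"target": 15, "lower_is_better": True},
    "test_coverage_percent": {"target": 70, "lower_is_better": False},
}

def improved(name, observed):
    m = measures[name]
    return observed <= m["target"] if m["lower_is_better"] else observed >= m["target"]

def goals_met(observations):
    # "better" only counts if every balanced measure hits its target
    return all(improved(name, value) for name, value in observations.items())
```

Faster delivery alone doesn't pass: a team that cuts lead time to 25 days but ships 20 defects per release still fails `goals_met`, which is the whole point of balancing the measures.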
Once we know what we mean by "better", and have some idea of the value of each kind of improvement, we can begin to prioritise our efforts to change things. Just as we should ask the customer on a software project to prioritise their requirements so that we can tackle the most important ones first, we must also ask the customer of SPI (the organisation that stands to gain from improvements) to prioritise the goals for the SPI project.
Again, we should apply iterative and incremental development to SPI just as we should to software development itself. A great many SPI projects fail because they attempt to do everything in one "big bang" implementation. The temptation to adopt a whole bunch of changes in one bite is understandable. Many well-known software development methods give the impression of being set menus: you're either doing all of it, or you're not doing it properly. So organisations try in vain to change their analysis and design processes, their development practices, testing, configuration management, project management and all the rest in one go - usually in an attempt to adopt an off-the-shelf software development methodology like RUP, DSDM or eXtreme Programming.
But SPI is not about adopting NEW processes. It's about improving existing processes. It's the process equivalent of refactoring, where we improve the design of existing code through a series of controlled, reversible "tweaks" that eventually add up to big improvements. Many of the principles of refactoring apply equally well to SPI.
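To make the parallel concrete, here's a tiny (invented) refactoring in the code sense: one small, behaviour-preserving tweak - extracting a discount rule into a named function - whose safety we can verify by checking that old and new versions agree. That's the discipline I'm suggesting we apply to process change:

```python
# Before: the discount rule is buried inline in the total calculation.
def invoice_total_before(items):
    total = 0
    for price, qty in items:
        total += price * qty
    if total > 100:
        total = total * 0.9  # bulk discount
    return total

# After: the same rule extracted into a named function - identical
# behaviour, clearer design. The refactoring is safe precisely because
# we can check the two versions agree on every input we care about.
def apply_bulk_discount(total):
    return total * 0.9 if total > 100 else total

def invoice_total_after(items):
    return apply_bulk_discount(sum(price * qty for price, qty in items))
```

Each tweak is small enough to revert if it turns out to be a mistake - which is exactly the property we want from process changes too.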
Certainly, to refactor a piece of code, you need to understand that code before you can begin. Process improvement likewise benefits greatly from a thorough understanding of the existing processes. You must begin at the beginning - as Duncan Pierce often tells me - and in improving anything, the beginning is what exists now. Another very common mistake in SPI is to ignore existing processes and treat software development as a greenfield exercise in that organisation, implementing brand new processes instead of improving existing ones. If no process currently exists, then that's fine. But even the most immature software teams have processes. If the team exists, then the processes exist, and we need to work to understand what they are.
So, what kind of SPI process do we have so far?
1. Establish testable goals in the form of measures and targets (what do we mean by "better"?)
2. Prioritise the goals and tackle them in small, manageable increments
3. Start by understanding the existing processes and follow a refactoring-like process to improve them through small, reversible changes
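As a sketch - every name in it is hypothetical - the three steps above might look something like this as a loop: try one small, reversible change at a time, keep it only if the measure says things genuinely got better, and revert it otherwise:

```python
# Hypothetical sketch of the SPI loop: candidate_changes is a
# prioritised list of small process tweaks, measure scores a process
# state, and we keep only changes that demonstrably improve the score.
def improve(process, candidate_changes, measure):
    score = measure(process)
    for change in candidate_changes:   # assumed already prioritised
        trial = change(process)        # apply one small, reversible change
        trial_score = measure(trial)
        if trial_score > score:        # a genuine, measured improvement?
            process, score = trial, trial_score
        # otherwise the change is simply discarded (reverted)
    return process
```

For example, with a single invented measure like test coverage, a change that raises coverage is kept and one that lowers it is quietly dropped - no "big bang", just an accumulation of tweaks that each earned their place.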
This model of SPI is much more like a finger buffet than a set menu. Rather than setting out to implement RUP or adopt Test-driven Development (though I highly recommend that you gradually move to TDD if you haven't already), we're setting out to deliver lasting long-term change through a series of small incremental improvements, probably borrowing from a wide range of methods and practices to find the "tweaks" that will help us achieve what we want.