June 5, 2005

...Learn TDD with Codemanship

Agile Software Process Improvement III

In previous posts, we looked at how the agile values of simplicity, communication, feedback and courage might be applicable to software process improvement projects. We also examined how some practices from the most widely used agile method, eXtreme Programming, could be applied to the SPI process.

In this post, we will continue by looking at two more key XP practices: pair programming and continuous integration.

Pair Programming

In eXtreme Programming, the "eXtreme" comes from the way in which best practices are taken to their ultimate logical conclusion - sort of like turning the volume up to 11. If it's a good idea to test early and often, it must be a great idea to start with testing and test continuously. If it's a good idea for teams to communicate often, then it must be a great idea to communicate all the time.

Pair programming is essentially peer reviews turned up to 11. If it's a good idea to review each other's code regularly, it must be a great idea to review it continuously. The basic premise is that two people work on the code at the same time - one of them is the "driver" - the person who types at the keyboard and actually writes the code. The other is the "navigator", who watches everything the driver does and steers with the benefit of greater detachment. Many developers, and many more managers, have a big problem with pair programming. To the uninitiated, it looks like two people doing the work of one. Surely this is at odds with the XP mantra of doing the minimum work possible for the maximum benefit?

Well, statistically, no it isn't. For such a cold and logical discipline as computer programming, it's notoriously subjective. We often perceive only the work done to build a feature in the first place, and somehow our minds block out all the extra rework we had to do to get the code working - with a decent design that won't cause us problems later on - and past UAT. Harking back to the golf analogy I used to describe agile planning, we tend to think only of the effort required to get the ball on the green, and we are usually in denial about how long it took us to get the ball from the green into the hole.

Rework is a major component of all the effort put into building software. On average we spend 40% of our time fixing code we've already written. I don't count changes to requirements in that, of course. Rework is effort that - to some extent, but never entirely - could have been avoided. Pair programming helps reduce the overall effort - and therefore the overall cost - of building useful software by helping to avoid a percentage of the mistakes people tend to make on their own. It also helps because, if it leads to a better design, the code will be easier to change in the future - whether that's rework, new functionality or change requests. The research done into software engineering and the famous Capability Maturity Model, while not being entirely applicable to agile development, does at least illustrate clearly the business case for continual peer review, and therefore for pair programming. (Provided you do it effectively, that is. Two inexperienced developers paired together tend not to be much better than one inexperienced developer - like the blind leading the blind.)
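A back-of-envelope calculation makes the economics concrete. The 40% rework figure is from above; the other two numbers - that a pair spends roughly 15% more person-hours writing a feature than one person would, and that pairing halves the rework - are purely illustrative assumptions, not research results:

```python
def total_person_hours(build_hours, rework_fraction):
    # Total effort = initial build effort plus rework proportional to it.
    return build_hours * (1 + rework_fraction)

# Solo: 100 person-hours to build, 40% rework (figure from the post).
solo = total_person_hours(100, 0.40)

# Pair: assume ~15% more person-hours to build the same feature,
# but rework halved to 20% (both numbers are illustrative assumptions).
pair = total_person_hours(115, 0.20)

print(round(solo))  # 140
print(round(pair))  # 138
```

Under those assumptions the pair is already marginally cheaper, before counting the longer-term savings of a better design that is easier to change.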

So, there's a definite benefit to pair programming. But how might the same principle apply to SPI? Well, much more directly than you might think. Remember, it's not the SPI consultant who has to improve - though, heaven knows we could all learn more! It's the target organisation. To make real differences to the way a team works, there's less value in just writing down what you want them to do. Pairing is a much more effective way of transferring knowledge and skills, as well as getting a good feel for how the team is coping with the new information. As I occasionally put it, in order to remould a development team, you need to get your hands on the clay. Working directly with developers, analysts, testers, UI designers, project managers and everybody else, doing real work day-in and day-out, is the quickest way to change the way they work.

So, if your goal is to get developers doing test-driven development, then pairing with them while they write code is the best way to make sure they really get the hang of it. Similarly, OOA/D takes practice, and pairing with analysts and designers as they model at a whiteboard is by far the quickest means I've found for turning novices into competent and confident modelers.

Indeed, the best way I've found for implementing new or improved software development processes and best practices is to get people doing it themselves as soon as possible, with constant guidance in those critical early stages. Very often, when I'm wearing my SPI consultant hat, it feels very much like being the navigator in a pair programming situation. (Only with a bit of training thrown in to get them started.)

On a number of occasions, I've gone beyond pairing to implement a development process. It can often take months to get newly-formed teams to a point where they're genuinely productive, but a kickstart at the beginning can shorten that period of "forming, storming and norming" dramatically. I get the whole team into a room with a laptop and a projector, and we all go through the whole development cycle - including requirements analysis, planning and UAT - several times, with people taking it in turns at the keyboard. With each new test scenario, not only do we flesh out the architecture of the system - saving us the pain of trying to adopt a new architecture in one go, which rarely works - but we also flesh out the development process, starting with the basic capability to deliver working code and introducing new practices and processes (e.g., OOA/D, continuous integration, refactoring, etc.) when the need for them eventually arises. Not only does that help the team learn at a reasonable pace, but it also helps enormously in building their understanding of why these things are useful and when they should apply them.

I find that 3-4 weeks of team programming can help the basic approach to bed in, and after that we all have a common frame of reference, based on common experiences, with which to communicate as the project, and the SPI process, continues. Again, the economics of team programming speak for themselves: productivity in 3-4 weeks, or in 3-4 months. But, as with pair programming, many managers see an entire team doing the work of one person, and as a result these highly valuable exercises are rare on real projects.

In any case, the XP practice of pair programming definitely has a major role to play in agile SPI.

Continuous Integration

In a previous post, I proposed that - just as the rule in XP is to always have a working system - the rule in agile SPI is to always have a working process: that is, a process that delivers working software. In XP, we turn the volume up on early and frequent integration of code changes into the shared repository from which a working system can be built, so that it happens right from the start and all the time.

The main reason for doing this is again economic: the fewer changes you need to integrate, the easier it is. The more often you integrate, the fewer changes there will be in each integration cycle. Good XP developers will integrate their code several times a day, making sure it works with any other changes that have been integrated in the meantime before they commit theirs.

This avoids the very common phenomenon of integration hell, that long period near the end of many projects where a team of developers - or even several teams of developers - try to commit weeks' or months' worth of work in one go, only to discover the 1001 reasons why their code won't work with everybody else's. Commits on that scale are practically unworkable. There are just too many issues, and when the system inevitably breaks, there could be 1001 changes that might have broken it. You may have to investigate and eliminate them all before the system is working again. In the meantime, who else can commit their changes? If Johnny commits all his 1001 changes, and then has to spend 5 days fixing integration problems, what happens when Jilly tries to commit her 1001 changes? Who broke the code? Johnny or Jilly? Now, if Johnny checks in 2-3 changes, and Jilly waits until she knows that Johnny hasn't broken the system, then updates her local copy of the code with Johnny's changes and runs the tests to make sure her changes won't break the system before she commits them - I'm sure you can see how much smoother integration might go! If Johnny or Jilly breaks the system, then there are only 2 or 3 possible causes they would need to investigate. If Jilly waits for Johnny's changes to be verified through building and testing the system, then there's far less danger of her updating her local copy and breaking her version of the system - meaning less productive time wasted waiting for Johnny to fix it.
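The discipline Jilly follows can be boiled down to a short ritual: update, test, and only then commit. Here is a minimal sketch of that ritual in Python; the version-control operations are stubbed out, since in practice they would shell out to whatever tool the team uses:

```python
# Sketch of the update-test-commit discipline described above.
# The repository operations are stubs standing in for real VCS commands.

def update_from_repository(workspace):
    """Merge everybody else's verified changes into the local copy (stub)."""
    workspace["up_to_date"] = True

def run_tests(workspace):
    """Build the merged local copy and run the full test suite (stub)."""
    return workspace.get("tests_pass", True)

def commit(workspace):
    """Push this small batch of changes to the shared repository (stub)."""
    workspace["committed"] = True

def integrate(workspace):
    """Commit only a small batch of changes, and only when the merged
    local copy passes all the tests."""
    update_from_repository(workspace)
    if run_tests(workspace):
        commit(workspace)
        return True
    return False  # only a handful of candidate changes to investigate

print(integrate({"tests_pass": True}))   # True: safe to commit
print(integrate({"tests_pass": False}))  # False: fix before committing
```

The payoff of keeping each batch small is in that final branch: when `run_tests` fails, there are only 2 or 3 changes that could be the cause, not 1001.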

It makes commercial sense to integrate as few changes as possible, and therefore to integrate as often as you comfortably can. It also makes very good commercial sense to build and test the code every time somebody commits their changes to make sure you still have a working system.

How might this apply to SPI? Well, let's draw some analogies to help us compare. Firstly, in CI we strive to always have a working system - that is, a system that builds and passes all of its tests. In agile SPI, we strive to always have a working process - that is, a process that delivers working software and that meets all of its performance targets (even if they are few and very relaxed).

In CI, developers work on local copies of the code and integrate them into a shared repository that represents THE system under development. In Agile SPI, consultants work with teams and with individuals. You may take a developer away with you to work on their TDD, but the ultimate goal is to build a consistent approach that works for the whole team, and across multiple teams in larger organisations. CI is about being disciplined over how changes are propagated. If I change the way one developer works, I should wish to see that change propagated to the rest of the target organisation. As with XP, it makes little practical sense to try to propagate 1001 changes to the development process in one go. Ideally, changes should be propagated in small, manageable chunks. So, the first application of continuous integration to SPI is in ensuring that changes to the process are introduced in the smallest units possible.

This rules out attempting to adopt the Unified Process or even XP in one go. Instead, we should attempt to adopt some aspect of the process and wait until that is bedded in and working effectively before we move on to the next aspect we wish to adopt. For example, we might choose to adopt simple use cases. We might not want to worry about such elaborations as pre- and post-conditions, or relationships between use cases, at this point; they can come later. Adopting the smallest changes implies rolling out these changes more often. So how often should we expect to "integrate" in agile SPI, and how small should the changes be?

In continuous integration, the rule is that you must always have a working system - validated by builds and tests - and in agile SPI, the rule is that you must always have a working process - validated by performance measures. How often can we realistically test our process and know if we've broken it? If we're measuring productivity - and I would suggest that productivity would be the first measure we should put in place - then does it make sense to use daily measures of productivity to test if our process is still working? Well, probably not. Productivity is a lot like the weather. Today it's cloudy. Tomorrow it might be sunny with scattered showers. But this month is likely to be - on average - warm and sunny. If I were looking for trends in the temperature, I wouldn't look at two consecutive days, because it might lead me to conclude that the temperature is rising by 10 degrees every day!
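A short moving average illustrates why daily figures mislead. The numbers below are invented for illustration - say, story points delivered per day - and the day-to-day swings are wild, but the smoothed series reveals a gentle upward trend:

```python
# Smoothing invented daily productivity figures to expose the trend.
daily = [5, 9, 3, 8, 4, 10, 6, 11, 7, 12]  # e.g., story points per day

def moving_average(values, window):
    """Average each run of `window` consecutive values."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

trend = moving_average(daily, window=5)
print([round(t, 1) for t in trend])  # [5.8, 6.8, 6.2, 7.8, 7.6, 9.2]
```

Comparing any two consecutive days (5 then 9, or 8 then 4) suggests productivity is swinging violently; the five-day averages show it steadily climbing.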

The frequency of measurement in agile SPI should help to establish trends, rather than confuse us with aberrations. Measuring productivity for each iteration makes more sense, though that also probably doesn't give us enough data to establish a trend. But we want to make the smallest changes we can, and as often as we comfortably can - so iterations of 1-3 weeks will probably be the best compromise (or else it could take years to adopt something as simple as XP!!!)

There is one arena where it might be safer to adopt changes to the process all at once, and that's with team programming. There, changes are propagated almost immediately because everybody is present. Since a working process will involve several practices, team programming is the best way to establish that initial capability.

After team programming, improvements can be rolled out at a measured pace - 1 or 2 with every iteration - and propagation of those changes can be done through ongoing pairing and coaching. The results of each improvement might not look spectacular measured in the short term, but you'd be amazed what organisations can achieve over the months and years that follow. In many respects, agile SPI is like boiling frogs... You turn up the heat just a tiny, imperceptible amount, and wait a while for the frog to acclimatise. It doesn't notice that things are getting warmer, but eventually the water starts to boil and the frog at no point attempts to jump out.

In the same way, development teams can go from cold and ineffective to boiling hot without feeling any real pain, by introducing small and regular improvements and measuring their effects.

In Conclusion

Of course, there are more practices that make up eXtreme Programming - and it would be fun to explore them all and see how they might apply to software process improvement. But I hope from these short posts you get the gist of Agile SPI, and how agile principles and practices can be applied to other kinds of projects.

If you're interested in finding out more about Agile SPI, and how it might benefit your organisation or consulting practice, then email me at jason@parlezuml.com

Part I
Part II


Posted 15 years, 8 months ago on June 5, 2005