June 27, 2011
Continuous Delivery is a Platform for Excellence, Not Excellence Itself

In case anyone was wondering, I tend to apply a sort of "hierarchy of needs" in software development. When I meet teams, I usually find out where they are on this ladder and ask them to climb up to the next rung.
It goes a little like this:
0. Are you using a version control system for your code? No? Okay, things really are bad. Sort this out first. You'd be surprised how much relies on that later. Without the ability to go back to previous versions of your code, everything you do will carry a much higher risk. This is your seatbelt.
1. Do you produce working software on a regular basis (e.g., weekly) that you can get customer feedback on? No? Okay, start here. Do small releases and short iterations.
2. How closely do you collaborate with the customer and the end users? If the answer is "infrequently", "not at all", or "oh, we pay a BA to do that", then I urge them to get regular direct collaboration with the customer - this means programmers talking to customers. Anything else is a fudge.
3. Do you agree acceptance tests with the customer so you know if you've delivered what they wanted? No? Okay, then you should start doing this. "Customer collaboration" can be massively more effective when we make things explicit. Teams need a testable definition of "done": it makes things much more focused and predictable and can save an enormous amount of time. Writing working code is a great way to figure out what the customer really needed, but it's a very expensive way to find out what they wanted.
4. Do you automate your tests? No? Well, the effect of test automation can be profound. I've watched teams go round and round in circles trying to stabilise their code for a release, wasting hundreds of thousands of pounds. The problem with manual testing (or little or no testing at all) is that you get very long feedback cycles between a programmer making a mistake and that mistake being discovered. It becomes very easy to break the code without finding out until weeks or even months later, and the cost of fixing those problems escalates dramatically the later they're discovered. Start automating your acceptance tests at the very least. The extra effort will more than pay for itself. I've never seen an instance when it didn't.
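To make the idea concrete, here's a minimal sketch of what an automated test looks like, using a hypothetical discount rule as the code under test (the rule, the function name and the amounts are all invented for illustration):

```python
import unittest

# Hypothetical code under test: an order-total rule agreed with the
# customer ("orders over 100 pounds get a 10% discount").
def order_total(subtotal):
    return round(subtotal * 0.9, 2) if subtotal > 100 else subtotal

class DiscountAcceptanceTests(unittest.TestCase):
    # Each test pins down one example the customer signed off on, so a
    # regression shows up minutes after the mistake, not weeks later.
    def test_orders_over_100_get_ten_percent_off(self):
        self.assertEqual(order_total(200.0), 180.0)

    def test_orders_at_or_below_100_pay_full_price(self):
        self.assertEqual(order_total(100.0), 100.0)
```

Run the suite (e.g. with `python -m unittest`) after every change, and a broken rule announces itself immediately instead of surfacing in a release crunch.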
5. Do your programmers integrate their code frequently, and is there any kind of automated process for building and deploying the software? No? Software development has a sort of metabolism. Automated builds and continuous integration are like high fibre diets. You'd be surprised how many symptoms of dysfunctional software development miraculously vanish when programmers start checking in every hour or three. It will also be the foundation for that Holy Grail of software development, which we'll come to later.
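The heart of an automated build is just a script that runs each step and goes red on the first failure. Here's a minimal sketch (the step list and project layout are hypothetical); a CI server would run something like this on every check-in:

```python
import subprocess
import sys

def run_pipeline(steps) -> bool:
    """Run each build step in order; stop and report on the first failure."""
    for step in steps:
        if subprocess.run(step).returncode != 0:
            print("BUILD FAILED at:", " ".join(step))
            return False
    print("BUILD OK")
    return True

# Example (hypothetical layout): compile everything, then run the tests.
#   run_pipeline([
#       [sys.executable, "-m", "compileall", "-q", "src"],
#       [sys.executable, "-m", "pytest", "tests"],
#   ])
```

The point isn't the tooling; it's that nobody has to remember to run anything, so a broken check-in is caught within the hour rather than at release time.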
6. Do your programmers write the tests first, and do they only write code to pass failing tests? No? Okay, this is where it gets more serious. Adopting Test-driven Design is a non-trivial undertaking, but the benefits are becoming well-understood. Teams that do TDD tend to produce much more reliable code. They tend to deliver more predictably, and, in many cases, a bit sooner and with less hassle. They also often produce code that's a bit simpler and cleaner. Most importantly, the feedback we get from developer tests (unit tests) is often the most useful of all. When an acceptance test fails, we have to debug an entire call stack to figure out what went wrong and pinpoint the bug. Well-written unit tests can significantly narrow it down. We also get feedback far sooner from small unit tests than we do from big end-to-end tests, because we write far less code to pass each test. Getting this feedback sooner has a big effect on our ability to safely change our code, and is a cornerstone in sustaining the pace of development long enough for us to learn valuable lessons from it.
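The rhythm is easy to show in miniature. In this sketch (the leap-year example is mine, not from any particular team), the test is written first and fails, and then we write just enough code to make it pass:

```python
# Step 1 - write the failing test first (red). Running this before
# is_leap exists raises a NameError: that's the point.
def test_leap_year():
    assert is_leap(2000) is True   # divisible by 400
    assert is_leap(1900) is False  # divisible by 100 but not 400
    assert is_leap(2012) is True   # divisible by 4
    assert is_leap(2011) is False  # not divisible by 4

# Step 2 - write the simplest code that passes (green).
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

test_leap_year()  # would have failed before step 2
```

Each cycle is minutes long, so a mistake never gets more than a few minutes' head start.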
Now, before we continue, notice that I called it "Test-driven Design", and not "Test-driven Development". Test-driven Development is defined as "Test-driven Design + Refactoring", which brings us neatly on to...
7. Do you refactor your code to keep it clean? The thing about Agile that too many teams overlook is that being responsive to change is in no small way dependent on our ability to change the code. As code grows and evolves, there's a tendency for what we call "code smells" to creep in. A "code smell" is a design flaw in the code that indicates the onset of entropy - growing disorder in the code. Examples of code smells include things like long and complex methods, big classes or classes that do too many things, classes that depend too much on other classes, and so on. All these things have a tendency to make the code harder to change. By aggressively eliminating code smells, we can keep our code simple and malleable enough to allow us to keep on delivering those valuable changes.
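A tiny sketch of what eliminating a smell looks like in practice (hypothetical invoice example): a "long method" doing three jobs, refactored with Extract Method into three named pieces with identical behaviour.

```python
# Before: one method sums the lines, applies a discount rule, and
# formats the result - three reasons to change tangled together.
def invoice_before(lines):
    total = 0.0
    for qty, price in lines:
        total += qty * price
    if total > 100:
        total *= 0.9
    return f"TOTAL: {total:.2f}"

# After: each job extracted and named; the behaviour is unchanged,
# but each piece can now change (and be tested) independently.
def subtotal(lines):
    return sum(qty * price for qty, price in lines)

def apply_discount(total):
    return total * 0.9 if total > 100 else total

def invoice_after(lines):
    return f"TOTAL: {apply_discount(subtotal(lines)):.2f}"
```

The automated tests from earlier rungs are what make this safe: refactor, run the tests, and you know the behaviour hasn't drifted.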
8. Do you collect hard data to help objectively measure how well you're doing 1-7? If you come to me and ask me to help you diet (though God knows why you would), the first thing I'm going to do is recommend you buy a set of bathroom scales and a tape measure. Too many teams rely on highly subjective personal feelings and instincts when assessing how well they do stuff. Conversely, some teams - a much smaller number - rely too heavily on metrics and reject their own experience and judgement when the numbers disagree with their perceptions. Strike a balance here: don't rely entirely on voodoo, but don't treat statistics as gospel either. Use the data to inform your judgement. At best, it will help you ask the right questions, which is a good start towards 9.
9. Do you look at how you're doing - in particular at the quality of the end product - and ask yourselves "how could we do this better?" And do you actually follow up on those ideas for improving? Yes, yes, I know. Most Agile coaches would probably introduce retrospectives at stage 0 in their hierarchy of needs. I find, though, that until we have climbed a few rungs up that ladder, the discussion is moot. Teams may well need them for clearing the air and for personal validation and ego-massaging and having a good old moan, but I've seen far too many teams abuse retrospectives by slagging everything off left, right and centre and then doing absolutely nothing about it afterwards. I find retrospectives far more productive when they're introduced to teams who are actually not doing too badly, thanks very much. And I always temper 9 with 8 - too many retrospectives are guided by healing crystals and necromancy, and not enough benefit from the revealing light of empiricism. Joe may well think that Jim's code is crap, but a dig around with NDepend may reveal a different picture. You'd be amazed how many truly awful programmers genuinely believe it's everybody else's code that sucks.
10. Can your customer deploy the latest working version of the software at the click of a mouse whenever they choose to, and as often as they choose to? You see, when the code is always working, and when what's in source control is never more than maybe an hour or two away from what's on the programmers' desktops, and when making changes to the code is relatively straightforward, and when rolling back to previous versions - any previous version - is a safe and simple process, then deployment becomes a business decision. They're not waiting for you to debug it enough for it to be usable. They're not waiting for small changes that should have taken hours but for some reason seem to take weeks or months. They can ask for feature X in the morning, and if the team says X is ready at 5pm then they can be sure that it is indeed ready and, if they choose to, they can release feature X to the end users straight away. This is the Holy Grail - continuous, sustained delivery. Short cycle times with little or no latency. The ability to learn your way to the most valuable solutions, one lesson at a time. The ability to keep on learning and keep on evolving the solution indefinitely. To get to this rung on my ladder, you cannot skip 1-9. There's little point in even trying continuous delivery if you're not 99.99% confident that the software works and that it will be easy to change, or that it can be deployed and rolled back if necessary at the touch of a button.
Now at this point you're probably wondering what happened to user experience, scalability, security, or what about safety-critical systems, or what about blah blah blah etc etc. I do not deny that these things can be very important. But I've learned from experience that these are things that come after 1-10 in my hierarchy of needs for programmers. That's not to say they can't be more important to customers and end users - indeed, user experience is often no. 1 on their list. But to achieve a great user experience, software that works and that can evolve is essential, since it's user feedback that will help us find the optimal user experience.
To put it another way, on my list, 10 is actually still at the bottom of the ladder. Continuous delivery and ongoing optimisation of our working practices is a platform for true excellence, not excellence itself. 10 is where your journey starts. Everything before that is just packing and booking your flights.