May 4, 2016

Learn TDD with Codemanship

Scaling Kochō for the Enterprise



Unless you've been living under a rock, you'll no doubt have heard about Kochō. It's the new management technique that's been setting the tech world on fire.

Many books, blogs and Hip Hop ballets have been written about the details of Kochō, so it suffices for me to quickly summarise it here for anyone who needs their memory refreshing.

Kochō is an advanced technique for scheduling and tracking work that utilises hedgehogs and a complex network of PVC tubes. Task cards are attached to the hedgehogs - by the obvious means - and then they're released into the network to search for cheese or whatever it is that hedgehogs eat. The tubes have random holes cut out above people's desks. When a hedgehog falls through one of these holes, the person at that desk removes the task card and begins work. Progress is measured by asking the hedgehogs.

So far, we've mainly seen Kochō used successfully on small teams. But the big question now is: does it scale?

There are many practical barriers to scaling Kochō to the whole enterprise, including:

* Availability of hedgehogs
* Structural weaknesses of large PVC tube networks
* Infiltration of Kochō networks by badgers
* Shortage of Certified Kochō Tubemasters

In this blog post, I will outline how you can overcome these hurdles and scale Kochō to any size of organisation.

Availability of hedgehogs


As Kochō has become more and more popular, teams have been hit by chronic hedgehog shortages. This is why smart organisations are now setting up their own hedgehog farms. Thankfully, it doesn't take long to produce a fully-grown, Kochō-ready hedgehog. In fact, it can be done in just one hour. We know it's true, because the organiser of the Year Of Hedgehogs said so on TV.

Structural weaknesses of large PVC tube networks


Steel-reinforce them.

Infiltration of Kochō networks by badgers


Regrettably, some managers have trouble telling a badger from a hedgehog. Well, one mammal is pretty much the same as another, right? Weeding out the badgers on small Kochō teams is straightforward. But as team sizes grow, it becomes harder and harder to pay enough attention to each individual "hedgehog" to easily spot imposters.

Worry not, though. If you make the holes bigger, badgers can work just as well.

Carry on. As you were.

Shortage of Certified Kochō Tubemasters



Many teams employ CKTs to keep an eye on things and ensure the badgers - sorry, "hedgehogs" - are following the process correctly. But, if hedgehogs are in short supply these days, CKTs are as rare as the proverbial hen's teeth.

Only a few teams dare try Kochō without a CKT. And they have learned that you don't actually need one... not really.

In fact, Kochō can work perfectly well without CKTs, tube networks, hedgehogs, or Kochō. Indeed, we're discovering that not doing Kochō scales best of all.





September 17, 2014

Learn TDD with Codemanship

The 4 C's of Continuous Delivery

Continuous Delivery has become a fashionable idea in software development, and it's not hard to see why.

When the software we write is always in a fit state to be released or deployed, we give our customers a level of control that is very attractive.

The decision when to deploy becomes entirely a business decision; they can do it as often as they like. They can deploy as soon as a new feature or a change to an existing feature is ready, instead of having to wait weeks or even months for a Big Bang release. They can deploy one change at a time, seeing what effect that one change has and easily rolling it back if it's not successful without losing 1,001 other changes in the same release.

Small, frequent releases can have a profound effect on a business' ability to learn what works and what doesn't from real end users using the software in the real world. It's for this reason that many, including myself, see Continuous Delivery as a primary goal of software development teams - something we should all be striving for.

Regrettably, though, many software organisations don't appreciate the implications of Continuous Delivery on the technical discipline teams need to apply. It's not simply a matter of decreeing from above "from now on, we shall deliver continuously". I've watched many attempts to make an overnight transition fall flat on their faces. Continuous Delivery is something teams need to work up to, over months and years, and keep working at even after they've achieved it. You can always be better at Continuous Delivery, and for the majority of teams, it would pay dividends to improve their technical discipline.

So let's enumerate these disciplines; what are the 4 C's of Continuous Delivery?

1. Continuous Testing

Before we can release our software, we need confidence that it works. If our aim is to make the software available for release at a moment's notice, then we need to be continuously reassuring ourselves - through testing - that it still works after we've made even a small change. The secret sauce here is being able to test and re-test the software to a sufficiently high level of assurance quickly and cheaply, and for that we know of only one technical practice that seems to work: automating our tests. It's for this reason that a practice like Test-driven Development, which (done well) leaves behind a suite of fast-running automated tests, is a cornerstone of the advice I give for transitioning to Continuous Delivery.
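To make that concrete, here's the kind of test TDD leaves behind - a minimal sketch, with an invented pricing example (none of this is from a real project):

# A sketch of the fast-running, automated checks TDD leaves behind.
# The discounted_price function and its tests are invented purely
# for illustration.
import unittest

def discounted_price(price, discount_percent):
    """Production code, grown test-first."""
    if not 0 <= discount_percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return price * (100 - discount_percent) / 100

class DiscountedPriceTests(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertAlmostEqual(discounted_price(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertAlmostEqual(discounted_price(200.0, 0), 200.0)

    def test_rejects_impossible_discounts(self):
        with self.assertRaises(ValueError):
            discounted_price(200.0, 110)

if __name__ == "__main__":
    unittest.main()

A suite of hundreds of tests like these runs in seconds, which is what makes re-testing after every small change economically viable.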

2. Continuous Integration

As well as helping us to flag up problems in integrating our changes into a wider system, CI is fundamental to Continuous Delivery: if it's not in source control, it's going to be difficult to include it in a release. CI is the metabolism of software development teams. Again, automation is our friend here. Teams that have to manually trigger compilation of the code, or manually test the built software, will not be able to integrate very often. (Or, more likely, they will integrate, but the code in their VCS will, as likely as not, be broken at any given time.)
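For illustration, here's a minimal sketch of that kind of automation - the sort of build gate a CI server would run on every check-in (assuming a Python project with unittest suites; the directory names are invented):

# ci_gate.py - a minimal stand-in for an automated build-and-test step.
# Directory names are illustrative; the point is that every check-in
# triggers the same repeatable, self-checking build.
import subprocess
import sys

def run(step_name, command):
    # Run one build step; stop the whole build on the first failure.
    print("==", step_name, "==")
    if subprocess.run(command).returncode != 0:
        print("BUILD FAILED at step:", step_name)
        sys.exit(1)

if __name__ == "__main__":
    run("unit tests", [sys.executable, "-m", "unittest", "discover", "-s", "tests"])
    run("acceptance tests", [sys.executable, "-m", "unittest", "discover", "-s", "acceptance"])
    print("BUILD OK - safe to integrate")

The details matter less than the principle: the build is triggered automatically, runs the same way every time, and shouts loudly the moment an integration breaks something.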

3. Continuous Inspection

With the best will in the world, if our code is hard to change, then responding to change will be hard. Code tends to deteriorate over time: it gets more complicated, it fills up with duplication, it becomes like spaghetti, and it gets harder and harder to understand. We need to be constantly vigilant for the kinds of code smells that impede our progress. Pair Programming can help in this respect, but we find it insufficient on its own to achieve the quality of code that's often needed. We need help in guarding against code smells and the ravages of entropy. Here, too, automation can help. More advanced teams use tools that analyse the code and detect and report code smells, either as part of a build or in the pre-check-in process. The most rigorous teams will fail a build when a code smell is detected. Experience teaches us that when we let code quality problems through the gate, they tend never to get addressed. Implicit in Continuous Inspection is Continuous Refactoring. Refactoring is a skill that many - let's be honest, most - developers are still lacking in, sadly.
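Here's a toy illustration of that idea - a little inspection gate that hunts for one classic smell, the over-long method, and fails the build if it finds any (the threshold and source directory are arbitrary; needs Python 3.8+ for ast end line numbers):

# smell_gate.py - a toy automated code inspection: find over-long
# functions and fail the build if any exist.
import ast
import sys
from pathlib import Path

MAX_FUNCTION_LENGTH = 15  # lines - an arbitrary threshold for illustration

def long_functions(path):
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LENGTH:
                yield node.name, length

if __name__ == "__main__":
    smells = [(str(p), name, length)
              for p in Path("src").rglob("*.py")
              for name, length in long_functions(p)]
    for filename, name, length in smells:
        print("CODE SMELL: %s: %s() is %d lines long" % (filename, name, length))
    sys.exit(1 if smells else 0)  # any smell fails the build

As noted above, the rigorous version fails the build; a gentler on-ramp is to start by just reporting the smells and watching the trend.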

Continuous Inspection doesn't only apply to the code; smart teams are very frequently showing the software to customers and getting feedback, for example. You may think that the software's ready to be released because it passes some automated tests. But if the customer hasn't actually seen it yet, there's a significant risk that we end up releasing something that we've fundamentally misunderstood. Only the customer can tell us when we're really "done". This is a kind of inspection. Essentially, any quality of the software that we care about needs to be continuously inspected.

4. Continuous Improvement

No matter how good we are at the first 3 C's, there's almost always value in being better. Developers will ask me "How will we know if we're over-doing TDD, or refactoring?", for example. The answer's simple: hell will have frozen over. I've never seen code that was too good, never seen tests that gave too much assurance. In theory, of course, there is a danger of investing more time and effort into these things than the pay-offs warrant, but I've never seen it in all my years as a professional developer. Sure, I've seen developers do these things badly. And I've seen teams waste a lot of time because of that. But that's not the same thing as over-doing it. In those cases, Continuous Improvement - continually working on getting better - helped.

DevOps in particular is one area where teams tend to be weak. Automating builds, setting up CI servers, configuring machines and dealing with issues like networking and security is low down on the average programmer's list of must-have skills. We even have a derogatory term for it: "shaving yaks". And yet, DevOps is pretty fundamental to Continuous Delivery. The smart teams work on getting better at that stuff. Some get so good at it they can offer it to other businesses as a service. This, folks, is essentially what cloud hosting is - outsourced DevOps.

Sadly, software organisations who make room for improvement are in a small minority. Many will argue "We don't have the time to work on improving". I would argue that's why they don't have the time.







September 12, 2014

Learn TDD with Codemanship

Exothermic vs. Endothermic Change - Why Coaches Should Be A Match, Not The Sun

An analogy I sometimes use to explain my approach to promoting positive change in software development organisations is the difference between exothermic and endothermic reactions.

If you think back to high school chemistry, an exothermic reaction is one that generates heat from within. The combustion of fuels like petrol and wood is an example of an exothermic reaction. Sitting around a campfire on a cold night is one way in which we benefit from exothermic reactions.

Conversely, an endothermic reaction is one that draws energy (heat) in from its surroundings to make it go. For example, photosynthesis in plants is an endothermic reaction powered by the sun.

The key here to understanding the Codemanship way is to appreciate that if the sun stops shining then photosynthesis stops, too. Whereas a campfire may keep on burning until all the useful fuel - in the form of combustible carbohydrates - is used up. In the case of the campfire, the reaction is triggered by an outside force - e.g., a match - but once the fire's going it sustains itself from within. An endothermic reaction needs continued outside stimulation - a constant input of external energy - or it stops.

Projecting that idea - albeit spuriously - on to fostering change in dev teams, as an outside force I would rather be a match lighting a campfire than the sun driving chemical reactions in a plant. (The two are, of course, related. The energy we're burning on the campfire came from the sun via photosynthesis, but that's the problem with analogies.)

My approach is to turn up, inject a big dose of external energy into the system, and try to get a fire started. For that, we need the system to have its own fuel. This is the people, and their energy and enthusiasm for doing things better.

The conditions need to be right, or once I stop injecting energy, the reaction will stop, too. Many development teams are the equivalent of damp wood, their enthusiasm having been dampened by years of hard grind and demotivation. They need some preparation before we can light the fire.

The calories to burn, though, are always there. It's not easy becoming even a mediocre software developer. There would have been a time when anyone who does it for a living was enthused and motivated to work through the pain and learn how to make programs work. That enthusiasm is rarely lost forever, even though it may often be buried deep beneath the battle-scarred surface.

So my focus tends to be on recapturing that joy of programming so that the latent energy and enthusiasm can be more easily ignited, starting a self-sustaining process of change that mostly comes from within the teams and doesn't have to be continually driven from outside.

This is why, putting specific practices and technical knowledge aside, Codemanship is chiefly about addressing developer culture. Workshops on TDD, refactoring, OO design and all manner of goodly Extreme stuff are really just hooks on which to hang that hat: an excuse to have conversations about what being a software developer means to you, about the developer culture in your organisation, and to do more than a little rabble rousing. That you leave having learned something about red-green-refactor is arguably less important than if you leave thinking "I'm as mad as hell, and I'm not going to take it any more!"

This is all because I believe that writing software can be, and for some people, is the best job in the world. Well, maybe not the best - but it's certainly got the potential to be a great way to make a living. I wake up every day thankful that I get to do this. It pains me to see developers who've had that beaten out of them by the school of hard knocks.

Real long-term change seems always to come from within. It has to be a self-sustaining process, driven almost unconsciously by teams who love what they do. Ultimately, it's something teams must do for themselves. All I can do is light the match.




January 20, 2014

Learn TDD with Codemanship

What is Customer-Driven Development, Anyway?

After that last blog post, a couple of people have asked "is 'Customer-driven Development' a thing?"

Well, it almost was. Let's take a trip down memory lane...

In the Dark Ages, before we went all Agile and Lean and Wotsit-driven, an idea kicked around that sought to reimagine software development as a system that only did things in response to external stimuli from customers.

Every process, every practice, every activity was to be triggered by some customer action (or some other event determined by the customer, like a milestone, a deadline, a business rule or trigger and so on.)

Yes; you can probably tell that this has the architect's fingerprints all over it. Naive as we were, we genuinely believed - well, some of us did, at any rate - that the way teams developed software could be modeled and shaped using the same techniques we used to model and shape the software itself. Development resources and artefacts were objects. Processes were state machines that acted on those objects, or were enacted by those objects. Development teams were systems, with use cases. Software development use cases were triggered by actors - agents outside of the system.

Crazy as it was, out of all that a handful of us nurtured the germ of the idea of Customer-driven Development (though we didn't call it that at the time).

Think of your development team as a server. It sits and listens, ticking over, waiting for a customer to make a request. It might come in the form of a feature request, or a bug report, or a request for the software to be released, that sort of thing.
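If you'll indulge a toy sketch of that model (the request types and handlers are invented, obviously):

# The development team as a server: idle until a customer request
# arrives, then dispatching it to the appropriate process.
import queue

handlers = {
    "feature_request": lambda detail: print("scheduling feature:", detail),
    "bug_report":      lambda detail: print("triaging bug:", detail),
    "release_request": lambda detail: print("deploying release:", detail),
}

inbox = queue.Queue()
inbox.put(("feature_request", "export timesheets to CSV"))
inbox.put(("release_request", "v2.1"))

while not inbox.empty():
    kind, detail = inbox.get()  # the team sits and listens, ticking over...
    handlers[kind](detail)      # ...and acts only when the customer asks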

Today, a better analogy might be that the development team is the engine of a motorboat, and the customer is the captain who steers the boat. When the customer says "go this way", the engine propels the boat that way. Okay, that's not a great analogy, either.

You get the idea, though; the customer is in the driving seat. The customer drives development.

For true Customer-driven Development, though, the customer needs suitable controls to drive with. If they lack the necessary hands-on control, there's a danger they just become passengers. I see that a lot. The customer is being driven by the developers, or by the project manager, or a product owner, or a business analyst. And they don't necessarily end up where they wanted to go, especially when these "taxi drivers" have ideas of their own about the direction the software should be taking.

To reduce this risk, customers need to be at the wheel. And the controls need to be designed with the customer in mind. Hence my previous post.




August 23, 2013

Learn TDD with Codemanship

Software Ideas & Their Tendency Towards Ubiquity

One marked way in which ideas in software development sometimes behave like religious movements is their tendency towards ubiquity.

It all starts innocently enough, with some bright spark saying "hey, you know what's worked for me?" Usually, it's a point solution to a specific problem, like writing the test before we write the code, or scheduling work so that developers pick up the next most important task from the queue as soon as they've completed the last one.

Simple ideas to solve particular problems.

Religions, too, can start out with a simple idea like "hey, let's all treat each other the way we'd wish others to treat us" and so on.

But before we know it, the thought leaders of these religious movements are asking questions like "What does God have to say about wearing Nike on a Thursday?" and "What sort of toppings are acceptable on a low-sodium bagel?" and their religion starts to burrow its way into every aspect of our daily lives, dictating everything from beard length to when we can and cannot eat certain kinds of dairy products. Unsurprisingly, the original underlying idea can get lost, and we end up with religious zealots who will gleefully nail you to a tree for wearing the wrong kind of underpants during a month with a 'Y' in the name, but who seem to have no hang-ups about nailing people to trees in the first place.

So, too, do some ideas burrow their way into other aspects of the way we write software. There seems to be a built-in predilection for some ideas - usually methodological, but I've seen it happen with tools, too - to grow to become all-encompassing, and for the original underlying idea to get forgotten.

And I can understand the motivations behind this; particularly for consultants. A hammer gets a much larger potential market if we claim it can tell the time, too. We can dramatically extend the scope of our influence by making what we're experts in apply to just about everything.






September 16, 2012

Learn TDD with Codemanship

Are Woolly Definitions Of "Success" At The Heart Of Software Development's Thrall To Untested Ideas?

In the ongoing debate about what works and what doesn't in software development, we need to be especially careful to define what we mean by "it worked".

In my Back To Basics paper, I made the point that teams need to have a clear, shared and testable understanding of what is to be achieved.

Without this, we're a ship on a course to who-knows-where, and I've observed all manner of ills stemming from this.

Firstly, when we don't know where we're supposed to be headed, steering becomes a fruitless exercise.

It also becomes nigh-on impossible to gauge progress in any meaningful way. It's like trying to score an archery contest with an invisible target.

To add to our worries, teams that lack clear goals have a tendency to eat themselves from the inside. We programmers will happily invent our own goals and pursue our own agendas in the absence of a clear vision of what we're all meant to be aiming for.

This can lead to excess internal conflict as team members vie to stamp their own vision on a product or project. Hence an HR system can turn into a project to implement an "Enterprise Service Bus" or to "adopt Agile".

Since nobody can articulate what the real goals are, any goal becomes more justifiable, and success becomes much easier to claim. I've met a lot of teams who rated their product or project as a "big success", much to the bemusement of the end users, project sponsors and other stakeholders, who can take a very different view.

There are times when we can display all the misplaced confidence and self-delusion of an X Factor contestant who genuinely seems to have no idea that they're singing out of tune and dancing like their Dad at a wedding.

Much of the wisdom we find on software development comes from people, and teams, who are basing their insights on a self-endowed sense of success. "We did X and we succeeded, therefore it is good to X" sort of thing.

Here's my beef with that: first off, it's bad science.

It's bad science for three reasons: one, a single data point doesn't make a trend; two, perhaps you've incorrectly attributed your success to X rather than to one of the myriad other factors in software development; and three, can we really be sure that you genuinely succeeded?

If I claim that rubbing frogspawn into your eyes cures blindness, we can test that by rubbing frogspawn into the eyes of blind people and then measuring the acuity of their eyesight afterwards.

If, on the other hand, I claim that rubbing frogspawn into your eyes is "a good thing to do", and that after I rubbed frogspawn into my eyes, I got "better" - well, how can we test that? What is "better"? Maybe I rubbed frogspawn into my eyes and my vocabulary improved.

My sense is that a worrying proportion of what we read and hear about "things that are good to do" in software development is based on little more than "how good (or how right) it felt" to do them. Who knows; maybe rubbing fresh frogspawn in your eyes feels great. But that has little bearing on its efficacy as a treatment.

Without clear goals, it's not easy to objectively determine if what we're doing is working, and this - I suspect - is the underlying reason why so much of what we know, or we think we know, about software development is so darned subjective.

Teams who've claimed to me that they're "winning" (perhaps because of all the tiger blood) have turned out to be so wide of the mark that, in reality, the exact opposite was true. These days, when I hear proclamations of great success, it's usually a precursor to the whole project getting canned.

The irony is that those few teams who knew exactly what they were aiming for often measure themselves more brutally against their goals, and are more pessimistic, despite in real terms being more "winning" than teams who were prematurely doing their victory lap.

This, I suspect, has also contributed to the dominance of subjective ideas in software development. Ideas backed up by objective successes seem to be expressed more tentatively and with more caveats than ideas backed up by little more than feelgood and tiger blood, which are expressed more confidently and in more absolute terms.

The naked ape in all of us seems to respond more favourably to people who present their ideas with confidence and a greater sense of authority. In reality, many of these ideas have never really been put to the test.

Once an idea's gained traction, there can be benefits within the software development community to being its originator or a perceived expert in it. Quickly, vested interests build up and the prospect of having their ideas thoroughly tested and potentially debunked becomes very unattractive. The more popular the idea, and the deeper the vested interests, the more resistance to testing it. We do not question whether a burning bush really could talk when we're in the middle of a fundraising drive for the church roof...

It's saddening to see, then, that in the typical lifecycle of an idea, publicising it often precedes testing it. More fools us, though. We probably need to be much more skeptical and demanding of hard evidence to back these ideas up.

Will that happen? I'd like to think it could, but the pessimist in me wonders if we'll always opt for the shiny-and-new and leave our skeptical hats at home when sexy new ideas - with sexy new acronyms - come along.

But a good start would be to make the edges of our definition of "success" crisper and less forgiving.





April 19, 2012

Learn TDD with Codemanship

Enough With The Movements! Movements Are Stupid.



I've been around the block a few times as a software developer, and as such I've witnessed several movements in the industry come and go.

Each movement (object technology, patterns, component-based, model-driven, Agile, service-oriented, Lean, craftsmanship etc etc) attempts to address a genuine problem, usually. And at the core of every movement, there's a little kernel of almost universal truth that remains true long after the movement that built upon it fell out of favour with the software chattering classes.

The problem I perceive is that this kernel of useful insight tends to become enshrouded in a shitload of meaningless gobbledygook, old wives' tales and sales-speak, so that the majority of people jumping on to the bandwagon as the movement gains momentum often miss the underlying point completely (often referred to as "cargo cults").

Along with this kernel of useful insights there also tends to be a small kernel of software developers who actually get it. Object technology is not about Smalltalk. Patterns are not about frameworks. Components are not about COM or CORBA. Model-driven is not about Rational Rose. SOA is not about web services. Agile is not about Scrums. Responsibility-driven Design is not about mock objects. Craftsmanship is not about masters and apprentices or guilds or taking oaths.

In my experience, movements are a hugely inefficient medium for communicating useful insights. They are noisy and lossy.

My question is, do we need movements? When I flick through my textbooks from my physics degree course, they don't read as a series of cultural movements within the physics community. What is true is true. If we keep testing it and it keeps working, then the insights hold.

What is the problem in switching from a model of successive waves of movements, leaving a long trail of people who still don't get it, and possibly never will, to a model that focuses on testable, tested, proven insights into software development?

I feel for the kid who comes into this industry today - or on any other day. I went through the exact same thing before I started reading voraciously to find out what had come before. They may be deluged with wave after wave of meaningless noise, and every year, as more books get published about the latest, greatest shiny thing, it must get harder and harder to pick out the underlying signal from all the branding, posturing and reinvention of the wheel.

You see, it's like this. Two decades of practice and reading has inexorably led me to the understanding that very little of what I've learned that's genuinely important wasn't known about and written about before I was even born. And, just as it is with physics, once you peel away the layers of all these different kinds of particle, you discover underlying patterns that can be explained surprisingly succinctly.

For those who say "oh, well, software development's much more complicated than that", I call "bullshit". We've made it much more complicated than it needs to be. It's a lot like physics or chess (both set-theoretic constructs where simple rules can give rise to high complexity, just like code): sure, it's hard, but that's not the same as complicated. The end result of what we do as programmers can be massively complicated. But the underlying principles and disciplines are simple. Simple and hard.

We do not master complexity by playing up to it. By making what we do complicated. We master complexity by keeping it simple and mastering how software comes about at the most fundamental level.

Logic is simple, but algorithms can be complex. A Turing Machine is simple, but a multi-core processor is complex. Programming languages are simple, but a program can be highly complex. Programming principles are simple, but can give rise to highly complex endeavours.

Complexity theory teaches us that to shape complex systems, we must focus on the simple underlying rules that give rise to them. At its heart, software development has a surprisingly small core of fundamental principles that are easy to understand and hard to master, many of which your average programmer is blissfully unaware.

True evolution and progress in software development, as far as I can see, will require us to drop the brands, dump the fads and the fashions, and focus on what we know - as proven from several decades of experience and several trillion lines of code.




February 29, 2012

Learn TDD with Codemanship

The Maturity Model Maturity Model (MMMM)

There are various maturity models in this business of wares that we call "soft". Maturity models fulfil a vital role in providing reassurance to managers who have no intention of actually improving anything ever, as well as providing incomes for consultants who might otherwise starve or be forced to work in the sex industry or something or other.

But there's a glaring hole in the maturity model market, namely that there's no maturity model for maturity models.

Fear not, though, as tonight I'm going to fill that glaring hole with my own Maturity Model Maturity Model (MMMM).

MMMM has 5 levels of maturity:

Level 1 - Ad hoc: You have invented a method or process that teams are pretending to adopt, but have yet to provide any guidance on how effectively they are pretending to adopt it.

Level 2 - Certifying: You offer training and certification in your method or process that informs organisations that people are properly qualified to pretend to adopt it.

Level 3 - Certifier Certifying: You offer training and certification to people that lets organisations know that they are qualified to train and certify other people in the effective pretence of adopting your method or process.

Level 4 - Upselling: You offer certification of whole organisations as well as individual people in the effective pretence of adopting your method or process. You have a checklist of things organisations must appear to be doing (evidenced by them having a document somewhere that says that they do it) that creates the convincing impression that they have actually adopted your method or process.

Level 5 - Reproducing: You offer certification to organisations that tells other organisations that they are qualified in the certification of organisations in the effective pretence of adopting your method or process, as well as certifying organisations in the certification of certifying organisations.

Level 6 - Expanding: You offer certification in things you know nothing about that are tenuously connected with the adoption of your method or process (e.g., the Correct Use Of Office Furniture Maturity Model), and continue to add arbitrary levels of maturity up to and including "Level Infinity - Transcendent Beings Of Pure Energy & Thought". Level 6 of MMMM is, of course, an illustration of Level 6 of MMMM.





June 27, 2011

Learn TDD with Codemanship

Continuous Delivery is a Platform for Excellence, Not Excellence Itself

In case anyone was wondering, I tend to experience a sort of "hierarchy of needs" in software development. When I meet teams, I usually find out where they are on this ladder and ask them to climb up to the next rung.

It goes a little like this:

0. Are you using a version control system for your code? No? Okay, things really are bad. Sort this out first. You'd be surprised how much relies on that later. Without the ability to go back to previous versions of your code, everything you do will carry a much higher risk. This is your seatbelt.

1. Do you produce working software on a regular basis (e.g., weekly) that you can get customer feedback on? No? Okay, start here. Do small releases and short iterations.

2. How closely do you collaborate with the customer and the end users? If the answer is "infrequently", "not at all", or "oh, we pay a BA to do that", then I urge you to establish regular, direct collaboration with the customer - this means programmers talking to customers. Anything else is a fudge.

3. Do you agree acceptance tests with the customer so you know if you've delivered what they wanted? No? Okay, then you should start doing this. "Customer collaboration" can be massively more effective when we make things explicit. Teams need a testable definition of "done": it makes things much more focused and predictable and can save an enormous amount of time. Writing working code is a great way to figure out what the customer really needed, but it's a very expensive way to find out what they wanted.
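For illustration, here's what an agreed, executable acceptance test might look like - a sketch with an invented library-loans story:

# An executable definition of "done" for one user story, agreed with
# the customer before the work starts. The Library class is a toy
# implementation just to make the example self-contained.
import unittest

class Library:
    def __init__(self):
        self._on_loan = set()

    def borrow(self, title):
        if title in self._on_loan:
            raise ValueError("'%s' is already on loan" % title)
        self._on_loan.add(title)

class BorrowingABook(unittest.TestCase):
    def test_cannot_borrow_a_book_that_is_already_on_loan(self):
        # Given a book that is already on loan...
        library = Library()
        library.borrow("Refactoring")
        # ...when another member tries to borrow it, the loan is refused.
        with self.assertRaises(ValueError):
            library.borrow("Refactoring")

if __name__ == "__main__":
    unittest.main()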

4. Do you automate your tests? No? Well, the effect of test automation can be profound. I've watched teams go round and round in circles trying to stabilise their code for a release, wasting hundreds of thousands of pounds. The problem with manual testing (or little or no testing at all) is that you get very long feedback cycles between a programmer making a mistake and that mistake being discovered. It becomes very easy to break the code without finding out until weeks or even months later, and the cost of fixing those problems escalates dramatically the later they're discovered. Start automating your acceptance tests at the very least. The extra effort will more than pay for itself. I've never seen an instance when it didn't.

5. Do your programmers integrate their code frequently, and is there any kind of automated process for building and deploying the software? No? Software development has a sort of metabolism. Automated builds and continuous integration are like high-fibre diets. You'd be surprised how many symptoms of dysfunctional software development miraculously vanish when programmers start checking in every hour or three. It will also be the foundation for that Holy Grail of software development, which we'll come to later.

6. Do your programmers write the tests first, and do they only write code to pass failing tests? No? Okay, this is where it gets more serious. Adopting Test-driven Design is a non-trivial undertaking, but the benefits are becoming well understood. Teams that do TDD tend to produce much more reliable code. They tend to deliver more predictably, and, in many cases, a bit sooner and with less hassle. They also often produce code that's a bit simpler and cleaner. Most importantly, the feedback we get from developer tests (unit tests) is often the most useful of all. When an acceptance test fails, we have to debug an entire call stack to figure out what went wrong and pinpoint the bug. Well-written unit tests can significantly narrow it down. We also get feedback far sooner from small unit tests than we do from big end-to-end tests, because we write far less code to pass each test. Getting this feedback sooner has a big effect on our ability to safely change our code, and is a cornerstone in sustaining the pace of development long enough for us to learn valuable lessons from it.
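By way of illustration, here's one test-driven micro-cycle in miniature (the textbook leap-year example, not from any real project):

# One test-driven micro-cycle: each test below was written first,
# watched failing (red), then made to pass with the simplest code
# that could possibly work (green).
import unittest

def is_leap_year(year):
    # The simplest implementation that passes all the tests so far.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTests(unittest.TestCase):
    def test_years_divisible_by_4_are_leap_years(self):
        self.assertTrue(is_leap_year(1996))

    def test_century_years_are_not_leap_years(self):
        self.assertFalse(is_leap_year(1900))

    def test_years_divisible_by_400_are_leap_years(self):
        self.assertTrue(is_leap_year(2000))

if __name__ == "__main__":
    unittest.main()

Notice how a failure in any one of these small tests pinpoints exactly which rule was broken - much narrower feedback than debugging an end-to-end call stack.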

Now, before we continue, notice that I called it "Test-driven Design", and not "Test-driven Development". Test-driven Development is defined as "Test-driven Design + Refactoring", which brings us neatly on to...

7. Do you refactor your code to keep it clean? The thing about Agile that too many teams overlook is that being responsive to change is in no small way dependent on our ability to change the code. As code grows and evolves, there's a tendency for what we call "code smells" to creep in. A "code smell" is a design flaw in the code that indicates the onset of entropy - growing disorder in the code. Examples of code smells include things like long and complex methods, big classes or classes that do too many things, classes that depend too much on other classes, and so on. All these things have a tendency to make the code harder to change. By aggressively eliminating code smells, we can keep our code simple and malleable enough to allow us to keep on delivering those valuable changes.
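A small invented example of what that looks like - extracting well-named functions from a method that's starting to exhibit the long-method smell:

# Before: one function mixing three concerns - iteration, discount
# policy, and rounding - the beginnings of a long-method smell.
def invoice_total_before(lines):
    total = 0
    for quantity, unit_price in lines:
        total += quantity * unit_price
    if total > 1000:
        total = total * 0.95  # bulk discount buried in the arithmetic
    return round(total, 2)

# After: each concern extracted into a well-named function.
# Behaviour is unchanged.
def subtotal(lines):
    return sum(quantity * unit_price for quantity, unit_price in lines)

def apply_bulk_discount(amount, threshold=1000, discount=0.05):
    return amount * (1 - discount) if amount > threshold else amount

def invoice_total(lines):
    return round(apply_bulk_discount(subtotal(lines)), 2)

Crucially, it's the automated tests from the earlier rungs on this ladder that make changes like this safe to make aggressively.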

8. Do you collect hard data to help objectively measure how well you're doing 1-7? If you come to me and ask me to help you diet (though God knows why you would), the first thing I'm going to do is recommend you buy a set of bathroom scales and a tape measure. Too many teams rely on highly subjective personal feelings and instincts when assessing how well they do stuff. Conversely, some teams - a much smaller number - rely too heavily on metrics and reject their own experience and judgement when the numbers disagree with their perceptions. Strike a balance here: don't rely entirely on voodoo, but don't treat statistics as gospel either. Use the data to inform your judgement. At best, it will help you ask the right questions, which is a good start towards 9.
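Those bathroom scales needn't be sophisticated. Here's a sketch of the idea - record a few crude, objective numbers on every run so that trends, rather than feelings, tell the story (paths and metrics purely illustrative):

# scales.py - record a few crude code measurements on every run,
# appending to a CSV so trends over time are visible.
import ast
import csv
import datetime
from pathlib import Path

def measure(src_dir="src"):
    files = list(Path(src_dir).rglob("*.py"))
    functions = 0
    lines = 0
    for f in files:
        source = f.read_text()
        lines += len(source.splitlines())
        functions += sum(isinstance(node, ast.FunctionDef)
                         for node in ast.walk(ast.parse(source)))
    return len(files), functions, lines

if __name__ == "__main__":
    row = [datetime.date.today().isoformat(), *measure()]
    with open("metrics.csv", "a", newline="") as out:
        csv.writer(out).writerow(row)  # date, files, functions, total lines
    print("recorded:", row)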

9. Do you look at how you're doing - in particular at the quality of the end product - and ask yourselves "how could we do this better?" And do you actually follow up on those ideas for improving? Yes, yes, I know. Most Agile coaches would probably introduce retrospectives at stage 0 in their hierarchy of needs. I find, though, that until we have climbed a few rungs up that ladder, discussion is moot. Teams may well need them for clearing the air and for personal validation and ego-massaging and having a good old moan, but I've seen far too many teams abuse retrospectives by slagging everything off left, right and centre and then doing absolutely nothing about it afterwards. I find retrospectives far more productive when they're introduced to teams who are actually not doing too badly, actually, thanks very much. And I always temper 9 with 8 - too many retrospectives are guided by healing crystals and necromancy, and not enough benefit from the revealing light of empiricism. Joe may well think that Jim's code is crap, but a dig around with NDepend may reveal a different picture. You'd be amazed how many truly awful programmers genuinely believe it's everybody else's code that sucks.

10. Can your customer deploy the latest working version of the software at the click of a mouse whenever they choose to, and as often as they choose to? You see, when the code is always working, and when what's in source control is never more than maybe an hour or two away from what's on the programmers' desktops, and when making changes to the code is relatively straightforward, and when rolling back to previous versions - any previous version - is a safe and simple process, then deployment becomes a business decision. They're not waiting for you to debug it enough for it to be usable. They're not waiting for small changes that should have taken hours but for some reason seem to take weeks or months. They can ask for feature X in the morning, and if the team says X is ready at 5pm then they can be sure that it is indeed ready and, if they choose to, they can release feature X to the end users straight away. This is the Holy Grail - continuous, sustained delivery. Short cycle times with little or no latency. The ability to learn your way to the most valuable solutions, one lesson at a time. The ability to keep on learning and keep on evolving the solution indefinitely. To get to this rung on my ladder, you cannot skip 1-9. There's little point in even trying continuous delivery if you're not 99.99% confident that the software works and that it will be easy to change, or that it can be deployed and rolled back if necessary at the touch of a button.
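Mechanically, "deployment as a business decision" can be as simple as this sketch - every release is a version-control tag, and rolling back is just deploying an earlier one (the publish step is a placeholder for whatever your platform actually needs):

# deploy.py - deploy any tagged version on demand; rolling back is
# just deploying an earlier tag.
import subprocess
import sys

def sh(*command):
    subprocess.run(command, check=True)

def deploy(version):
    sh("git", "checkout", version)                    # any release tag, any time
    sh(sys.executable, "-m", "unittest", "discover")  # never ship a broken build
    # ... publish the built artefacts to the target environment here ...
    print("deployed", version)

if __name__ == "__main__":
    deploy(sys.argv[1] if len(sys.argv) > 1 else "HEAD")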

Now at this point you're probably wondering what happened to user experience, scalability, security, or what about safety-critical systems, or what about blah blah blah etc etc. I do not deny that these things can be very important. But I've learned from experience that these are things that come after 1-10 in my hierarchy of needs for programmers. That's not to say they can't be more important to customers and end users - indeed, user experience is often number 1 on their list. But to achieve a great user experience, software that works and that can evolve is essential, since it's user feedback that will help us find the optimal user experience.

To put it another way, on my list, 10 is actually still at the bottom of the ladder. Continuous delivery and ongoing optimisation of our working practices is a platform for true excellence, not excellence itself. 10 is where your journey starts. Everything before that is just packing and booking your flights.