November 25, 2014

...Learn TDD with Codemanship

Continuous Inspection II - Planning & Executing CInsp

In this second blog post about Continuous Inspection (CInsp, for short), I want to look at how we might manage the CInsp process to get the most value from it.

While some development teams are now using CInsp tools to analyse their code and get early warnings about code quality problems when they're easier and cheaper to fix, it's fair to say that this area of the development discipline has to date evaded the principles we apply to other kinds of requirements.

Typically, as a kind of work, CInsp is ad hoc, unplanned and untracked, and most teams who do it have only a vague idea of what it costs them and what benefits they're reaping from it.

CInsp is rarely prioritised, leaving the field wide open to waste a lot of time and effort on activities that add little or no value.

Non-functional requirements obey the same laws as functional ones, which is why we need to attack them using the same principles and techniques.

In this post, I want to examine how we plan and execute CInsp on projects starting from scratch. (In a future post, I'll talk about applying CInsp to existing code bases with a build-up of code quality issues.)

Continuous Inspection Requirements

There are an infinite number of properties we could look for in our code, but only some of them are worth finding; most aren't. Rather than waste our time arbitrarily searching our code for "stuff", it's important we have a clear idea of what we're looking for and why.

Extreme Programming, for example, has a perfectly usable mechanism for describing the things we want to inspect for, and the benefits of catching those kinds of code quality problems early.

A Code Quality Story is a non-functional user story that briefly summarises a code quality "bug" we wish to avoid and the pay-off we might expect if we can avoid introducing it into our code.
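For our Feature Envy story, the card might read something like this (my wording here is purely illustrative - the exact phrasing is whatever the team agrees on):

    Code Quality Story: Feature Envy

    Methods should be more interested in the features of their own class
    than in the features of other classes.

    Pay-off: behaviour stays close to the data it uses, so changes tend
    to be contained within a single class, keeping the cost of change down.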



Note first of all that I've chosen here to use a blue index card. This might be in a system where we write functional user stories on green cards, report bugs on red cards, and record other kinds of work - "miscellaneous tasks", like setting up the build and implementing code quality gates - on blue cards.

Why do this? Well, I've found it very useful to know roughly how a team's time is split between delivering working features, fixing bugs (ideally, zero time), and "shaving yaks" when the yaks being shaved are sufficiently large and not part of the work of delivering specific features.

The importance of the effort split becomes apparent as time goes by and the software evolves. A healthy project is one where the proportion of effort devoted to delivering working features remains relatively constant. What typically happens on teams who set out at an unsustainable pace is that they begin development with their time devoted mostly to the green cards, and after a few months most of their time is spent tackling red cards and making a lot less progress on new features. This is a good indicator of the rising cost of change - the very thing we're seeking to avoid so that we can sustain the pace of development and deliver value for longer. This information will help us better judge how well spent the time devoted to things like CInsp is.

So we have a placeholder for our code quality requirement in the form of a blue index card. What next?

Planning Continuous Inspection

This is where I, and a lot of teams, have gone wrong in the past. What we should never, ever do is allow the customer to choose when and whether we tackle non-functional requirements. And by "customer" I also mean proxy customers like business analysts and project managers. The overwhelmingly common experience of development teams is that purely technical issues, like code quality, get sidelined by non-technical stakeholders.

We must not give them the chance to drop our Feature Envy story in favour of a story about, say, sorting columns in an HTML table if we strongly believe, as professionals, that avoiding Feature Envy is important. If, as the evidence suggests, care taken over code quality helps to maintain productivity and deliver greater value over time, then asking customers to choose between work that enhances quality and work that directly delivers working features presents them with a confusing false dichotomy.

The analogy I use is to pretend we're running a restaurant using the planning practices of Extreme Programming.

Every job that needs doing gets written on a card, and placed into a backlog of outstanding work. There will be user stories like "Take table 3's order" and "Serve french fries and beer to table 7" and "Get the bill for table 12". These are stories about work that will make the restaurant money.

There will also be stories like "Wash the dishes in the sink" and "Clean out pizza oven" and "Repaint sign over door". These are about tasks that cost money, but don't directly bring in revenue by themselves.

If we allowed our restaurant's shareholders - who have never worked in a restaurant themselves, but who have a stake in it as a business - to prioritise which stories get done at the expense of others, in a world where backlogs always outweigh the available time and resources, then there's a very real danger that the kitchen will rarely get cleaned, the sign above the door will fade until nobody can see it, and we'll run out of clean plates halfway through service.

The temptation for teams who are driven solely by the priorities of non-technical stakeholders is to only tackle non-functional issues like code quality when a crisis emerges that blocks progress on functional requirements: we don't wash up until we run out of plates, we don't clean the kitchen until the inspector shuts us down, and we don't repaint the sign until the customers have stopped coming in.

One thing we've learned about writing software is that it's cheaper and easier to tackle problems proactively and catch them earlier. Sadly, too many teams are left lurching from one urgent crisis to the next, never getting the chance to get ahead of the issues.

For this reason, I strongly advise against involving non-technical stakeholders in planning CInsp. (The same goes for other purely technical work.)

Now put yourself in the diner's shoes: you pick up the menu, and every dish lists all of the tasks restaurant staff have to do in order to deliver it. Let's say we charge £11 for fish and chips, itemised alongside cleaning the grill, mopping the floors, cashing up that evening, doing the accounts, getting up early to take delivery of fresh fish, and so on.

Two questions:

1. If we hadn't told them, would the diner even care?

2. If we make it the diner's business, are we inviting them to negotiate the price of the fish and chips down by itemising what goes into running the restaurant? ("I'll have the fish & chips, but I'm not paying for your trainee chef's college course" etc.)

The world is full of work that needs doing, but nobody thinks they should pay for. In order for the world to keep turning, for fish & chips to appear on our dining tables, this work has to get done one way or another, and it has to be paid for.

The way a restaurant squares this circle is to build it into the cost of the meal and to not present diners with a choice. Their choice is simple: don't like the price, don't order the dish.

Likewise in software development, there's a universe of tasks that need doing that do not directly end with a working feature being delivered to the customer's table. We must build this work into the price ("feature X will take 3 days to deliver") and avoid presenting the customer with bewildering choices that, in reality, aren't choices at all.

So planning Continuous Inspection is something that happens within the team among technical stakeholders who understand the issues and will be doing the work. This is good advice for any non-functional requirements, be they about build automation, internal training or hiring developers. This is just "stuff that has to happen" so we can deliver working software reliably, economically and sustainably.

The key thing, to avoid teams disappearing up their own backsides with the technical stuff, is to make sure we're all absolutely clear about why we're doing it. Why are we automating the build? Why are we writing a tool that generates code? Why are we sending half the team to the Software Craftsmanship conference? (Some companies send entire teams.) And the answer should always be something of value to the customer, even if that value might not be realised for months or years.

In practice, we have planning meetings - especially in the early stages of a project - that are for technical stakeholders only. Lock the doors. Close the blinds. Don't tell the boss. (I have literally experienced running around offices looking for rooms where the developers can have these discussions in private, chased by the project manager who insists on sitting in. "Don't mind me. I won't interfere." Two seconds later...)

Such meetings give teams a chance to explicitly discuss code quality and to thrash out what they mean by "good code" and "bad code" and establish a shared set of priorities over code quality. It's far better to have these meetings - and all the inevitable disagreements - at the start, when we can take steps to prevent issues, than to have them later when we can only ask "what went wrong?"

Executing Continuous Inspection

On new software, the effort in Continuous Inspection tends to be front-loaded, and with good reason.

As I've mentioned a few times already, it tends to be far cheaper to tackle code quality "bugs" early - the earlier the better. This means that adding new code quality requirements later in development tends to catch problems when they're much more expensive to fix, so it makes sense to set the quality bar as high as we can at the start.

There's good news and there's bad news. First, the bad news: on a new project, from a standing start, it's going to take considerable effort to get automated code inspections in place. It will vary greatly, depending on the technology stack, availability of tools, experience levels in the team, and so on. But it's not going to take an afternoon. So you may be faced with having to hide a big chunk of effort from non-technical stakeholders if you attempt to start development - from their perspective, the point at which they're actively involved - at the same time as putting CInsp in place. (The same goes for builds, CI, and a raft of other stuff that we need to get up and running early on.)

Another very strong recommendation from me: have at least one iteration before you involve the customer. Get the development engine running smoothly before you wind down the window and shout "Where to, guv'nor?" They may be less than impressed to discover that you just need to build the engine before you can set off. Delighting customers is as much about expectations as it is about actual delivery.

Going back to the restaurant analogy, consider why restaurants distinguish between "service" and "preparation". Service may start at 6pm, but the chefs have probably been there since 9am getting things ready for that. If they didn't, then those first orders might take hours to reach the table. Too many development teams attempt the equivalent of starting service as the ingredients are being delivered to the kitchen. We need to do prep, too, before we can start taking orders.

Now, for the good news: the kinds of code quality requirements we might have on one, say, JEE project are likely to be similar on another JEE project. CInsp practitioners tend to find that they can get a lot of reuse out of code quality gates they've already developed for previous projects. So, over months and years, the overall cost of getting CInsp up and running tends to decrease quite significantly. If your technology stack remains fairly stable over the years, you may well find that getting things up and running can eventually become an almost push-button process. It takes a lot of investment to get there, though.

Code Quality stories work the same way as user stories in their execution. We plan what stories we're going to tackle in the current timebox in the same way. We tackle them in pairs, if possible. We treat them purely as placeholders to have a conversation with the person asking for each story. And, most importantly, we agree...

Continuous Inspection Acceptance Tests

Going back to our Feature Envy code quality story, what does the developer who wrote that story mean by "Feature Envy"?

Here's the definition from Martin Fowler's Refactoring book:

"A classic [code] smell is a method that seems more interested in a class other than the one it is in. The most common focus of the envy is the data."

It's all a bit handwavy, as is usually the case with software design wisdom. A human being using their intelligence, experience and judgement might be able to read this, look at some code and point to things that seem to them to fit the description.

Programming a computer to do it, on the other hand...

This is where we can inhabit our customer's world for a little while. When we ask our customer to precisely describe a business rule, we're putting them on the spot every bit as much as a computable definition of Feature Envy might put me and you on the spot. In cold, hard, computable terms: we don't quite know what we mean.

When the business problem we're solving is about, say, mortgages or video rentals or friend requests, we ask the customer for examples that illustrate the rule. Using examples, we can establish a shared vocabulary - a language for expressing the rule - explore the boundaries, and pin down a precise computable understanding of it (if there is one.)

We shouldn't be at all surprised that this technique also works very well for rules about our code. Ask the owner of a code quality story to track down some classic examples of code that breaks the rule, as well as code that doesn't (even if it looks at first glance like it might).
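To make that concrete, here's the kind of pair of examples the story's owner might collect - the class and method names are invented purely for illustration:

    // Supporting class used by both examples below.
    class Customer {
        private double discountRate;
        private boolean loyaltyMember;

        public double getDiscountRate() { return discountRate; }
        public boolean isLoyaltyMember() { return loyaltyMember; }

        // Behaviour living alongside the data it uses.
        public double discountOn(double amount) {
            double rate = loyaltyMember ? discountRate + 0.05 : discountRate;
            return amount * rate;
        }
    }

    // A "bad" example: discountedTotal() makes more use of Customer's
    // features than of its own, which fits Fowler's description above.
    class EnviousOrder {
        private double total;
        private Customer customer;

        public double discountedTotal() {
            double rate = customer.getDiscountRate();
            if (customer.isLoyaltyMember()) {
                rate += 0.05;
            }
            return total - (total * rate);
        }
    }

    // A "good" example: the calculation has moved to the class that owns
    // the data, so this method mostly uses its own features.
    class Order {
        private double total;
        private Customer customer;

        public double discountedTotal() {
            return total - customer.discountOn(total);
        }
    }

Examples like these become our shared, concrete vocabulary for the rule: the envious version should trip the alarm, and the well-placed version shouldn't.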

This is where the real skill in CInsp comes into play. To win at Continuous Inspection, development teams need to be skilled at reasoning about code. This is not a bad skill for a developer to have generally. It helps us communicate better, it helps us visualise better, and it makes us better at design, at refactoring, and at writing tools that work with code. Code is our domain model - the business objects of programming.

Using our code reasoning skills, applied to examples that will form the basis of acceptance tests, we can drive out the design of the simplest tool possible that will sound the alarm when the "bad" examples are considered, while silently allowing the "good" examples to pass through the quality gate.

As with functional user stories, we're not done until we have a working automated quality gate that satisfies our acceptance tests and can be applied to new code straight away.
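As a sketch of what those acceptance tests might look like - assuming JUnit, and a hypothetical FeatureEnvyRule class whose hasFeatureEnvy() method is invented here purely for illustration (the real interface will depend on the analysis tools you build on):

    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    // Acceptance tests for a hypothetical Feature Envy quality gate.
    // FeatureEnvyRule doesn't exist yet - tests like these are what
    // drive out its design and implementation.
    public class FeatureEnvyRuleTests {

        private final FeatureEnvyRule rule = new FeatureEnvyRule();

        @Test
        public void soundsTheAlarmForTheEnviousExample() {
            // The "bad" example from earlier should be flagged.
            assertTrue(rule.hasFeatureEnvy(EnviousOrder.class, "discountedTotal"));
        }

        @Test
        public void staysSilentForTheWellPlacedExample() {
            // The "good" example should pass through the gate quietly.
            assertFalse(rule.hasFeatureEnvy(Order.class, "discountedTotal"));
        }
    }

Note that tests like these pin the gate's behaviour down against our agreed examples, not against any particular analysis library, so whatever sits behind FeatureEnvyRule is free to change.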

In the next blog post, we'll be rolling up our sleeves with an example Continuous Inspection quality gate, implementing it using a variety of tools to demonstrate that there's often more than one way to skin the code quality cat.

Posted on November 25, 2014