May 26, 2015

Ditch The Backlog and Start Iterating!

Goals.

Yes, those.

The old ways have a habit of sneaking back in through the back door. And so it is with Agile Software Development that, despite all our protestations about being iterative and open to feedback and change, The Big Plan™ found its way back cunningly disguised as the backlog.

The reality is that most Agile teams are not iterating their designs in rapid feedback cycles, but instead are incrementally working their way through a plan for a solution that was cooked up by what we used to call "requirements analysts" - generally speaking, people who talk to the customer to find out what they want and draw up a specification - right at the start.

The backlog on many teams doesn't change much. And this is because the goal of each small frequent delivery is not to try out the software and see how it can be made better in the next delivery, but to test each delivery to check that it conforms to The Big Plan™.

The box-ticking exercise of user acceptance testing usually just asks "is that what we agreed?" The software isn't tested for real, by real users, working on real problems to ask "is that what we really need?"

And so it is that many Agile teams still get that skip-ful of feedback when the "iterated" solution finds its way into the real world. To all intents and purposes, that's a Big Bang release. Y'know, the kind we thought we'd stopped doing.

Better to get that kind of feedback throughout. Better also to shift focus from The Big Plan™ to actual end user goals, not a list of system features that someone believed would meet those goals (if they ever thought to ask what those goals were).

Imagine we're working for an airline. We turn up for work and are presented with a backlog of feature requests for an online check-in facility. Dutifully, we work our way through this backlog, agreeing acceptance tests with our customer and ticking them off one by one. Eventually, the system goes live. At which point we discover that, because all of the flights we operate are long-haul, and therefore almost all our passengers need to check in baggage for the hold, we've had almost zero impact on the time it takes to check in.

What we could have been doing, instead of working our way through The Big Plan™, is working our way towards reducing check-in times. If that was the original goal, then that's what we should have been iterating towards.

This is actually a founding principle of Agile, before it was called "Agile". Tom Gilb's ideas about evolutionary project management, dating back to the late 1980s, clearly highlight the need to focus on goals, not plans. Each iteration needs to bring us closer to the goal, and we need to test and measure progress not by software delivered against plan - I mean, damn, there was a major clue right there! - but by progress towards reaching our goals.

Instead of putting all our faith in the online check-in solution that was presented to us, we could have been focusing on those baggage check-in queues and streamlining them. The solution might not even involve software. In which case we swallow our pride and acknowledge we don't have the answer, instead of wasting a big chunk of time and money pretending we do.

This requires a different relationship with the customer, where developers like us are just one part of a cross-discipline team tasked with solving problems that may or may not involve software. We should be incentivised to write software that really achieves something, and to be prepared to change direction when we learn that we're on the wrong track.

The first step in that journey is to ditch the backlog. Put a match to The Big Plan™.

Instead of plans, have goals; a handful of headline requirements that really are requirements - to reduce check-in times, to detect and treat heart disease sooner, to save 1p on the cost of manufacturing a widget, to get 20% more children in Africa through school, or whatever the goal is that someone thinks is valuable enough to justify the kind of money software costs to create and maintain. We ain't done until the goal's been achieved at least to some extent, or until we've abandoned the goal.

That requires developers to play an integral part in a wider - and probably longer-term - game. We are not actors who turn up and say the lines someone else wrote on a set someone else built. We write our lines and build our sets and then act them out to an audience whose feedback should determine what happens next in the story.




April 8, 2015

Reality-driven Development - Creating Software For Real Users That Solves Real Problems In the Real World

It's a known fact that software development practices cannot be adopted until they have a pithy name to identify the brand.

Hence it is that, even though people routinely acknowledge that it would be a good idea for development projects to connect with reality, very few actually do because there's no brand name for connecting your development efforts with reality.

Until now...

Reality-driven Development is a set of principles and practices aimed at connecting development teams to the underlying reality of their efforts so that they can create software that works in the real world.

RDD doesn't replace any of your existing practices. In some cases, it can short-circuit them, though.

Take requirements analysis, for example: the RDD approach compels us to immerse ourselves in the problem in a way traditional approaches just can't.

Instead of sitting in meeting rooms talking about the problem with customers, we go out to where the problem exists and see and experience it for ourselves. If we're tasked with creating a system for call centre operatives to use, we spend time in the call centre, we observe what call centre workers do - pertinent to the context of the system - and most importantly, we have a go at doing what the call centre workers do.

It never ceases to amaze me how profound an effect this can have on the collaboration between developers and their customers. Months of talking can be compressed into a day or two of real-world experience, with all that tacit knowledge communicated in the only way that tacit knowledge can be. Requirements discussions take on a whole different flavour when both parties have a practical, first-hand appreciation of what they're talking about.

Put the shoe on the other foot (and that's really what this practice is designed to do): imagine your customer is tasked with designing software development tools, based entirely on an understanding of how we develop software that they've built purely from our description of the problem. How confident are you that we'd communicate it effectively? How confident are you that their solutions would work on real software projects? You would expect someone designing dev tools to have been a developer at some point. Right? So what makes us think someone who's never worked in a call centre will be successful at writing call centre software? (And if you really want to see some pissed off end users, spend an hour in a call centre.)

So, that's the first practice in Reality-driven Development: Real-world Immersion.

We still do the other stuff - though we may do it faster and more effectively. We still gather user stories as placeholders for planning and executing our work. We still agree executable acceptance tests. We still present it to the customer when we want feedback. We still iterate our designs. But all of these activities are now underpinned with a much more solid and practical shared understanding of what it is we're actually talking about. If you knew just how much of a difference this can make, it would be the default practice everywhere.

Just exploring the problem space in a practical, first-hand way can bridge the communication gap in ways that none of our existing practices can. But problem spaces have to be bounded, because the real world is effectively infinite.

The second key practice in Reality-driven Development is to set ourselves meaningful Real-world Goals: that is, goals that are defined in and tested in the real world, outside of the software we build.

Observe a problem in the real world. For example, in our real-world call centre, we observe that operatives are effectively chained to their desks, struggling to take regular comfort breaks, and struggling to get away at the end of a shift. We set ourselves the goal of every call centre worker getting at least one 15-minute break every 2 hours, and working a maximum of 15 minutes' unplanned overtime at the end of a day. This goal has nothing to do with software. We may decide to build a feature in the software they use that manages breaks and working hours, and diverts calls that are coming in just before their break is due. It would be the software equivalent of when the cashier at the supermarket checkout puts up one of those little signs to dissuade shoppers from joining their queue when they're about to knock off.

Real-world Goals tend to have a different flavour to management-imposed goals. This is to be expected. If you watch any of those "Back to the floor" type TV shows, where bosses pose as front-line workers in their own businesses, it's very often the case that the boss doesn't know how things really work, and what the real operational problems are. This raises natural cultural barriers and issues of trust. Management must trust their staff to drive development and determine how much of the IT budget gets spent. This is probably why almost no organisation does it this way. But the fact remains that, if you want to address real-world problems, you have to take your cues from reality.

Important, too, is the need to strike a balance in your Real-world Goals. While we've long had practices for discovering and defining business goals for our software, they tend to suffer from a rather naïve 1-dimensional approach. Most analysts seek out financial goals for software and systems - to cut costs, or increase sales, and so on - without looking beyond that to the wider effect the software can have. A classic example is music streaming: while businesses like Spotify make a great value proposition for listeners, and for major labels and artists with big back catalogues, arguably they've completely overlooked 99.9% of small and up-and-coming artists, as well as writers, producers and other key stakeholders. A supermarket has to factor in the needs of suppliers, or their suppliers go out of business. Spotify has failed to consider the needs of the majority of musicians, choosing to focus on one part of the equation at the expense of the other. This is not a sustainable model. As with all complex systems, dynamic equilibrium is usually the only viable long-term solution. Fail to take into account key variables, and the system tips over. In the real world, few problems are so simple as to only require us to consider one set of stakeholders.

In our call centre example, we must ask ourselves about the effect of our "guaranteed break" feature on the business itself, on its end customers, and anyone else who might be affected by it. Maybe workers get their breaks, but not without dropping calls, or not without a drop in sales. All of these perspectives need to be looked at and addressed, even if by addressing it we end up knowingly impacting people in a negative way. Perhaps we can find some other way to compensate them. But at least we're aware.

The third leg of the RDD table - the one that gives it the necessary balance - is Real-world Testing.

Software testing has traditionally been a standalone affair. It's vanishingly rare to see software tested in context. Typically, we test it to see if it conforms to the specification. We might deploy it into a dedicated testing environment, but that environment usually bears little resemblance to the real-world situations in which the software will be used. For that, we release the software into production and cross our fingers. This, as we all know, pisses users off no end, and rapidly eats away at the goodwill we rely on to work together.

Software development does have mechanisms that go back decades for testing in the real world. Alpha and Beta testing, for example, are pretty much exactly that. The problem with that kind of small, controlled release testing is that it usually doesn't have clear goals, and lacks focus as a result. All we're really doing is throwing the software out there to some early adopters and saying "here, waddaya think?" It's missing a key ingredient - real-world testing requires real-world tests.

Going back to our Real-world Goals, in a totally test-driven approach, where every requirement or goal is defined with concrete examples that can become executable tests, we're better off deploying new versions of the software into a real-world(-ish) testing environment that we can control completely, where we can simulate real-world test scenarios in a repeatable and risk-free fashion, as often as we like.

A call centre scenario like "Janet hasn't taken a break for 1 hour and 57 minutes, there are 3 customers waiting in the queue, they should all be diverted to other operators so Janet can take a 15-minute break. None of the calls should be dropped" can be simulated in what we call a Model Office - a recreation of all or part of the call centre, into which multiple systems under development may be deployed for testing and other purposes.
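
To give a flavour of how that might look as an automated test - with the caveat that ModelOffice, Operator and friends are hypothetical stand-ins for whatever simulation API the team builds - here's a sketch in JUnit:

    // Sketch only: drives the hypothetical Model Office simulation through
    // the "Janet is due a break" scenario. None of these classes exist yet.
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class GuaranteedBreakTest {

        @Test
        public void divertsWaitingCallsSoJanetGetsHerBreakWithoutDroppingAny() {
            ModelOffice callCentre = new ModelOffice();
            Operator janet = callCentre.addOperator("Janet");
            Operator sanjay = callCentre.addOperator("Sanjay");

            // Janet is 3 minutes away from being owed her 15-minute break
            callCentre.recordContinuousWorkingMinutes(janet, 117);
            callCentre.queueIncomingCalls(3);

            // The 2-hour mark passes
            callCentre.advanceTimeByMinutes(3);

            assertTrue(janet.isOnBreak());
            assertEquals(0, callCentre.droppedCallCount());
            assertEquals(3, sanjay.handledCallCount());
        }
    }

The point isn't this exact API; it's that the scenario is precise enough to automate and re-run against the Model Office as often as we like.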

Our call centre model office simulates the real environment faithfully enough to get meaningful feedback from trying out software in it, and should allow us to trigger scenarios like this over and over again. In particular, model offices enable us to exercise the software in rare edge cases and under unusually high peak loads that Alpha and Beta testing are less likely to throw up. (e.g., what happens if everyone is due a break within the next 5 minutes?)

Unless you're working on flight systems for fighter aircraft or control systems for nuclear power stations, it doesn't cost much to set up a testing environment like this, and the feedback you can get is worth far more.

The final leg of the RDD table is Real-world Iterating.

So we immerse ourselves in the problem, find and agree real-world goals and test our solutions in a controlled simulation of the real world. None of this, even taken together with existing practices like ATDD and Real Options, guarantees that we'll solve the problem - certainly not first time.

Iterating is, in practice, the core requirements discipline of Agile Software Development. But too many Agile teams iterate blindly, making the mistake of believing that the requirements they've been given are the real goals of the software. If they weren't elucidated from a real understanding of the real problem in the real world, then they very probably aren't the real goals. More likely, what teams are iterating towards is a specification for a solution to a problem they don't understand.

The Agile Manifesto asks us to value working software over comprehensive documentation. Reality-driven Development widens the context of "working software" to mean "software that testably solves the user's problem", as observed in the real world. And we iterate towards that.

Hence, we ask not "does the guaranteed break feature work as agreed?", but "do operatives get their guaranteed breaks, without dropping sales calls?" We're not done until they do.

This is not to say that we don't agree executable feature acceptance tests. Whether or not the software behaves as we agreed is the quality gate we use to decide if it's worth deploying into the Model Office at all. The software must jump the "it passes all our functional tests" gate before we try it on the "but will it really work, in the real world?" gate. Model Office testing is more complex and more expensive, and ties up our customers. Don't do it until you're confident you've got something worth testing in it.

And finally, Real-world Testing wouldn't be complete unless we actually, really tested the software in the real real world. At the point of actual deployment into a production environment, we can have reasonably high confidence that what we're putting in front of end users is going to work. But that confidence must not spill over into arrogance. There may well be details we overlooked. There always are. So we must closely observe the real software in real use by real people in the real world, to see what lessons we can learn.

So there you have it: Reality-driven Development

1. Real-world Immersion
2. Real-world Goals
3. Real-world Testing
4. Real-world Iterating


...or "IGTI", for short.


March 19, 2015

Requirements 2.0 - Make It Real

This is the second post in a series to float radical ideas for changing the way we handle requirements in software development. The previous post was Ban Feature Requests.

In my previous post, I put forward the idea that we should ban customers from making feature requests so that we don't run the risk of choosing a solution too early. For example, in a user story, we'd get rid of most of the text, just leaving the "So that..." clause to describe why the user wants the software changed.

Another area where there's great risk of pinning our colours to a specific solution is in the collaboration between a customer and a UI/UX designer. The issue here is that things like wireframes and UI mock-ups tend to be the first concrete discussion points we put in front of customers. Up to this point, it's all very handwavy and vague. But seeing a web page with a text box and a list and some buttons on it can make it real enough to have a more meaningful discussion about the problem we're trying to solve.

This would be fine if we didn't get so attached to those designs. But, let's face it, we do. We get very attached to them, and then the goal of development transforms into "what must we do in order to realise that design?", when in reality, we're still exploring the problem space.

So, we need some way to make our ideas concrete, so we can have meaningful discussions about the problem, without presenting the customer with a design for a solution.

Here's what I do, when the team and the customer are willing to play ball:



I make it real by... well... making it real. I call this Tactile Modeling. (No doubt by tomorrow afternoon, some go-getting young hipster will have renamed it "Illustrating Requirements Using Things You Can See and Hold In Your Hand-driven Development". But for now, it's Tactile Modeling.)

Now, I'm old enough to remember when we were all so young and stupid we really thought that visual models in notations like UML would serve this purpose. Yeah, I know. It's like watching old movies of women smoking next to their babies. Boy, were we dumb!

But the idea of being able to concretely explore examples and business scenarios in a practical way can carry real power to break down the communication barriers, far more effectively than our current go-to techniques like agreeing acceptance tests in some airless meeting room with a customer who is pulling domain facts out of thin air half the time.

So, if we're talking about a system for managing a video library, let's create a video library and explore real-world systems for managing it. Let's get some videos. Let's get some shelves to put them on. Let's get some boxes and folders and sticky-tape and elastic bands and build a video library management system out of real actual atoms and stuff, and explore how it works in different scenarios.

And instead of drawing boxes and arrows and wireframes and wizardry up on the whiteboard or in a modelling tool (like PowerPoint, for example), let's whip out our camera phones and take snaps at key steps and take videos to show how a process works and stick them in the Wiki for everyone to see.

And let's not sit in meeting rooms going "blah blah blah must be scalable etc etc", let's have our discussions inside this environment we've created, so we're surrounded by the problem domain, and at any point requiring clarification, the clarifier can jump up and show us what they mean, so that we can all see it (using our eyes).

As our understanding evolves, and we start to create software to be used in some of these scenarios to help the end users in their work, we can deploy that software into this fake video library and gradually swap out the belt-and-braces information systems with slick software, all the while testing to see that we're achieving our goals.

Now, I know what some of you are thinking: "but our problem domain is all abstract concepts like 'currency', 'option' and 'ennui'. " Well, here's the good news. Movies are an abstract concept. Sure, they come in boxes sometimes, or on cassettes. But that's just the physical representation - the medium - through which that concept is expressed. It's the same movie whether we download it as a file, buy it on a disc or get someone to paint it as a mural. That's what separates us from the beasts of the jungle. Well, that and the electrified fence around our compound. But mostly, it's our ability to express abstract concepts like money, employment contract and stock portfolio that we've built our entire civilisation on. Money can be represented by little pieces of paper with numbers written on them. (A radical idea, I know, but worth a try sometime.) And so on.

There is always a way to make it practical: something we can pick up and look at and manipulate and move to model the information in the system, be it information about hospital patients, or about chemical components in self-replicating molecules, or about single adults who are looking for love.

Of course, there's more to it than that. But you get the gist, I'm sure. And we'll look at some of that in the next post, no doubt. In particular, the idea of a model office: a simulated testing and learning environment into which we should be deploying our software to see how it fares in something approaching the real world.

Wanna have a meaningful conversation about requirements? Then make it real.


Requirements 2.0 - Ban Feature Requests

This is the first post in a series that challenges the received wisdom about how we handle requirements in software development.

A lot of the problems in software development start with someone proposing a solution too early.

User stories in Agile Software Development are symptomatic of this: customers request features they want the software to have, qualifying them with a "so that..." clause that justifies the feature with a benefit.

Some pundits recommend turning the story format around, so the benefit comes first, a bit like writing tests by starting with the assertion and working backwards.

I'm going to suggest something more radical: I believe we should ban feature requests altogether.

My format for a user story would only have the "so that..." clause. Any mention of how that would be achieved in the design of the software would be excluded. The development team would figure out the best way to achieve that in the design, and working software would be iterated until the user's goal has been satisfied.
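
For illustration (the wording here is mine, purely by way of example): where a conventional story card might read "As a passenger, I want to check in online so that I spend less time queuing at the airport", my card would say only:

    So that passengers spend less time queuing at the airport.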

It's increasingly my belief that the whole requirements discipline needs to take a step back from describing solutions and their desired features or properties, to painting a vivid picture of what the user's world will look like with the software in it, with a blank space where the software actually goes.

Imagine trying to define a monster in a horror movie entirely through reaction shots. We see the fear, we hear the screams, but we never actually see the monster. That's what requirements specs, in whatever form they're captured, should be like. All reaction shots and no monster.

Why?

Well, three reasons:

1. All too often, we find ourselves building a solution to a problem that's never been clearly articulated. Iterating designs only really works when we iterate towards clear goals. Taking away the ability to propose solutions (features) early forces customers (and developers) to explicitly start by thinking about the problem they're trying to solve. We need to turn our thinking around.

2. The moment someone says "I want a mobile app that..." or "When I click on the user's avatar..." or even, in some essential way, "When I submit the mortgage application..." they are constraining the solution space unnecessarily to specific technologies, workflows and interaction designs. Keeping the solution space as wide open as possible gives us more choices about how to solve the user's problem, and therefore a greater chance of solving it in the time we have available. On many occasions when my team's been up against it time-wise, banging our heads against a particular technical brick wall, the breakthrough came when we took a step back, asked "What are we actually trying to achieve here?", and chose an easier route to giving the users what they really needed.

3. End users generally aren't software designers. For the exact same reason that it's not such a great idea to specify a custom car for me by asking "What features do you want?" or for my doctor to ask me "What drugs would you like?", it's probably best if we don't let users design the software. It's not their bag, really. They understand the problem. We do the design. We play to our strengths.

So there you have it. Ban feature requests.




February 2, 2015

Stepping Back to See The Bigger Picture

Something we tend to be bad at in the old Agile Software Development lark is the Big Picture problem.

I see time and again teams up to their necks in small details - individual user stories, specific test cases, individual components and classes - and rarely taking a step back to look at the problem as a whole.

The end result can often be a piecemeal solution that grows one detail at a time to reveal a patchwork quilt of a design, in the worst way.

While each individual piece may be perfectly rational in its design, when we pull back to get the bird's eye view, we realise that - as a whole - what we've created doesn't make much sense.

In other creative disciplines, a more mature approach is taken. Painters, for example, will often sketch out the basic composition before getting out their brushes. Film producers will go through various stages of "pre-visualisation" before any cameras (real or virtual) start rolling. Composers and music producers will rough out the structure of a song before worrying about how the kick drum should be EQ'd.

In all of these disciplines, the devil is just as much in the detail as in software development (one vocal slightly off-pitch can ruin a whole arrangement, one special effect that doesn't quite come off can ruin a movie, one eye slightly larger than the other can destroy the effect of a portrait in oil - well, unless you're Picasso, of course; and there's another way in which we can be like painters, claiming that "it's meant to be like that" when users report a bug.)

Perhaps in an overreaction to the Big Design Up-Front excesses of the 1990's, these days teams seem to skip the part where we establish an overall vision. Like a painter's initial rough pencil sketch, or a composer's basic song idea mapped out on an acoustic guitar, we really do need some rough idea of what the thing we're creating as a whole might look like. And, just as with pencil sketches and song ideas recorded on Dictaphones, we can allow it to change based on the feedback we get as we flesh it out, but still maintain an overall high-level vision. They don't take long to establish, and I would argue that if we can't sketch out our vision, then maybe - just maybe - it's because we don't have one yet.

Another thing they do that I find we usually don't (anywhere near enough) is continually step back and look at the thing as a whole while we're working on the details.

Watch a painter work, and you'll see they spend almost as much time standing back from the canvas just looking at it as they do up close making fine brush strokes. They may paint one brush stroke at a time, but the overall painting is what matters most. The vase of flowers may be rendered perfectly by itself, but if its perspective differs from its surroundings, it's going to look wrong all the same.

Movie directors, too, frequently step back and look at an edit-in-progress to check that the footage they've shot fits into a working story. They may film movies one shot at a time, but the overall movie is what matters most. An actor may perform a perfect take, but if the emotional pitch of their performance doesn't make sense at that exact point in the story, it's going to seem wrong.

When a composer writes a song, they will often play it all the way through up to the point they're working on, to see how the piece flows. A great melody or a killer riff might seem totally wrong if it's sandwiched in between two passages that it just doesn't fit in with.

This is why I recommend to developers that they, too, routinely take a step back and look at how the detail they're currently working on fits in with the whole. Run the application, walk through user journeys that bring you to that detail and see how that feels in context. It may seem like a killer feature to you, but a killer feature in the wrong place is just a wrong feature.





November 25, 2014

Continuous Inspection II - Planning & Executing CInsp

In this second blog post about Continuous Inspection (CInsp, for short), I want to look at how we might manage the CInsp process to get the most value from it.

While some development teams are now using CInsp tools to analyse their code to get early warnings about code quality problems when they're easier and cheaper to fix, it's fair to say that this area of the development discipline has to date evaded the principles that we apply to other kinds of requirements.

Typically, as a kind of work, CInsp is ad hoc, unplanned, untracked and most teams who do it have only a very vague idea of what kind of cost it has and what kind of benefits they're reaping from it.

CInsp is rarely prioritised, leaving the field wide open to waste a lot of time and effort on activities that add little or no value.

Non-functional requirements obey the same laws as functional ones, which is why we need to attack them using the same principles and techniques.

In this post, I want to examine how we plan and execute CInsp on projects starting from scratch. (In a future post, I'll talk about applying CInsp to existing code bases with a build-up of code quality issues.)

Continuous Inspection Requirements.

There are an infinite number of properties we could look for in our code, but only some of them are worth finding; most aren't. Rather than waste our time arbitrarily searching our code for "stuff", it's important we have a clear idea of what it is we're looking for and why.

Extreme Programming, for example, has a perfectly usable mechanism for describing the things we want to inspect for, and the benefits of catching those kinds of code quality problems early.

A Code Quality Story is a non-functional user story that briefly summarises a code quality "bug" we wish to avoid and the pay-off we might expect if we can avoid introducing it into our code.
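
For illustration, the kind of thing such a card might say (the wording here is a sketch, not a template):

    Code Quality Story: Feature Envy

    Avoid methods that are more interested in another class's data than
    in their own, so that a change to one class doesn't ripple out across
    all the classes that envy it and slow us down.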



Note first of all that I've chosen here to use a blue index card. This might be in a system where we write functional user stories on green cards, report bugs on red cards, and record other outcomes - "miscellaneous tasks", like setting up the build and implementing code quality gates - on blue cards.

Why do this? Well, I've found it very useful to know roughly how much of a team's time is split between delivering working features, fixing bugs (ideally, zero time), and "shaving yaks" when the yaks being shaved are sufficiently large and not part of the work of delivering specific features.

The importance of the effort split becomes apparent as time goes by and the software evolves. A healthy project is one where the proportion of effort devoted to delivering working features remains relatively constant. What typically happens on teams who set out at an unsustainable pace is that they begin development with their time devoted mostly to the green cards, and after a few months most of their time is spent tackling red cards and making a lot less progress on new features. This is a good indicator of the rising cost of change we're seeking to avoid, so we can sustain the pace of development and deliver value for longer. This information will help us better judge how well-spent the time devoted to things like CInsp is.

So we have a placeholder for our code quality requirement in the form of a blue index card. What next?

Planning Continuous Inspection

This is where I, and a lot of teams, have gone wrong in the past. What we should never, ever do is allow the customer to choose when and whether we tackle non-functional requirements. And by "customer" I include proxy customers like business analysts and project managers. The overwhelmingly common experience of development teams is that purely technical issues, like code quality, get sidelined by non-technical stakeholders.

We must not give them the chance to drop our Feature Envy story in favour of a story about, say, sorting columns in an HTML table if we strongly believe, as professionals, that avoiding Feature Envy is important. If, as the evidence suggests, care taken over code quality helps to maintain productivity and deliver greater value over time, then asking customers to choose between work that enhances quality and work that directly delivers working features presents them with a confusing false dichotomy.

The analogy I use is to pretend we're running a restaurant using the planning practices of Extreme Programming.

Every job that needs doing gets written on a card, and placed into a backlog of outstanding work. There will be user stories like "Take table 3's order" and "Serve french fries and beer to table 7" and "Get the bill for table 12". These are stories about work that will make the restaurant money.

There will also be stories like "Wash the dishes in the sink" and "Clean out pizza oven" and "Repaint sign over door". These are about tasks that cost money, but don't directly bring in revenue by themselves.

If we allowed our restaurant's shareholders - who themselves have never worked in a restaurant, but they have a stake in it as a business - to prioritise what stories get done at the expense of others in a world where backlogs always outweigh the available time and resources, then there's a very real danger that the kitchen will rarely get cleaned, the sign above the door will fade until nobody can see it, and we'll run out of clean plates halfway through service.

The risk for teams who are driven solely by the priorities of non-technical stakeholders is that non-functional issues like code quality will only get tackled when a crisis emerges that blocks progress on functional requirements. i.e., we don't wash up until we run out of plates, or we don't clean the kitchen until the inspector shuts us down, or we don't repaint the sign until the customers have stopped coming in.

One thing we've learned about writing software is that it's cheaper and easier to tackle problems proactively and catch them earlier. Sadly, too many teams are left lurching from one urgent crisis to the next, never getting the chance to get ahead of the issues.

For this reason, I strongly advise against involving non-technical stakeholders in planning CInsp. (As well as other technical work.)

Now put yourself in the diner's shoes: you pick up the menu, and every dish lists all of the tasks restaurant staff have to do in order to deliver it. Let's say we charge £11 for fish and chips, itemised as: cleaning the grill, mopping the floors, cashing up that evening, doing the accounts, getting up early to take delivery of fresh fish, and so on.

Two questions:

1. If we hadn't told them, would the diner even care?

2. If we make it the diner's business, are we inviting them to negotiate the price of the fish and chips down by itemising what goes in to running the restaurant? ("I'll have the fish & chips, but I'm not paying for your trainee chef's college course" etc)

The world is full of work that needs doing, but nobody thinks they should pay for. In order for the world to keep turning, for fish & chips to appear on our dining tables, this work has to get done one way or another, and it has to be paid for.

The way a restaurant squares this circle is to build it into the cost of the meal and to not present diners with a choice. Their choice is simple: don't like the price, don't order the dish.

Likewise in software development, there's a universe of tasks that need doing that do not directly end with a working feature being delivered to the customer's table. We must build this work into the price ("feature X will take 3 days to deliver") and avoid presenting the customer with bewildering choices that, in reality, aren't choices at all.

So planning Continuous Inspection is something that happens within the team among technical stakeholders who understand the issues and will be doing the work. This is good advice for any non-functional requirements, be they about build automation, internal training or hiring developers. This is just "stuff that has to happen" so we can deliver working software reliably, economically and sustainably.

The key thing, to avoid teams disappearing up their own backsides with the technical stuff, is to make sure we're all absolutely clear about why we're doing it. Why are we automating the build? Why are we writing a tool that generates code? Why are we sending half the team to the Software Craftsmanship conference? (Some companies send entire teams.) And the answer should always be something of value to the customer, even if that value might not be realised for months or years.

In practice, we have planning meetings - especially in the early stages of a project - that are for technical stakeholders only. Lock the doors. Close the blinds. Don't tell the boss. (I have literally experienced running around offices looking for rooms where the developers can have these discussions in private, chased by the project manager who insists on sitting in. "Don't mind me. I won't interfere." Two seconds later...)

Such meetings give teams a chance to explicitly discuss code quality and to thrash out what they mean by "good code" and "bad code" and establish a shared set of priorities over code quality. It's far better to have these meetings - and all the inevitable disagreements - at the start, when we can take steps to prevent issues, than to have them later when we can only ask "what went wrong?"

Executing Continuous Inspection

On new software, the effort in Continuous Inspection tends to be front-loaded, and with good reason.

As I've mentioned a few times already, it tends to be far cheaper to tackle code quality "bugs" early - the earlier the better. This means that adding new code quality requirements later in development tends to catch problems when they're much more expensive to fix, so it makes sense to set the quality bar as high as we can at the start.

There's good news and there's bad news. First, the bad news: on a new project, from a standing start, it's going to take considerable effort to get automated code inspections in place. It will vary greatly, depending on the technology stack, availability of tools, experience levels in the team, and so on. But it's not going to take an afternoon. So you may be faced with having to hide a big chunk of effort from non-technical stakeholders if you attempt to start development (from their perspective, when they're actively involved) at the same time as putting CInsp in place. (Same goes for builds, CI, and a raft of other stuff that we need to get up and running early on.)

Another very strong recommendation from me: have at least one iteration before you involve the customer. Get the development engine running smoothly before you wind down the window and shout "Where to, guv'nor?" They may be less than impressed to discover that you just need to build the engine before you can set off. Delighting customers is as much about expectations as it is about actual delivery.

Going back to the restaurant analogy, consider why restaurants distinguish between "service" and "preparation". Service may start at 6pm, but the chefs have probably been there since 9am getting things ready for that. If they didn't, then those first orders might take hours to reach the table. Too many development teams attempt the equivalent of starting service as the ingredients are being delivered to the kitchen. We need to do prep, too, before we can start taking orders.

Now, for the good news: the kinds of code quality requirements we might have on one, say, JEE project are likely to be similar on another JEE project. CInsp practitioners tend to find that they can get a lot of reuse out of code quality gates they've already developed for previous projects. So, over months and years, the overall cost of getting CInsp up and running tends to decrease quite significantly. If your technology stack remains fairly stable over the years, you may well find that getting things up and running can eventually become an almost push-button process. It takes a lot of investment to get there, though.

Code Quality stories work the same way as user stories in their execution. We plan what stories we're going to tackle in the current timebox in the same way. We tackle them in pairs, if possible. We treat them purely as placeholders to have a conversation with the person asking for each story. And, most importantly, we agree...

Continuous Inspection Acceptance Tests

Going back to our Feature Envy code quality story, what does the developer who wrote that story mean by "Feature Envy"?

Here's the definition from Martin Fowler's Refactoring book:

"A classic [code] smell is a method that seems more interested in a class other than the one it is in. The most common focus of the envy is the data."

It's all a bit handwavy, as is usually the case with software design wisdom. A human being using their intelligence, experience and judgement might be able to read this, look at some code and point to things that seem to them to fit the description.

Programming a computer to do it, on the other hand...

This is where we can inhabit our customer's world for a little while. When we ask our customer to precisely describe a business rule, we're putting them on the spot every bit as much as a computable definition of Feature Envy might put me and you on the spot. In cold, hard, computable terms: we don't quite know what we mean.

When the business problem we're solving is about, say, mortgages or video rentals or friend requests, we ask the customer for examples that illustrate the rule. Using examples, we can establish a shared vocabulary - a language for expressing the rule - explore the boundaries, and pin down a precise computable understanding of it (if there is one.)

We shouldn't be at all surprised that this technique also works very well for rules about our code. Ask the owner of a code quality story to track down some classic examples of code that breaks the rule, as well as code that doesn't (even if it looks at first glance like it might).
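
For instance, for Feature Envy those examples might look something like this (the snippets are my own illustrations, not anybody's production code):

    // Breaks the rule: Statement.totalFor() feeds almost entirely on
    // Rental's data - a textbook case of Feature Envy.
    class Rental {
        private int daysRented;
        private double dailyRate;

        int getDaysRented() { return daysRented; }
        double getDailyRate() { return dailyRate; }
    }

    class Statement {
        double totalFor(Rental rental) {
            return rental.getDaysRented() * rental.getDailyRate();
        }
    }

    // Passes the rule: the calculation lives with the data it uses, so a
    // Statement would simply ask the Rental for its total.
    class RentalThatKnowsItsTotal {
        private int daysRented;
        private double dailyRate;

        double total() { return daysRented * dailyRate; }
    }

The second kind of example matters just as much as the first: a method that touches a little of another class's data but mostly works with its own shouldn't trip the alarm.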

This is where the real skill in CInsp comes into play. To win at Continuous Inspection, development teams need to be skilled at reasoning about code. This is not a bad skill for a developer to have generally. It helps us communicate better, it helps us visualise better, it makes us better at design, at refactoring, at writing tools that work with code. Code is our domain model - the business objects of programming.

Using our code reasoning skills, applied to examples that will form the basis of acceptance tests, we can drive out the design of the simplest tool possible that will sound the alarm when the "bad" examples are considered, while silently allowing the "good" examples to pass through the quality gate.

As with functional user stories, we're not done until we have a working automated quality gate that satisfies our acceptance tests and can be applied to new code straight away.
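
Those examples then become the acceptance tests for the quality gate itself. As a sketch - FeatureEnvyGate and its analyse() method are hypothetical stand-ins for whatever tool we end up building:

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class FeatureEnvyGateTest {

        private final FeatureEnvyGate gate = new FeatureEnvyGate();

        @Test
        public void soundsTheAlarmForTheClassicBadExample() {
            String envious =
                "class Statement { " +
                "  double totalFor(Rental rental) { " +
                "    return rental.getDaysRented() * rental.getDailyRate(); " +
                "  } " +
                "}";
            assertTrue(gate.analyse(envious).hasFeatureEnvy());
        }

        @Test
        public void staysSilentForTheGoodExample() {
            String wellPlaced =
                "class Rental { " +
                "  private int daysRented; " +
                "  private double dailyRate; " +
                "  double total() { return daysRented * dailyRate; } " +
                "}";
            assertFalse(gate.analyse(wellPlaced).hasFeatureEnvy());
        }
    }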

In the next blog post, we'll be rolling up our sleeves with an example Continuous Inspection quality gate, implementing it using a variety of tools to demonstrate that there's often more than one way to skin the code quality cat.








November 22, 2014

Continuous Inspection I - Why Do We Need It?

This is the first of a series of posts about Continuous Inspection. My goal here is to give you something to think about, rather than to present a complete hands-on guide. The range (and maturity) of tools and techniques we can apply to Continuous Inspection (I'll call it CInsp from now on to save a few keystrokes) is such that I could write 1,000 blog posts and still not cover it all. So here I'll just focus on general CInsp principles and illustrate with cherry-picked examples.

In this first post, I want to summarise what I mean by "Continuous Inspection" and argue that there's a real need for it on most software development teams.

Continuous Inspection is the practice of - and stop me if I'm getting too technical here - continuously inspecting your code to detect non-functional issues in the software.

CInsp is just another kind of Continuous Testing, which is a cornerstone of Continuous Delivery. To have our software always in a shippable state, we must take steps to assure ourselves that the software is always working.

If we follow the thinking behind continuous testing (and re-testing) of our software to check that it still works, the benefit is that we never stray more than a few minutes from having something we could ship if the business wanted us to.

To date, the only practical way we've found to achieve Continuous Testing is to automate those tests as much as possible, so they can be run quickly and economically. If it takes you 2 weeks to re-test your software, then after each change you make to the code, you are at least 2 weeks away from knowing if the software still works. Manual testing makes Continuous Delivery impractical.

In recent years, automated testing - and especially automated unit testing - has grown in popularity, and the effects can be seen in teams delivering more reliably and more sustainably as a result.

But only to a point.

What I've observed across hundreds of teams over the last decade or more is that, even with high levels of automated testing, the pace of delivery still slows to unacceptable levels.

In order to sustain the pace of change, the code itself needs to remain open to change. Being able to quickly regression test our software is a boon in this respect, no doubt. But it doesn't address the whole picture.

There are other things that can hamper change in our code. If the code's complicated, for example, it will be more likely to break when we change it. If there's duplication in our code - if we've been a bit trigger-happy with Copy+Paste - then that can multiply the cost of making a change. If we've not paid attention to the dependencies in our code, small changes can cause big ripples through the code and amplify the cost.

As we make progress in delivering functionality we tend also to make a mess inside the software, and that mess can get in our way and impede future progress. To maintain the pace of innovation over months and years and get the most out of our investment over the lifetime of a software product, we need to keep our code clean.

Experienced developers view design issues that impede progress in their code as bugs, and they can be every bit as serious as bugs in the functionality of the software.

And, just like functional bugs, these code quality bugs (often referred to as "code smells", because they're indicative of your code "rotting" as it grows) have a tendency to get harder and more expensive to fix the longer we leave them.

Duplication has a tendency to grow, as does complexity. We build more dependencies on top of our dependencies. Switch statements get longer. Long parameter lists get longer. Big classes get bigger. And so on.

Here's what I've discovered from examining hundreds of code bases over the years: code smells that get committed into the code are very likely to remain for the lifetime of the software.

There seems to be a line beyond which our mistakes are likely to live forever (and impede us forever). From observation, I've found that we cross this line when we move on.

In the Test-driven Development cycle, for example, I've seen that when developers move on to the next failing test, any code smells they leave behind will likely not get addressed later. In programming, "later" is a distant and alien land where all our little TO-DO's never get done. "Later" might as well be "Narnia".

Even more so, when developers commit their code to a shared repository, at that point code smells "petrify", and remain forever trapped in the amber of all the other code that surrounds them. 90% of code smells introduced in committed code never get fixed.

This is partly because most teams have no processes for identifying and addressing code quality problems. But even the ones who do tend to find that their approach, while better than nothing, is not up to the task of keeping the code as clean as it needs to be to maintain the pace of change the customer needs.

Why? Well, let's look at the kinds of techniques teams these days use:

1. Code Reviews

There's a joke that goes something like this: "Ask a developer what's wrong with a line of code, and she'll give you a list. Ask her what's wrong with 500 lines of code, and she'll tell you it's fine."

Code reviews have a tendency to store up large amounts of code - potentially containing large numbers of issues - for consideration. The problem here is seeing the wood for the trees. A lot of issues get overlooked in the confusion.

But even if code reviews identified all of the code quality issues, the economics of fixing those issues is working against us. Fixing bugs - functional or non-functional - tends to get exponentially more expensive the longer we leave them in the code, and for precisely the same reasons (longer feedback cycles).

In practice, while rigorous code reviews would be a step forward for many teams who don't do them at all, they are still very much shutting the stable door after the horse has bolted.

2. Pair Programming

In theory, pair programming is a continuous code review where the "navigator" is being especially vigilant to code quality issues and points them out as soon as they spot them. In some cases, this is pretty much how it works. But, sad to say, in the majority of pairs, code quality issues are not high on anyone's agenda.

This is for two good reasons: firstly, most developers are not all that aware of code smells. They don't figure high in our list of priorities. Code quality isn't sexy, and doesn't get you hired at IronicBeards.com.

Secondly, with the best will in the world, people have limitations. When Codemanship does pairing to assess a developer's skill level in certain practices, the level of focus required on what the other person's doing is really quite intense. You don't take your eye off the screen in case you miss something. But there are dozens of code smells we need to be vigilant for, and even with all my experience and know-how, I can't catch them all. My mind will have to skip between lots of competing concerns, and when my remaining brain cells are tied up trying to remember how to do something with Swing, I'm likely to take my eye off the code quality ball. It's also very difficult to maintain that level of focus hour after hour, day-in and day-out. It hurts my brain.

Pair programming, as an approach to guarding against code smells, is good when it's done well. But it's not that good that we can be assured code written in this way will be maintainable enough.

3. Design Authorities

By far the least effective route to ensuring code quality is to make it someone else's job.

Hiring architects or "technical design authorities" suffers from all the shortcomings of code reviews and pair programming, and then adds a big bunch of new shortcomings.

Putting aside the fact that almost every architect or TDA I've ever met has been mostly focused on "the big picture", and that I've seen 1,000-line switch statements waved through the quality gate by people obsessing over whether classes implement certain interfaces they've prescribed, turning design authorities into design quality testers never seems to end well. Who wants to spend their day scouring other people's code for examples of Feature Envy?

I'll say no more, except to summarise by observing that the code I've seen produced by teams with dedicated design authorities counts amongst the worst for code quality.

4. Coding Standards

In theory, a team's coding standards are a codification of what we all agree we mean by "good code".

Typically, these are written down in documents that nobody ever reads, and suffer from the same practical drawbacks as architecture documents and company mission statements. They're aspirational affirmations at best. But, in practice, everybody just ignores them.

Even on those more disciplined teams that try to adhere to coding standards, they still have major drawbacks, all relating back to things we've already discussed.

Firstly, coding standards are a list of "stuff" we need to be thinking about along with all the other "stuff" we have to think about. So they tend to take a lower priority and often get overlooked.

Secondly, having studied a lot of coding standards documents (and what joy they bring!), I can say they have a tendency to be both arbitrary and by no means universally agreed upon. Often they've been written by some kind of design or development authority, usually with little or no input from the team they're being imposed on. It's rare for issues that affect maintainability to be addressed in a coding standards document. Programmers are a funny bunch: we care deeply about some weird stuff while Elephants In The Room creep in without being questioned and sit on us. The naming conventions these documents obsess over have little relation to how easy the code will be to read and understand. And it's rare to see duplication, dependencies, complexity and so on even being hinted at. As long as all your instances have names beginning with obj and all your private member variables begin with "m_", the gods of code goodness will be appeased.

And then there's the question of how and when we enforce coding standards. And we're back to the hard physics of software development - time, money and cost. Knowing what we should be looking for is only the tip of the code quality iceberg.

What's needed is the ability to do code reviews so frequently, and do them in a way that's so effective, that we never stray more than a few minutes from clean code. For this, we need code reviewers who miss very little, who are constantly looking at the code, and who never get tired or distracted.

For that, thankfully, we have computers.

Program code is like any other domain model; we can write programs to reason about the design of other programs, expressed in terms of the structure of code itself.

Code quality rules are just like any other computable business rules. If the rule is that a block of code in one class should not make copious references to features in another class ("Feature Envy"), it's possible to write an automated test that reads code and looks at those references to determine if that block of code is in the right place.

Let's illustrate with a technology example. Imagine we're working in Java in, say, Eclipse. We could write code for a plug-in that, whenever we make a change to the code document we're working on, reads the code's Abstract Syntax Tree (basically, a code DOM) and does a calculation for the ratio of internal and external dependencies in that Java method we just changed. If the ratio is too low, it could flag it up as a warning while we're writing the code.
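
To make that a little more concrete, here's a rough sketch of the kind of visitor such a plug-in might run over the method we just edited. It assumes the code has already been parsed with Eclipse's JDT ASTParser with bindings resolved, and the threshold is purely illustrative:

    // Sketch only: counts references a method makes to its own class's
    // features versus other classes' features, and flags possible Feature
    // Envy when the balance tips the wrong way.
    import org.eclipse.jdt.core.dom.ASTVisitor;
    import org.eclipse.jdt.core.dom.IBinding;
    import org.eclipse.jdt.core.dom.ITypeBinding;
    import org.eclipse.jdt.core.dom.IVariableBinding;
    import org.eclipse.jdt.core.dom.MethodInvocation;
    import org.eclipse.jdt.core.dom.SimpleName;

    public class FeatureEnvyVisitor extends ASTVisitor {

        private final ITypeBinding homeType; // the class that owns the method under inspection
        private int internalReferences = 0;
        private int externalReferences = 0;

        public FeatureEnvyVisitor(ITypeBinding homeType) {
            this.homeType = homeType;
        }

        @Override
        public boolean visit(MethodInvocation node) {
            if (node.resolveMethodBinding() != null) {
                count(node.resolveMethodBinding().getDeclaringClass());
            }
            return true;
        }

        @Override
        public boolean visit(SimpleName node) {
            // Count references to fields, whether our own or another class's
            IBinding binding = node.resolveBinding();
            if (binding instanceof IVariableBinding
                    && ((IVariableBinding) binding).isField()) {
                count(((IVariableBinding) binding).getDeclaringClass());
            }
            return true;
        }

        private void count(ITypeBinding declaringType) {
            if (declaringType == null) {
                return;
            }
            if (declaringType.isEqualTo(homeType)) {
                internalReferences++;
            } else {
                externalReferences++;
            }
        }

        // Illustrative threshold: warn when the method talks to other
        // classes more than it talks to its own.
        public boolean smellsOfFeatureEnvy() {
            return externalReferences > internalReferences;
        }
    }

Wiring it in would then be a matter of accepting the visitor on the changed method declaration each time the editor's contents change, and raising a warning marker if smellsOfFeatureEnvy() returns true.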

The computational power of computers is such today that this sort of continuous background code reviewing is practically possible, and there have already been some early attempts to create just such plug-ins.

In the article I wrote a few years ago for Visual Studio Journal, Ever-decreasing Cycles, I speculate about the impact such short code quality feedback loops might have on the economics of development.

It's my belief that, just as continuous automated unit testing has had a profound effect on the "bottom line" of software development for many teams and businesses, so too would Continuous Inspection.

In the next blog post, I'll talk about the CInsp process and look at practical ways of managing CInsp requirements, test automation and how we action the code quality problems it can throw up.




October 16, 2014

Dear Aunty Jason, I'm An Architect Who Wants To Be Relevant Again...

"Dear Aunty Jason,

I am a software architect. You know, like in the '90s.

For years, my life was great. I was the CTO's champion in the boardroom, fighting great battles against the fire-breathing dragons of Ad Hoc Design and the evil wizards of Commercial Off-The-Shelf Software. The money was great, and all the ordinary people on the development teams would bow down to me and call me 'Sir'.

Then, about a decade ago, everything changed.

A man called Sir Kent of Beck wrote a book telling the ordinary people that dragons and wizards don't actually exist, and that the songs I'd been singing of my bravery in battle had no basis in reality. 'You', Sir Kent told them, 'are the ones who fight the real battles.'

And so it was that the ordinary people started coming up with their own songs of bravery, and doing their own software designs. If a damsel in distress needed saving, they would just save her, and not even wait for me - their brave knight - to at the very least sit astride my white horse and look handsome while they did it.

I felt dejected and rejected; they didn't need me any more. Ever since then, I have wandered the land, sobbing quietly to myself in my increasingly rusty armour, looking for a kingdom that needs a brave knight who can sing songs about fighting dragons and who looks good on a horse.

Do you know of such a place?

Yours sincerely,

Sir Rational of Rose."

I can sympathise with your story, Sir Rational. I, too, was once a celebrated knight of the UML Realm, about whom many songs were sung of battles to the death with the Devils of Enterprise Resource Planning. And I, too, found myself marginalised and ignored when the ordinary folk started praying at the altar of Agile.

And, I'm sorry to say, there are lots of kingdoms these days where little thought is given to design or to architecture. You can usually spot them from a distance by their higgledy-piggledy, ramshackle rooftops, and the fact that they dug the moat inside the city walls.

But, take heart; there are still some kingdoms where design matters and where architecture is still a "thing". Be warned, though, that many of them, while they may look impressive from the outside, are in fact uninhabited. Such cities can often be distinguished by gleaming spires and high white walls, beautiful piazzas and glistening fountains, and the fact that nobody wants to live there because they built it in the wrong place.

You don't want to go to either of those kinds of kingdom.

Though very rare, there are a handful of kingdoms where design matters and it matters that people want to live in that design. And in these places, there is a role for you. You can be relevant once again. Once again, people will sing songs of your bravery. But you won't be fighting dragons or wizards in the boardroom. You'll be a different kind of knight.

You'll be fighting real battles, embedded in the ranks of real soldiers. Indeed, you'll be a soldier yourself. And you'll be singing songs about them.




September 17, 2014

The 4 C's of Continuous Delivery

Continuous Delivery has become a fashionable idea in software development, and it's not hard to see why.

When the software we write is always in a fit state to be released or deployed, we give our customers a level of control that is very attractive.

The decision when to deploy becomes entirely a business decision; they can do it as often as they like. They can deploy as soon as a new feature or a change to an existing feature is ready, instead of having to wait weeks or even months for a Big Bang release. They can deploy one change at a time, seeing what effect that one change has and easily rolling it back if it's not successful without losing 1,001 other changes in the same release.

Small, frequent releases can have a profound effect on a business' ability to learn what works and what doesn't from real end users using the software in the real world. It's for this reason that many, including myself, see Continuous Delivery as a primary goal of software development teams - something we should all be striving for.

Regrettably, though, many software organisations don't appreciate the implications of Continuous Delivery on the technical discipline teams need to apply. It's not simply a matter of decreeing from above "from now on, we shall deliver continuously". I've watched many attempts to make an overnight transition fall flat on their faces. Continuous Delivery is something teams need to work up to, over months and years, and keep working at even after they've achieved it. You can always be better at Continuous Delivery, and for the majority of teams, it would pay dividends to improve their technical discipline.

So let's enumerate these disciplines; what are the 4 C's of Continuous Delivery?

1. Continuous Testing

Before we can release our software, we need confidence that it works. If our aim is to make the software available for release at a moment's notice, then we need to be continuously reassuring ourselves - through testing - that it still works after we've made even a small change. The secret sauce here is being able to test and re-test the software to a sufficiently high level of assurance quickly and cheaply, and for that we know of only one technical practice that seems to work: automating our tests. It's for this reason that a practice like Test-driven Development, which leaves behind a suite of fast-running automated tests (if you're doing TDD well), is a cornerstone of the advice I give for transitioning to Continuous Delivery.

2. Continuous Integration

As well as flagging up problems when we integrate our changes into a wider system, CI is fundamental to Continuous Delivery: if it's not in source control, it's going to be difficult to include it in a release. CI is the metabolism of a software development team. Again, automation is our friend here. Teams that have to manually trigger compilation of code, or do manual testing of the built software, will not be able to integrate very often. (Or, more likely, they will integrate, but the code in their VCS will as likely as not be broken at any given point in time.)

3. Continuous Inspection

With the best will in the world, if our code is hard to change, changing it will be hard. Code tends to deteriorate over time: it gets more complicated, it fills up with duplication, it turns to spaghetti, and it gets harder and harder to understand. We need to be constantly vigilant against the kinds of code smells that impede our progress. Pair Programming can help in this respect, but we find it insufficient on its own to achieve the quality of code that's often needed. We need help in guarding against code smells and the ravages of entropy. Here, too, automation can help. More advanced teams use tools that analyse the code and detect and report code smells, either as part of the build or as part of the pre-check-in process. The most rigorous teams will fail a build when a code smell is detected (there's a sketch of such a gate below). Experience teaches us that when we let code quality problems through the gate, they tend never to get addressed. Implicit in ContInsp is Continuous Refactoring. Refactoring is a skill that many - let's be honest, most - developers are still lacking in, sadly.
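
To show the gate mechanism itself - and nothing more - here's a deliberately crude sketch. The "smell" it detects is just an over-long source file, standing in for whatever a real static analysis tool would report; the point is that the build script runs it and treats a non-zero exit code as a broken build. The class name and the 400-line threshold are made up for illustration:

    // A build-failing quality gate, reduced to its bare mechanism.
    // It walks a source tree and exits with a non-zero code if any .java
    // file exceeds a (crude, illustrative) length threshold.
    import java.io.IOException;
    import java.nio.file.*;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class QualityGate {

        private static final int MAX_LINES_PER_FILE = 400; // illustrative threshold

        public static void main(String[] args) throws IOException {
            Path sourceRoot = Paths.get(args.length > 0 ? args[0] : "src");

            List<Path> smelly;
            try (Stream<Path> files = Files.walk(sourceRoot)) {
                smelly = files.filter(p -> p.toString().endsWith(".java"))
                              .filter(QualityGate::isTooLong)
                              .collect(Collectors.toList());
            }

            smelly.forEach(f -> System.out.println("Code smell: " + f + " is too long"));

            // The non-zero exit code is what actually fails the build.
            System.exit(smelly.isEmpty() ? 0 : 1);
        }

        private static boolean isTooLong(Path file) {
            try (Stream<String> lines = Files.lines(file)) {
                return lines.count() > MAX_LINES_PER_FILE;
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    }

Swap the file-length check for a real smell detector and the principle stays the same: the gate runs on every build, and nothing smelly gets through unnoticed.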

Continuous Inspection doesn't only apply to the code; smart teams are very frequently showing the software to customers and getting feedback, for example. You may think that the software's ready to be released because it passes some automated tests, but if the customer hasn't actually seen it yet, there's a significant risk that we end up releasing something we've fundamentally misunderstood. Only the customer can tell us when we're really "done". That is a kind of inspection, too. Essentially, any quality of the software that we care about needs to be continuously inspected.

4. Continuous Improvement

No matter how good we are at the first 3 C's, there's almost always value in being better. Developers will ask me "How will we know if we're over-doing TDD, or refactoring?", for example. The answer's simple: hell will have frozen over. I've never seen code that was too good, never seen tests that gave too much assurance. In theory, of course, there is a danger of investing more time and effort into these things than the pay-offs warrant, but I've never seen it in all my years as a professional developer. Sure, I've seen developers do these things badly. And I've seen teams waste a lot of time because of that. But that's not the same thing as over-doing it. In those cases, Continuous Improvement - continually working on getting better - helped.

DevOps in particular is one area where teams tend to be weak. Automating builds, setting up CI servers, configuring machines and dealing with issues like networking and security is low down on the average programmer's list of must-have skills. We even have a derogatory term for it: "shaving yaks". And yet, DevOps is pretty fundamental to Continuous Delivery. The smart teams work on getting better at that stuff. Some get so good at it they can offer it to other businesses as a service. This, folks, is essentially what cloud hosting is - outsourced DevOps.

Sadly, software organisations who make room for improvement are in a small minority. Many will argue "We don't have the time to work on improving". I would argue that's why they don't have the time.



September 16, 2014

Why We Iterate

So, in case you were wondering, here's my rigorous and highly scientific process for buying guitars...

It starts with a general idea of what I think I need. For example, for a couple of years now I've been thinking I need an 8-string electric guitar, to get those low notes for the metalz.

I then shop around. I read the magazines. I listen to records and find out what guitars those players used. I visit the manufacturers' websites and read the specifications of the models that might fit. I scout the discussion forums for honest, uncensored feedback from real users. And gradually I build up a precise picture of exactly what I think I need, down to the wood, the pickups, the hardware, the finish etc.

And then I go to the guitar shop and buy a different guitar.

Why? Because I played it, and it was good.

Life's full of expectations: what would it be like to play one of Steve Vai's signature guitars? What would it be like to be a famous movie star? What would it be like to be married to Uma Thurman?

In the end, though, there's only one sure-fire way to know what it would be like. It's the most important test of all. Sure, an experience may tick all of the boxes on paper, but reality is messy and complicated, and very few experiences can be completely summed up by ticks in boxes.

And so it goes with software. We may work with the customer to build a detailed and precise requirements specification, setting out explicitly what boxes the software will need to tick for them. But there's no substitute for trying the software for themselves. From that experience, they will learn more than weeks or months or years of designing boxes to tick.

We're on a hiding to nothing sitting in rooms trying to force our customers to tell us what they really want. And the more precise and detailed the spec, the more suspicious I am of it. The bottom line is that they just don't know. But if you ask them, they will tell you. Something. Anything.

Now let me tell you how guitar custom shops - the good ones - operate.

They have a conversation with you about what guitar you want them to create for you. And then they build a prototype of what you asked for. And then - and this is where most of the design magic happens - they get you to play it, and they watch and they listen and they take notes, and they learn a little about what kind of guitar you really want.

Then they iterate the design, and get you to try that. And then rinse and repeat until your money runs out.

With every iteration, the guitar's design gets a little bit less wrong for you, until it's almost right - as right as they can get it with the time and money available.

Custom guitars can deviate quite significantly from what the customer initially asked for. But that is not a bad thing, because the goal here is to make them a guitar they really need; one that really suits them and their playing style.

In fact, I can think of all sorts of areas of life where what I originally asked for is just a jumping-off point for finding out what I really needed.

This is why I believe that testing - and then iterating - is most importantly a requirements discipline. It needs to be as much, if not more, about figuring out what the customer really needs as it is about finding out if we delivered what they asked for.

The alternative is that we force our customers to live with their first answers, refusing to allow them - and us - to learn what really works for them.

And anyone who tries to tell you that it's possible to get it right - or even almost right - first time, is a ninny. And you can tell them I said that.