August 6, 2007


Feedback-driven Development

If you asked me to invent a methodology for creating software, there's no doubt in my mind that the central pillar would be feedback cycles.

No great innovation there, you would correctly surmise. But I think where I might add a little of my own "secret sauce", as at least one person I know might put it, is in two places:

1. I would include self-similar feedback cycles at all levels of the software development lifecycle. In that sense, my entire method would be constructed out of the same basic pattern, repeated at different levels with different inputs and outputs, but essentially the same pattern.

2. I would base this pattern on a strong statistical argument. I maintain that design is essentially a search, and - as computer scientists tell us - some search algorithms are better than others at solving certain kinds of problems.
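To make the search analogy a little more concrete, here's a toy sketch in Java - every name and number in it is invented purely for illustration. A hill climb that only keeps changes which measurably improve its fitness homes in on its target quickly, precisely because each measurement feeds back into the next attempt:

    public class FeedbackSearch {
        // Toy "design problem": home in on a hidden target value.
        static double fitness(double candidate, double target) {
            return -Math.abs(candidate - target); // higher is better
        }

        public static void main(String[] args) {
            double target = 42.0, candidate = 0.0, step = 8.0;
            // Each cycle: act (try a step each way), measure, feed the result back in.
            while (step > 0.01) {
                double up = candidate + step, down = candidate - step;
                if (fitness(up, target) > fitness(candidate, target)) {
                    candidate = up;   // feedback: that step helped, so keep it
                } else if (fitness(down, target) > fitness(candidate, target)) {
                    candidate = down;
                } else {
                    step /= 2;        // feedback: no improvement, so refine the step
                }
            }
            System.out.printf("Converged on %.2f (target %.2f)%n", candidate, target);
        }
    }

A blind search would have to sample the whole space; the feedback-guided one needs only a handful of measurements. That, in miniature, is the statistical argument.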

I would less-than-humbly suggest that these feedback cycles - at whatever level - have a goal or specification to aim for, then a period of direct action to achieve that goal, followed as quickly as possible by objective feedback that tells us whether our actions brought us closer to (or further from) our goal. That feedback is then fed into the next cycle.

Again, nothing earth-shattering about this kind of process. The secret sauce lies first in identifying this pattern throughout the software development lifecycle, and then in optimising it at every level to deliver more value sooner.

At one extreme, for example, we might consider the project or product strategy feedback cycle - where we specify a business goal for the software we're building, and monitor its business performance to get feedback on the extent to which what we delivered meets that goal. Maybe we wanted to improve patient care in a hospital by introducing a new software system that makes patients' detailed medical histories available to doctors on a handheld device. Key indicators could tell us whether patient care is actually improving after the software has been released (perhaps in a trial, perhaps for wider use). These strategic feedback cycles - well known to savvy product managers - can be months or even years long.

At the other extreme, program editors these days are very good at giving developers immediate feedback about the validity of the code they're writing. Misspell a class name, for example, and in many editors a wiggly red line will immediately appear underneath it to let you know you've made a boo-boo.

And in between we have a whole bunch of cycles at every scale - like the test-driven development cycle, the refactoring cycle, the continuous integration cycle, or the cycle that takes us from agreeing a specification (e.g., a usage scenario) to passing an acceptance test for that scenario. And, of course, we have those explicitly acknowledged cycles called iterations and releases.

But they are all cycles - bug-fixing has a feedback cycle, as does usability engineering, as does performance optimisation. They all follow the same self-similar pattern: specify goals->plan->act->measure/test->feedback.
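One way to picture that shared shape is as a tiny generic sketch - Java here purely for illustration, with every name invented - where the stages of the pattern become the moving parts:

    // A sketch of the self-similar cycle: specify -> plan -> act -> measure/test -> feed back.
    interface FeedbackCycle<Goal, Outcome> {
        Goal specify();                             // an unambiguous, testable goal
        Outcome act(Goal goal);                     // plan and carry out the work
        double measure(Goal goal, Outcome outcome); // objective test: distance from the goal
    }

    class CycleDriver {
        static <G, O> void run(FeedbackCycle<G, O> cycle, double closeEnough) {
            G goal = cycle.specify();
            double distance = Double.MAX_VALUE;
            while (distance > closeEnough) {             // keep cycling until the goal is met
                O outcome = cycle.act(goal);
                distance = cycle.measure(goal, outcome); // feedback drives the next pass
            }
        }
    }

Bug-fixing, usability engineering and performance optimisation would each supply their own Goal and Outcome types; the driving loop stays the same.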

The trick, I suspect, is in mapping each cycle on to each specific problem domain. For example, in usability engineering, what are our goals? How are they meaningfully expressed? What kind of plans can we create? How do we implement those plans? How do we measure our progress towards the goals? How can we act on the feedback we get to move us closer to our goals?

This introduces the need for three formal components in every kind of feedback cycle:

* We need some way to unambiguously specify our goals
* We need some way to meaningfully test the effect of our actions with regard to our goals
* We need some way to make the necessary adaptations based on the feedback we get from our tests

In simpler terms, we need:

1. (Testable) Specifications
2. Tests (including metrics)
3. Refactorings

We need these in every type of feedback cycle. The implementation of each will depend on the problem domain. In many domains, the specs and tests might technically be the same thing (in test-driven development, for example), but they still occupy two distinct roles in the cycle: specifying goals and measuring/testing. In TDD, much of the planning goes on in our heads, as we reason about what code to write to pass the test.
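To illustrate that dual role, here's a minimal, hypothetical JUnit example - the class under test and the numbers are invented for illustration, and inlined so the example stands alone. The test is simultaneously the spec (it states the goal unambiguously) and the measurement (running it tells us whether we've met that goal yet):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class InterestCalculatorTest {

        // A hypothetical class under test, inlined so the example is self-contained.
        static class InterestCalculator {
            double interest(double principal, double rate, int years) {
                return principal * rate * years; // simple interest
            }
        }

        // The spec: one year's simple interest on 1000.00 at 5% is 50.00.
        // The test: running it measures whether the code meets that spec yet.
        @Test
        public void calculatesOneYearsSimpleInterest() {
            assertEquals(50.00, new InterestCalculator().interest(1000.00, 0.05, 1), 0.001);
        }
    }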

In usability engineering, our goals might be expressed as usability targets - like the time it takes to complete a specific task. Our plans might be described using storyboards or screenflows. We might implement those plans on a platform like JSF using test-driven development, and we might do usability testing to get the feedback we need to help us iterate towards our goals. UI refactoring is something that occasionally gets talked about, but nobody I'm aware of has seriously explored it yet. In feedback-driven development, it's arguably necessary, though - which might explain why UI designs tend to be less than optimal even on Agile projects.
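Even a usability target could, in principle, be written down in the same testable form. Here's a hypothetical sketch - the session times are made-up stand-ins for measurements you'd gather in real usability tests:

    import static org.junit.Assert.assertTrue;
    import java.util.Arrays;
    import org.junit.Test;

    public class CheckoutUsabilityTest {
        // Stand-in data: in reality these would come from observed usability sessions.
        private final double[] secondsToCompleteCheckout = {74.2, 88.9, 61.5, 79.0, 83.3};

        static double median(double[] xs) {
            double[] sorted = xs.clone();
            Arrays.sort(sorted);
            int n = sorted.length;
            return n % 2 == 1 ? sorted[n / 2] : (sorted[n / 2 - 1] + sorted[n / 2]) / 2;
        }

        // The usability goal as a testable spec: median time-on-task under 90 seconds.
        @Test
        public void medianCheckoutTimeMeetsTarget() {
            assertTrue(median(secondsToCompleteCheckout) < 90.0);
        }
    }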