January 3, 2007

Learn TDD with Codemanship

Do I Practice What I Preach?

One universal truth about software methodologists is that we don't practice what we preach. We bang on about best practices to other people creating code, but when it's us who's writing it, all that good stuff tends to go out the window. You'd be amazed at how many UML gurus, for example, just do a UI mock-up and then start coding when they're writing commercial software with their own fair hands.

And most of you who've worked on real projects with consultants like me know the score. So I shouldn't be surprised when people email me and ask: "Jason, do you actually do any of this stuff yourself?" And I could make all sorts of claims about how great I am and how I do all of it all of the time. But I won't.

I will claim this, though: I do most of it, most of the time. And when I don't, it's often because the practice in question is a collaborative one, and I lack a willing or able dance partner.

There are some best practices that are so fundamental that I see them as non-negotiable:

* Evolutionary design and incremental delivery - I don't think anyone's under any illusions about how I feel about "waterfall" development. I simply will not, under any circumstances, do anything other than iterative and incremental development.

* Goal-driven design - this comes in many forms: TDD, FDD, scenario-driven development, use case-driven development, and so on. Basically, we design our software to satisfy testable usage goals. In layman's terms, we build kitchens you can make omelettes in.

* Adaptive planning - in even the simplest projects, things are complicated enough that we should expect our plans to change. Detailed long-term plans are a big no-no in my book. I am a sworn enemy of MS Project.

* Feedback from testing - should be the primary measure of progress. Software isn't delivered until it's actually physically delivered, and the only way we can know it's been delivered is to test it and make sure that it actually works. You may have written 99% of the code, but if it only passes 1% of the tests then you are only 1% done.

* Continual design "pruning" - you may know this as refactoring, but basically as we inject new functionality into our code, we also inject a certain amount of disorder ("code smells"). This is inevitable. Therefore as the project progresses the shit - pardon my French - builds up and, if we let it, eventually hits the fan. Maintaining design quality - "code liquidity", or whatever we want to call it - is a critical, and consistently overlooked, necessity in software development.

Now, I would love to say that I always have continual customer involvement on every project I work on, but that would be a bare-faced lie. It is beyond my control. What I can do, and always do, is continually remind the customer of their obligations and the risks they're taking on if they don't play their part effectively.

So if you happen to end up working on a project with me, you can be sure of 6 key things:

1. We won't be doing waterfall development
2. We won't be building stuff nobody's going to need
3. We won't spend half our time updating the project plan
4. We won't discover that we still have 6 months of work to do the day before we are supposed to go live
5. Our productivity won't drop to zero as the code "sets" like concrete
6. The customer will be sick and tired of me pestering them to come and look at the software they paid us to build for them

When clients bring me in, I suspect this is what they want to see happen.
Posted on January 3, 2007