February 18, 2014

Learn TDD with Codemanship

TDD, Architecture & Non-Functional Goals - All Of These Things Belong Together

One of the most enduring myths about Test-Driven Development is that it is antithetical to good software or system architecture. Teams who do TDD, the story goes, don't plan design up-front, don't look at the bigger picture, and don't give enough consideration to key non-functional design quality attributes.

Like many such myths, TDD gets this reputation from teams doing it poorly. And, yes, I know "you must not have been doing it right" is the go-to excuse of many a snake-oil salesman, so let me qualify my highfalutin claims.

First of all, there's this perception that when we do TDD, we pick up a user story, agree a few acceptance tests and then dive straight into the code without thinking about, visualising or planning design at a higher level.

This is not true.

The first thing I recommend teams do when they pick up user stories - especially in the earlier stages of development on a new product or system - is to go to the whiteboards. Collaborative design is the key to software development. It's all about communicating - this is one of my core principles of writing software.

Scott Ambler's book, Agile Modeling (now roughly 300 years old, or something like that), painted a very clear picture of where collaborative analysis and design could fit into, say, Extreme Programming.

The fact is, TDD is a design discipline that can be effectively applied at every level of abstraction.

There are teams using test cases to plan their high-level architecture - which I highly recommend, as designs are best when they answer real needs, which can be clearly expressed using examples/test cases. Just because they're not writing code yet doesn't mean they're not being test-driven.

And, as most of us will probably have seen by now, test-driving the implementation of our high-level designs is a great way to drive out the details and "build it right".

At the other end of the detail spectrum, there are teams out there test-driving the designs of FPGAs, ASICs and other systems on silicon.

If it can be tested, it can be test-driven. And that applies to a heck of a lot of things, including package architectures and systems of systems.

As to the non-functional considerations of design and architecture, I've learned from 15 years of TDD-ing a wide variety of software that these are best expressed as tests.

I'm no great believer in the Dark Arts of software architecture. Like a sort of object oriented Gandalf, I've worn that cloak and carried that magical staff more times than I care to remember, and I know from acres of experience that the best design authorities are empiricists (i.e., testers).

How many times have you heard an architect say "this design is wrong" when what they really mean is "that's not how I would have designed it"? I try to steer clear of that kind of highly subjective voodoo.

Better, I find, to express design goals as unambiguous tests that the software needs to pass. It could be a very simple design rule that says "methods shouldn't make more than one decision", or something more demanding like "the software should occupy a memory footprint of no more than 100KB per user session".
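To give a flavour of what an executable design rule might look like, here's a minimal sketch in Python using only the standard library's ast module. It checks the "no more than one decision per method" rule by counting branching constructs in each function of some sample source code. The sample functions and the exact set of nodes counted as "decisions" are my own illustrative choices, not a prescribed standard.

```python
import ast

# Node types we'll count as "decisions" - an illustrative choice,
# roughly corresponding to branch points in the code.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.BoolOp)

def decisions_per_function(source):
    """Map each function name to the number of decision points it contains."""
    tree = ast.parse(source)
    counts = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            counts[node.name] = sum(
                isinstance(child, DECISION_NODES)
                for child in ast.walk(node)
            )
    return counts

# Hypothetical code under inspection: one compliant function,
# one that breaks the one-decision rule.
SAMPLE = '''
def simple(x):
    if x > 0:
        return x
    return 0

def too_branchy(x, y):
    if x > 0:
        if y > 0:
            return x + y
    return 0
'''

violations = {name: n
              for name, n in decisions_per_function(SAMPLE).items()
              if n > 1}
print(violations)  # flags too_branchy, which makes two decisions
```

Wired into the build as an ordinary test, a rule like this fails loudly the moment someone writes an over-complicated method, rather than waiting for a reviewer to notice.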

Be it about complexity, dependencies, performance, scalability, or even how much entropy an algorithm generates, we can be more scientific about these things if we're of a mind to.
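For instance, a memory-footprint goal like the 100KB-per-session example above can be measured rather than argued about. Here's a rough sketch using Python's standard tracemalloc module; the Session class is entirely hypothetical, standing in for whatever a real application allocates per logged-in user.

```python
import tracemalloc

class Session:
    """Hypothetical per-user session state, for illustration only."""
    def __init__(self, user_id):
        self.user_id = user_id
        self.history = [0] * 1000  # illustrative per-session data

BUDGET_BYTES = 100 * 1024  # the 100KB-per-session goal from the text

# Measure the allocations made while creating one session.
tracemalloc.start()
before = tracemalloc.take_snapshot()
session = Session(user_id=42)
after = tracemalloc.take_snapshot()
tracemalloc.stop()

footprint = sum(stat.size_diff for stat in after.compare_to(before, "lineno"))
print(footprint)
assert footprint <= BUDGET_BYTES, f"session uses {footprint} bytes, over budget"
```

Run frequently, a test like this turns "the system should be frugal with memory" from an aspiration into a fact the build can verify.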

My own experiences have taught me that, when design goals are ambiguous and/or not tested frequently, the software doesn't meet those goals. Because you get what you measure.

Both of these things are not just do-able in a test-driven approach, but things I'd highly recommend teams do. Not only do they make TDD more effective at addressing the bigger picture, but these practices are themselves made more effective by being test-driven. That's a win-win in my book.







