August 2, 2012

...Learn TDD with Codemanship

Back To Basics #1 - Software Should Have Testable Goals

This is the first in a series of 10 posts covering the most basic principles of software development. This is me thinking out loud about what I would seek to impart to someone learning how to be a software developer (or to be a better software developer), without ruining their delicate minds with hype, buzzwords, brand names and snake oil.

Why?

No, seriously, though. Why?

Whenever I ask a software team this question, the response is usually a lot of hand-waving, management-speak and magic beans.

Most teams don't know why they're building the software they're building. Most customers don't know why they're asking them to either.

If I could fix only one thing in software development (as opposed to no things, which is my current best score), it would be that teams should write software for a purpose.

By all means, if it's your time and your money at stake, play to your heart's content. Go on, fill your boots.

But if someone else is picking up the cheque, then I feel we have a responsibility to try and give them something genuinely worthwhile for their money.

Failing to understand the problem we're trying to solve is the number one failure in software development. It stands to reason: how can we hope to succeed if we don't even know what the aim of the game is?

It will always be the first thing I test when I'm asked to help a team. What are your goals, and how will you know when you've achieved them (or are getting closer to achieving them)? How will you know you're heading in the right direction? How can one measure progress on a journey to "wherever"?

Teams should not only know what the goals of their software are, but those goals need to be articulated in a way that makes it possible to know unambiguously if those goals are being achieved.

As far as I'm concerned, this is the most important specification, since it describes the customer's actual requirements. Everything else is a decision about how to satisfy those requirements. As such, far too many teams have no idea what their actual requirements are. They just have proposed solutions.

Yes, a use case specification is not a business requirement; it's a system design. Ditto user stories. A wireframe outline of a web application is very obviously a design. Acceptance tests of the BDD variety are also design details. Anything expressed in terms of system features is a design.

Accepted wisdom, when presented with a feature request we don't understand the need for, is to ask "why?" In my experience, needing to ask "why?" is a symptom that we've been putting the cart before the horse, and doing things arse-backwards.

We should have started with the why and figured out what features or properties or qualities our software will need to achieve those goals.

Not having the goals clearly articulated has a knock-on effect. Many other ills reported in failed projects seem to stem from the lack of testable goals. Most notably, poor reporting of progress.

How can we measure progress if we don't know where we're supposed to be heading? "Hey, Dave, how's that piece of string coming?" "Yep, good. It's getting longer."

But also, when the goals are not really understood, people can have unrealistic expectations about what the software will do for them. Or rather, what they'll be able to do with the software.

There's also the key problem of knowing when we're "done". I absolutely insist that teams measure progress against tested outcomes. If it doesn't pass the tests, it's 0% done. Measuring progress against tasks or effort leads to the Hell of 90% Done, where developers take a year to deliver 90% of the product, and then another 2 years to deliver the remaining 90%. We've all been there.
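To make the "0% done or done" idea concrete, here's a minimal sketch (the feature names and pass/fail data are invented for illustration) of progress measured purely against tested outcomes: a feature only counts when every one of its acceptance tests passes, so there's no 90% limbo.

```python
# Hypothetical sketch: progress measured only against tested outcomes.
# A feature is either done (all its acceptance tests pass) or 0% done.

def progress(features):
    """Fraction of features whose acceptance tests all pass."""
    done = sum(1 for tests in features.values() if all(tests))
    return done / len(features)

# Each feature maps to the pass/fail results of its acceptance tests.
features = {
    "search catalogue": [True, True, True],
    "reserve title":    [True, False],   # one failing test => 0% done
    "donate title":     [True, True],
}

print(f"{progress(features):.0%} done")  # 2 of 3 features fully pass
```

The point isn't the arithmetic; it's that partial credit for untested work never enters the calculation.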

But even enlightened teams, who measure progress entirely against tested deliverables, are failing to take into account that their testable outcomes are not the actual end goals of the software. We may have delivered 90% of the community video library's features, but will the community who use it actually make the savings on DVD purchases and rentals they're hoping for when the system goes live? Will the newest titles be available soon enough to satisfy our film buffs? Will the users donate the most popular titles, or will it all just be the rubbish they don't want to keep any more? Will our community video library just be 100 copies of "The Green Lantern"?
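A goal like the video library's savings target can itself be written as a test. This is a hypothetical sketch (the 20% target, the spend figures and the `monthly_saving` helper are all invented for illustration); the real numbers would come from measuring member spend before and after go-live.

```python
# Hypothetical sketch of a goal-level test for the community video library.
# Assumed goal: members collectively save at least 20% on what they used
# to spend on DVD purchases and rentals.

def monthly_saving(baseline_spend, spend_after_go_live):
    """Relative saving against the pre-launch baseline."""
    return (baseline_spend - spend_after_go_live) / baseline_spend

# Illustrative figures, as if surveyed before and after launch.
baseline = 4000.00      # total monthly member spend before the library
after_launch = 3100.00  # total monthly member spend three months in

saving = monthly_saving(baseline, after_launch)
assert saving >= 0.20, f"Goal not met: only {saving:.0%} saved"
print(f"Goal achieved: {saving:.0%} saved")
```

Notice this test says nothing about features or screens. It could pass or fail regardless of how much software we shipped, which is exactly what makes it a test of the goal rather than of the design.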

It's all too easy for us to get wrapped up in delivering a solution and lose sight of the original problem. Information systems have a life outside of the software we shoehorn into them, and it's a life we need to really get to grips with if we're to have a hope of creating software that "delights".

In the case of our community video library, if there's a worry that users could be unwilling to donate popular titles, we could perhaps redesign the system to allow users to lend their favourite titles for a fixed period, and offer a guarantee that if it's damaged, we'll buy them a new copy. We could also offer them inducements, like priority reservations for new titles. All of this might mean our software will have to work differently.

So, Software Development Principle #1 is that software should have testable goals that clearly articulate why we're creating it and how we'll know if those goals are being achieved (or not).

Coming soon, Back To Basics #2.
