October 4, 2014


How I Deal With "Bugs" On Payment-On-Results Projects

Those of us who do bits and bobs of development work for clients, as I occasionally do, know how important it is to get absolute clarity about what we've agreed to deliver.

I'm a big fan of getting paid on results, rather than just being paid for my time and expenses.

The upside is that my time remains my own; I don't respond well to clients attempting to manage it for me, especially as my weeks can get complicated, with multiple concerns to address for multiple clients.

The potential downside is that I don't deliver what the client thought we'd agreed I'd deliver. And so extra effort and care need to be put into how we agree these things.

Driving design with executable acceptance tests can help enormously in this respect. Books like Gojko Adzic's Specification By Example explain expertly how acceptance tests can lead us all to a clear shared understanding of what the software must do.

On fixed-price - or payment-on-results - development deals, bugs can be a contentious issue. My preferred way of billing is to exclude the cost of fixing bugs. They are, after all, my failure.

But sneaky clients, given enough leeway, can try to pass off new features and change requests as bug reports, hoping to get them done for free. I'm distinctly uncomfortable with the industry's traditional solution to this problem, which is to bill for everything - and so make the customer pay for the developer's mistakes.

Executable acceptance tests can provide much-needed clarity about what is a bug and what isn't.

If any of the agreed tests fail, then it's definitely a bug. Therefore, I don't deliver the software unless all the acceptance tests pass.

Let's say, for example, we agreed that a feature to transfer credits from one user's account to another's should debit the payer and credit the payee. I deliver software that passes the tests we agreed for that feature. But none of those tests defined what should happen if the payer and the payee are the same user account. The customer might try to report it as a bug if the software allows it, but when acceptance criteria are precisely defined using tests, I classify that as a change request. The customer is asking me to make the software do something we never agreed it should. It is undefined behaviour.
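
To make that concrete, here's a minimal sketch of what one of those agreed acceptance tests might look like as a pytest test. The Account class and transfer_credits function are hypothetical names invented for illustration; the real names and shape would come from whatever we agreed with the client:

```python
# A minimal sketch of an agreed, executable acceptance test.
# Account and transfer_credits are hypothetical names for illustration.

class Account:
    def __init__(self, balance):
        self.balance = balance


def transfer_credits(payer, payee, amount):
    # The behaviour we agreed: debit the payer, credit the payee.
    payer.balance -= amount
    payee.balance += amount


def test_transfer_debits_payer_and_credits_payee():
    payer = Account(balance=100)
    payee = Account(balance=50)

    transfer_credits(payer, payee, amount=30)

    assert payer.balance == 70
    assert payee.balance == 80


# Note: nothing here pins down what happens when payer and payee are
# the same account. That behaviour is undefined, so a complaint about
# it is a change request, not a bug.
```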

There is one kind of undefined behaviour, though, that I would classify as a bug and fix at my own cost. When development starts, we'll agree a set of behaviours that should never be allowed - for example, when would anyone ever want their software to throw an unhandled exception?

So if the implementation allows an undefined behaviour to trigger one of these gotchas - e.g., a null reference exception - I fully accept that it's a programming error, and that my code should either not allow that scenario to arise or handle it meaningfully. I usually refer to these as universal acceptance criteria: requirements about things that should never happen, regardless of the problem domain.

Universal acceptance criteria are almost exclusively about low-level programming errors - invoking methods on null objects, referencing an array index that's out of bounds, and so on - and can often be eliminated with a bit of extra care. Static analysis tools, for example, can find many of these problems, and code inspections help enormously too.
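
A universal acceptance criterion can itself be made executable. Here's a minimal sketch using property-based testing with Python's hypothesis library; find_user is a hypothetical function invented for illustration, standing in for any operation in the system:

```python
# A sketch of a universal acceptance criterion as a property-based test,
# using the hypothesis library (pip install hypothesis). The rule being
# checked is domain-independent: no input should ever produce an
# unhandled low-level error like IndexError or AttributeError.

from hypothesis import given, strategies as st


def find_user(users, index):
    # Hypothetical operation for illustration: returns the user at the
    # given position, or None rather than letting IndexError escape.
    if users is None or index < 0 or index >= len(users):
        return None
    return users[index]


@given(
    users=st.one_of(st.none(), st.lists(st.text())),
    index=st.integers(),
)
def test_find_user_never_raises(users, index):
    # Universal criterion: any combination of inputs either returns a
    # result or None - it must never blow up with an unhandled exception.
    result = find_user(users, index)
    assert result is None or isinstance(result, str)
```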

This approach neatly compartmentalises work into three distinct categories:

1. Things we explicitly agreed the software will do (requirements)
2. Things we didn't explicitly agree the software will do (more requirements)
3. Things we explicitly agreed the software must never do (bugs)

Done correctly, only the third category is "bugs", and they should be rare occurrences, leaving us more time to focus on what the customer needs.





Posted on October 4, 2014