July 1, 2007


"Too Much Testing". Grrr!

You might already know that I firmly believe that few, if any, software development teams go "too far" with quality assurance. So I get a little heated when I hear people debating "how much testing is too much?"

To me, this is like asking "how much cheese would it take to sink a battleship?" There probably is an answer - a real amount of cheese that really would sink a battleship. But very few of us are ever likely to see that amount of cheese in one place in our lifetimes.

It certainly is true that you can have too many testers, and that you can spend too much time and/or money on the testing activity, but that doesn't necessarily mean that you actually did too much testing, any more than spending too much money on software development automatically means that you'll get "too much" useful code.

I've watched teams of 10+ testers achieve pitifully low test coverage on relatively modest code bases, while a single tester managed to eke out very high coverage just by being better at it. We'd do well to remember that, just as there are developers who are literally 10 times as productive as others, the same also goes for testers.

Anyway, the point I'm struggling to make this early on a Sunday morning (crikey, I must be bored) is that you can achieve much higher levels of test coverage by working smarter, and it's perfectly feasible to take testing to a completely new level without incurring significant increases in expense or time.
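
To make "working smarter" a bit more concrete: a single data-driven test can check dozens of cases for roughly the cost of writing one. Here's a minimal sketch using JUnit 4's Parameterized runner - the leap year function and its test cases are invented purely for illustration.

    import java.util.Arrays;
    import java.util.Collection;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    import static org.junit.Assert.assertEquals;

    // One parameterised test, many cases: each row below becomes a
    // separate test run, so extending coverage is as cheap as adding a row.
    @RunWith(Parameterized.class)
    public class LeapYearTest {

        @Parameters
        public static Collection<Object[]> cases() {
            return Arrays.asList(new Object[][] {
                { 1999, false },  // ordinary non-leap year
                { 2004, true },   // divisible by 4
                { 1900, false },  // century years are not leap years...
                { 2000, true },   // ...unless divisible by 400
            });
        }

        private final int year;
        private final boolean expected;

        public LeapYearTest(int year, boolean expected) {
            this.year = year;
            this.expected = expected;
        }

        // Hypothetical function under test, inlined here to keep
        // the sketch self-contained.
        static boolean isLeapYear(int year) {
            return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
        }

        @Test
        public void classifiesYearCorrectly() {
            assertEquals(expected, isLeapYear(year));
        }
    }

Ten minutes of this sort of thing beats a week of re-keying the same inputs by hand.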

The myth of "too much testing" is probably born of very linear management thinking. The project manager sees a team of 10 testers, all working flat out to meet deadlines (which, naturally, they never do), delivering code with high defect densities. If the team is asked to cut defect densities in half, the manager might extrapolate that doubling the coverage will therefore require 20 testers.

This is up there with the twice-as-many-developers-will-deliver-twice-as-fast kind of management thinking - the classic 9-women-having-1-baby-in-a-month scenario that Fred Brooks warned us about in The Mythical Man-Month more than three decades ago.

You do not need twice as many testers to get twice the test coverage. Smaller teams of more highly skilled testers will get far better results. But if developers have been ruthlessly commoditised in recent years, testing professionals have been reduced to the status of sandbags. Application needs shoring up? Throw a few more testers on it!

I actually think that a really good tester is worth his or her weight in gold. Sadly, test professionals seem to occupy a lower social strata (or should that be "stratum"?) in the software development community. Even business analysts are taken more seriously (yes, that low!). Managers seem to view testers as flesh-coloured automata - human test harnesses doomed to repeat (almost) the same processes over and over and over and over and over and over and over again.

Of course, a "really good tester" would automate their tests and have a machine do the over-and-over bit for them while they get to work automating more tests, or exploring the system to find new test scenarios, or canoodling in the stationery cupboard with Ms. Jones from marketing. Hey, it's all good. Just as long as we get the kind of coverage we need to help us deliver better software.
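
For anyone who hasn't seen it done, that over-and-over bit can be as simple as this - a minimal JUnit sketch, where DiscountCalculator and its 10% discount rule are made up for illustration.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class DiscountCalculatorTest {

        // Hypothetical class under test: applies a flat 10% discount
        // to orders of 100.00 or more. Inlined to keep the sketch
        // self-contained.
        static class DiscountCalculator {
            double discountedTotal(double orderTotal) {
                return orderTotal >= 100.00 ? orderTotal * 0.9 : orderTotal;
            }
        }

        private final DiscountCalculator calculator = new DiscountCalculator();

        @Test
        public void ordersUnderTheThresholdPayFullPrice() {
            assertEquals(99.99, calculator.discountedTotal(99.99), 0.001);
        }

        @Test
        public void ordersAtOrAboveTheThresholdGetTenPercentOff() {
            assertEquals(90.00, calculator.discountedTotal(100.00), 0.001);
            assertEquals(135.00, calculator.discountedTotal(150.00), 0.001);
        }
    }

Once checks like these are scripted, the machine repeats them on every build, for free, forever - while the tester gets on with more interesting work.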

Anyway, back to the headline story - the accusation of "too much testing". Like I said, it is indeed possible to have "too much test coverage". Just as it's possible to sink a battleship with "too much cheese". I look forward to the day when I'm lucky enough to bear witness to either spectacle.
Posted on July 1, 2007