January 7, 2018

Learn TDD with Codemanship

Do Your Automated Tests Give You Confidence In Your Code?

I ran a little poll on the @codemanship Twitter account asking whether developers' automated tests give them high confidence in their code.

The responses suggest many developers don't put a lot of faith in their automated tests for detecting bugs. The aim of test automation is to dramatically lower the cost and execution time of regression testing our code so that we're alerted to new bugs sooner rather than later.

The ultimate goal is to have high confidence at any point in time that the software works, and is therefore fit for release. This is a foundational requirement of Continuous Delivery - software should always be shippable.


Because I examine many test suites every year, I think I have some insight into this problem. Firstly, most teams that have automated tests don't have particularly good test suites. Much of the code isn't reached by them at all. And many of the tests ask loose questions, leaving gaps in their assertions big enough to drive a bus-load of bugs through.
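To illustrate what I mean by loose questions, here's a rough sketch - the Insurance class and its quote() method are made-up stand-ins, not real code from anywhere - where the first test will pass for almost any answer, while the second actually pins the behaviour down.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class InsuranceQuoteTest {

    // Made-up domain code, purely for illustration.
    static class Insurance {
        double quote(int age, String city, int claims) {
            return 500.0; // imagine a real premium calculation here
        }
    }

    // A loose question: any positive premium passes, however wrong it is.
    @Test
    void quotesAPremium() {
        assertTrue(new Insurance().quote(30, "London", 0) > 0);
    }

    // A precise question: a wrong calculation will actually fail this test.
    @Test
    void thirtyYearOldWithNoClaimsPaysBasePremium() {
        assertEquals(500.0, new Insurance().quote(30, "London", 0), 0.01);
    }
}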

Teams quickly learn, after the first few releases, that just because their tests are passing doesn't mean the code is working. But there seems to be little appetite for beefing up their test suites to plug the leaks that bugs are pouring in through.

Very few teams test their tests to see how effective they are at catching bugs. Even fewer teams target more exhaustive testing at "load-bearing" code, or even have any awareness of which parts of the code present the highest risk.

Happy Path thinking still dominates the developer mindset. Most of us don't think like testers. We want to show that our code works, not hunt for the edge cases where it doesn't. So our tests tend to skip over those edge cases.

In code reviews - for those teams that do them on any regular basis - test assurance tends not to be one of the things reviewers look for. At best, line coverage is checked. If the coverage report shows the new or changed code is executed in a test, that's spiffy for most dev teams. And, to be fair, most teams don't even check for that. You'd be shocked at how many teams are genuinely surprised to learn how low their coverage is. "But we do TDD...!" Evidently not much of the time.

Teams that practice TDD fairly rigorously tend to have test suites they can put more faith in. But, even as a TDD trainer and mentor with two decades of experience doing it, I regularly feel the need to take testing further after my design is complete.

I'm a big fan of guided inspection, reading the code carefully, looking for test cases I may have missed. I'm also big on parameterised testing, because it can buy you potentially massive amounts of test coverage with surprisingly little extra test code.
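For example, JUnit 5's junit-jupiter-params lets you turn one test method into a whole table of cases. This is only a sketch - the leap year rules here stand in for whatever logic you're actually testing:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class LeapYearTest {

    // Made-up example logic, purely for illustration.
    static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    // One test method, many cases - including the awkward century years.
    @ParameterizedTest
    @CsvSource({
        "2016, true",
        "2017, false",
        "1900, false",  // divisible by 100 but not by 400
        "2000, true"    // divisible by 400
    })
    void identifiesLeapYears(int year, boolean expected) {
        assertEquals(expected, isLeapYear(year));
    }
}

Each row is one case, so covering another edge case you discover is a one-line change.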

And, believe it or not, to some extent you can also automate exploratory testing. One example is the simple Java prototype I threw together last year for generating combinations of inputs for use in JUnit tests. Another example is tools that randomly generate input data, like Haskell's QuickCheck (and its many language-specific ports, such as JCheck).
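You don't even need a special tool to get a taste of this. Here's a rough, hand-rolled sketch of the QuickCheck idea in plain JUnit 5 - the toPence() and toPounds() functions are made-up placeholders - checking one property against a large number of randomly generated inputs:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.Random;

import org.junit.jupiter.api.RepeatedTest;

class MoneyRoundTripTest {

    private final Random random = new Random();

    // Made-up conversion functions, purely for illustration.
    static long toPence(double pounds) {
        return Math.round(pounds * 100);
    }

    static double toPounds(long pence) {
        return pence / 100.0;
    }

    // A QuickCheck-style property: for any amount (to the nearest penny),
    // converting to pence and back should give the same amount.
    @RepeatedTest(1000)
    void conversionRoundTrips() {
        double pounds = Math.round(random.nextDouble() * 1_000_000) / 100.0;
        assertEquals(pounds, toPounds(toPence(pounds)), 0.005);
    }
}

A dedicated property-based testing tool adds smarter input generation and shrinking of failing cases, but the underlying idea is this simple.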

I also find simple test analysis techniques - truth tables, decision tables, state transition models and program flow models - very useful for discovering edge cases I might have missed (see the turnstile sketch below). Think you're thinking like a tester? Read the first few chapters of Robert Binder's Testing Object-Oriented Systems and think again.
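To make the state transition idea concrete - the turnstile here is just the classic textbook stand-in, not an example taken from Binder's book - each row of a state transition table (current state, event, next state) maps directly onto a test:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class TurnstileTest {

    enum State { LOCKED, UNLOCKED }

    // Made-up state machine, purely for illustration.
    static class Turnstile {
        State state = State.LOCKED;

        void insertCoin() {
            state = State.UNLOCKED;
        }

        void push() {
            state = State.LOCKED;
        }
    }

    // (LOCKED, insertCoin) -> UNLOCKED
    @Test
    void coinUnlocksALockedTurnstile() {
        Turnstile turnstile = new Turnstile();
        turnstile.insertCoin();
        assertEquals(State.UNLOCKED, turnstile.state);
    }

    // (LOCKED, push) -> LOCKED : the edge case a state model makes obvious
    @Test
    void pushingWhileLockedLeavesItLocked() {
        Turnstile turnstile = new Turnstile();
        turnstile.push();
        assertEquals(State.LOCKED, turnstile.state);
    }

    // (UNLOCKED, push) -> LOCKED
    @Test
    void pushingThroughLocksItAgain() {
        Turnstile turnstile = new Turnstile();
        turnstile.insertCoin();
        turnstile.push();
        assertEquals(State.LOCKED, turnstile.state);
    }
}

Sketching the table first makes the missing transitions jump out; the tests then more or less write themselves.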

So, if you're one of the 58% who said they don't have high confidence in their automated tests, it may be time to take your automated testing to the next level.





