October 15, 2016

...Learn TDD with Codemanship

If Your Code Was Broken, Would You Know?

I've been running a little straw poll among friends and clients, as well as on social media, to get a feel for what percentage of development teams routinely (or continuously) measure the level of assurance their automated regression tests give them.

For me, it's a fundamental question: if my code was broken, would I know?

The straw poll suggests that about 90% of teams don't ask that question often, and 80% don't ask it at all.

The whole point of automated tests is to give us early, cheap detection of new bugs that we might have introduced as we change the code. Their potential impact is so profound that Michael Feathers - in his book Working Effectively With Legacy Code - defines "legacy code" as code for which we have no automated tests.

I've witnessed first-hand the impact automating regression tests can have on delivery schedules and development costs. Which is why that question is often on my mind.

The best techniques I know for "testing your tests" are:

1. A combination of the "golden rule" of Test-Driven Development (only write source code if a failing test requires it, so that all the code is executed by tests) and running tests to make sure their assertions fail when the result is wrong.

2. Mutation testing - deliberately introducing programming errors to see if the tests catch them.

I put considerable emphasis on the first practice. As far as I'm concerned, it's fundamental to TDD, and a habit test-driven developers need to get into. Before you write the simplest code to pass a test, make sure it's a good test. If the answer was wrong, would this test fail?
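
To make that check concrete, here's a minimal sketch in Java with JUnit 4. The class names and the VAT rule are made up purely for illustration; the point is the habit of watching the assertion fail for a wrong answer before writing the code that makes it pass.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class VatCalculatorTest {

        @Test
        public void addsVatAtTwentyPercent() {
            VatCalculator calculator = new VatCalculator();
            // Before writing the simplest code to pass this test, run it against
            // a deliberately wrong implementation (e.g. one that just returns the
            // input unchanged) and check that this assertion fails.
            assertEquals(120.0, calculator.addVat(100.0), 0.001);
        }
    }

    // Written only after watching the test fail for the wrong answer.
    class VatCalculator {
        double addVat(double net) {
            return net * 1.2;
        }
    }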

The second practice, mutation testing, is rarely applied by teams. Which is a shame, because it's a very powerful technique. Code coverage tools only tell us what code definitely isn't being executed by tests. Mutation testing tells us what code isn't being meaningfully tested, even if it is being executed by tests. It specifically asks: "If I broke this line of code, would any tests fail?"
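
Here's a hypothetical example, in Java, of the kind of check a mutation testing tool automates. The names are made up; the boundary change from > to >= is typical of the mutations these tools introduce.

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    // Production code under test: decides whether an order earns a discount.
    class DiscountPolicy {
        boolean qualifies(double orderTotal) {
            // A mutation testing tool might change this to: orderTotal >= 100.0
            return orderTotal > 100.0;
        }
    }

    public class DiscountPolicyTest {

        // Executes the line - so a coverage tool reports it as covered - but
        // survives the >= mutant, because 150.0 is nowhere near the boundary.
        @Test
        public void largeOrderQualifies() {
            assertTrue(new DiscountPolicy().qualifies(150.0));
        }

        // Kills the mutant: an order of exactly 100.0 must not qualify.
        @Test
        public void orderOnTheBoundaryDoesNotQualify() {
            assertFalse(new DiscountPolicy().qualifies(100.0));
        }
    }

With only the first test in place, the mutant survives, and the tool flags that line as covered but not meaningfully tested.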

The tools for automated mutation testing have improved greatly in recent years, and support across programming languages is growing. If you genuinely want to know how much assurance your tests can give you - i.e., how much confidence you can have that the code really works - then you need to give mutation testing a proper look.

Here are some mutation testing tools that might be worth having a play with:

Java - PIT

C# - VisualMutator

C/C++ - Plextest

Ruby - Mutant

Python - Cosmic Ray

PHP - Humbug
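
To give a sense of how little ceremony is involved, PIT runs from a standard Maven build: declare the pitest-maven plugin in your pom.xml (see pitest.org for the current version and configuration options), then run the goal below. This is just a sketch of the documented quick start.

    mvn org.pitest:pitest-maven:mutationCoverage

PIT then generates a report showing, class by class, which mutants your tests killed and which survived.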

Posted on October 15, 2016