September 24, 2018

...Learn TDD with Codemanship

Why I Throw Away (Most Of) My Customer Tests

About a decade ago, when BDD frameworks were all new and shiny, some dev teams experimented with relying entirely on their customer tests. This predictably led to some very slow-running test suites, and an upside-down test pyramid.

It's very important that the majority of our automated tests run fast, to maintain the pace of development. An upside-down test pyramid becomes a severe bottleneck, slowing down the "metabolism" of delivery.

But it is good to work from precise, executable specifications, too. So I still recommend that teams work with their customers, using tools like Cucumber and FitNesse, to build a shared understanding of what is to be delivered.
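
For illustration, here's roughly what one of those executable specifications might look like when automated with Cucumber's Java bindings. This is just a sketch - the scenario, the step wording and the Customer and Order classes are all invented for the example:

    // A hypothetical Gherkin scenario agreed with the customer:
    //
    //   Scenario: Loyal customers get a 10% discount
    //     Given a customer who has placed 10 previous orders
    //     When they place an order for 100.0
    //     Then the order total is 90.0

    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.When;
    import io.cucumber.java.en.Then;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    public class DiscountSteps {

        private Customer customer;   // hypothetical domain classes, sketched below
        private Order order;

        @Given("a customer who has placed {int} previous orders")
        public void aCustomerWhoHasPlacedPreviousOrders(int previousOrders) {
            customer = new Customer(previousOrders);
        }

        @When("they place an order for {double}")
        public void theyPlaceAnOrderFor(double amount) {
            order = customer.placeOrder(amount);
        }

        @Then("the order total is {double}")
        public void theOrderTotalIs(double expectedTotal) {
            assertEquals(expectedTotal, order.total(), 0.001);
        }
    }

    // Minimal sketch of the domain code being specified:
    class Customer {
        private final int previousOrders;
        Customer(int previousOrders) { this.previousOrders = previousOrders; }
        Order placeOrder(double amount) {
            double discount = previousOrders >= 10 ? 0.1 : 0.0;
            return new Order(amount * (1 - discount));
        }
    }

    class Order {
        private final double total;
        Order(double total) { this.total = total; }
        double total() { return total; }
    }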

What happens to these customer tests after the software's delivered, though? We've invested time and effort in agreeing them and then automating them. So we should keep them, right?

Well, not necessarily. Builders invest a lot of time and effort into erecting scaffolding, but after the house is built, the scaffolding comes down.

The process of test-driving an internal design with fast-running unit tests - by which I mean tests that ask one question and don't involve external dependencies - tends to leave us with the vast majority of our logic tested at that level. That's the base of our testing pyramid, as it should be.

So I now have customer tests and unit tests asking the same questions. One of them is surplus to requirements for regression testing, and it makes most sense to retain the faster tests and discard the slower ones.
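
To make that concrete: the discount rule from the earlier Cucumber sketch would typically also be pinned down by a fast unit test like the one below (again, the names are invented), which asks exactly the same question in milliseconds, with no Cucumber wiring and no external dependencies:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    public class DiscountTest {

        @Test
        public void loyalCustomersGetTenPercentDiscount() {
            // Same question as the customer test, answered directly
            // against the hypothetical Customer and Order classes.
            Customer customer = new Customer(10);
            Order order = customer.placeOrder(100.0);
            assertEquals(90.0, order.total(), 0.001);
        }
    }

When both exist, it's the slower Cucumber scenario, not this test, that becomes a candidate for the archive.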

I keep a cherry-picked handful of customer tests just to check that everything's wired together correctly in my internal design - maybe a few dozen key happy paths. The rest get archived and quite possibly never run again, or certainly not on a frequent basis. They aren't maintained, because those features or changes have been delivered. Move on.




