March 20, 2016

Property-based Testing: Return of the Son of Formal Methods

For many years now, I've been biding my time, waiting for the day when Formal Methods go mainstream.

If you're new to the concept, Formal Methods are essentially just the mathematically precise specification and testing of software and systems.

While the verification side of Formal Methods can be taken to very expensive extremes, depending on how critical the integrity of your code is, every formal method starts with a specification.

This needs to be written in an unambiguous - and therefore testable - specification language. In the 80s and 90s, formal specification languages like Z (and its object-oriented nephew, Object Z), VDM and B were invented for this purpose.

These languages were somewhat unapproachable for mainstream programmers, relying on mathematical notations that only somebody with a grounding in first-order logic and set theory was likely to be comfortable with. Programmers, typically, work with text.

So, as UML began to take off, Formal Methods folks invented text-based specification languages for adding precise rules to their system models, like the Object Constraint Language. Naively, some of the inventors believed that OCL would be "friendly" enough to be understood by business stakeholders. Realistically, it was friendly enough to be used by programmers.

But OCL never really gained widespread acceptance, either. And that was chiefly for two reasons: firstly, programmers already have text-based languages in which they can precisely specify rules - namely, programming languages. Martin Fowler once commented to me at a workshop on Agile and Formal Methods many moons ago that we may as well write the code if we're going to precisely specify it. It was a sentiment echoed by many in the Agile community.

Interestingly, we had no such qualms about writing executable tests for our code. That was the other reason formal specification never really took off: Test-driven Development kind of sort of stole its thunder.

There is little doubt - well, none really - that TDD can produce software that's far more reliable than average. And arguably, in the hands of the right programmer, TDD may be capable of matching the quality standards achieved in safety-critical software development. Someone closely associated with OCL, John Daniels, made that claim to me after I gave a talk on TDD - again, a looong time ago. So it's not a new idea, that's for sure.

I also now know from practical experience that John was right: you can push TDD to the point - in terms of the discipline - where high-integrity software is achievable.

The way I've done it, and seen it done, is through refactoring the test code into a form that can serve as an easy jumping-off point for getting much higher test assurance relatively cheaply. Most commonly, refactoring a bunch of similar test methods into a single parameterised test opens up many possibilities for testing further, like combinatorial testing, random testing, and even model checking.
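
To illustrate, here's roughly what that refactoring might leave us with in JUnit 4 - a minimal sketch of my own, using a square root function as the code under test (Math.sqrt stands in for it, and the example values are mine):


import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

import static org.junit.Assert.assertEquals;

@RunWith(Parameterized.class)
public class SquareRootTest {

    @Parameters
    public static Collection<Object[]> examples() {
        // Each row is one of the original example-based tests: { input, expected root }
        return Arrays.asList(new Object[][] {
            { 16.0, 4.0 },
            { 25.0, 5.0 },
            { 144.0, 12.0 }
        });
    }

    private final double input;
    private final double expectedRoot;

    public SquareRootTest(double input, double expectedRoot) {
        this.input = input;
        this.expectedRoot = expectedRoot;
    }

    @Test
    public void squareRootOfInput() {
        assertEquals(expectedRoot, Math.sqrt(input), 0.0);
    }
}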

Tools that can generate very large numbers of inputs, through which we can test our code much more exhaustively, can help us make the leap from "very reliable" to "extremely reliable" software for a relatively small investment. For this reason, I encourage developers doing TDD to refactor to parameterised tests when the opportunity arises. They're a good practical foundation for writing high-integrity code.

When we do TDD, usually we work with specific examples of inputs and expected outputs. So we write assertions like:


assertTrue(sqrroot(16) == 4);


This doesn't work for test inputs that are being generated programmatically, though. For random and combinatorial testing, and certainly for model-checking, we need to generalise our test assertions, because we don't know the expected result in advance. So we need to write assertions that can be evaluated for any test input.


assertTrue(sqrroot(input) * sqrroot(input) == input);


While it's becoming fashionable to refer to these kinds of generalised assertions as "properties", and to testing with them as "property-based testing", this is - in all essence - formal specification.
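
Here's one way that generalised assertion might be exercised against a large number of random inputs, without any special library - again just a sketch of my own (Math.sqrt standing in for sqrroot, and note the tolerance: exact == comparisons rarely hold for floating-point arithmetic):


import java.util.Random;

import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class SquareRootPropertyTest {

    @Test
    public void rootSquaredGivesBackTheInput() {
        Random random = new Random();
        for (int i = 0; i < 100000; i++) {
            double input = random.nextDouble() * 1000000;
            double root = Math.sqrt(input);
            // The property must hold for every generated input,
            // to within a small relative tolerance.
            assertEquals(input, root * root, input * 1e-12);
        }
    }
}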

Styles vary, of course. Some advocate moving the assertions into the source code itself, to be evaluated at the exact point where the rule must hold, so that whenever that code is executed - perhaps only in debug mode - the rule is checked.

Writing assertions in the code about what should be true while it's executing is an idea cooked up in the very early days of computing, and advocated for checking the correctness of programs by Alan Turing, among others. Most modern programming languages have something like an assert keyword that lets us say "at this point, x should be true" (and if x isn't true, then our program is wrong).
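
For example, an average() function might carry its rule around with it like this - a sketch of my own, with illustrative names (in Java, asserts are only checked when the JVM is run with the -ea switch):


public static double average(double[] numbers) {
    // The rule: it never makes sense to ask for the average of no numbers.
    assert numbers != null && numbers.length > 0 : "average() called with no numbers";

    double sum = 0;
    for (double n : numbers) {
        sum += n;
    }
    return sum / numbers.length;
}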

All we need to do is plug in a whole bunch of randomly generated inputs, and we can check if average() is ever called with an empty array of numbers. If it is, then our software is broken.

Model checking tools like NASA's Java Pathfinder give us choices about how we specify the rules of our software. We can embed them in the source code using proprietary specification languages and conventions, or we can write them in JUnit tests as Theories.
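
Written as a JUnit Theory, the square root property might look something like this - once more a sketch, with Math.sqrt standing in and my own data points:


import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertEquals;
import static org.junit.Assume.assumeTrue;

@RunWith(Theories.class)
public class SquareRootTheory {

    @DataPoints
    public static double[] inputs = { 0.0, 1.0, 2.0, 16.0, 1000000.0, -4.0 };

    @Theory
    public void rootSquaredGivesBackTheInput(double input) {
        // The theory only claims to hold for non-negative inputs;
        // the assumption filters out the data points it doesn't apply to.
        assumeTrue(input >= 0);
        double root = Math.sqrt(input);
        assertEquals(input, root * root, input * 1e-12);
    }
}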

What all this adds up to is a "return of the son of Formal Methods". It may not be quite the formal specification I remember, but formal specification it most definitely is, even if it goes by a different name these days.

Personally, I strongly approve. Tool support for property-based testing needs to evolve, though, and to join hands with the best of the formal methods tools out there, like Pathfinder, so that we can finally have a practical workflow that takes us from vanilla example-based TDD all the way up to the most rigorous forms of verification.

I have higher hopes for property-based testing than I ever had for the formal specification languages of old, for two good reasons:

1. Programmers can write their specs in the same language they program in, so the learning curve is shallower

2. Programming languages themselves are evolving towards a more declarative style, making it easier to express sophisticated rules (see the sketch below)
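
For instance, Java 8's streams already let us state a rule like "the output of a sort is in non-descending order" almost as declaratively as a specification language would - a hypothetical sketch:


import java.util.stream.IntStream;

public class SortedProperty {

    // Declarative statement of the rule: every adjacent pair in the
    // output must be in non-descending order.
    static boolean isSorted(int[] xs) {
        return IntStream.range(0, xs.length - 1)
                        .allMatch(i -> xs[i] <= xs[i + 1]);
    }
}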

But I genuinely hope it catches on widely. I've been waiting a long time for it.
