March 5, 2015

Public Workshop - Intensive Advanced Unit Testing, Sat May 9th, London

A quick note about an upcoming public workshop I'm running in London on Saturday May 9th.

Intensive Advanced Unit Testing crams all the most useful bits of Codemanship's 2-day course into a single day, covering skills that can help us get higher assurance on the 10% of our code that needs to be especially reliable, as well as how to identify and target critical code in our systems.

Hands-on exercises will include Design By Contract, advanced parameterised testing, mutation testing to find the gaps in our existing tests, and automated test case discovery using random test data generators, combinatorial input generators and tools that analyse our code with symbolic execution and constraint solvers to take the guesswork/luck out of things.

We'll also, of course, be looking at one of the most powerful and overlooked forms of white-box testing - inspections.

All for the very affordable price of £109 (plus EventBrite fees), about 80% cheaper than many of our competitors - though, for this workshop, we have none anywhere in the world! (It's true: nobody else is running a public course like this.)

If, like me, you believe that TDD is a great foundation for building reliable software, but that sometimes - on code that really matters - we need to go further, then join us on May 9th.



March 1, 2015

Continuous Inspection at NorDevCon

On Friday, I spent a very enjoyable day at the Norfolk developers' conference NorDevCon (do you see what they did there?). It was my second time at the conference, having given the opening keynote last year, and it's great to see it going from strength to strength (attendance up 50% on 2014), and to see Norwich and Norfolk being recognised as an emerging tech hub that's worthy of inward investment.

I was there to run a workshop on Continuous Inspection, and it was a good lark. You can check out the slides, which probably won't make a lot of sense without me there to explain them - but come along to CraftConf in Budapest this April or SwanseaCon 2015 in September and I'll answer your questions.

You can also take a squint at (or have a play with) some code I knocked up in C# to illustrate a custom FxCop code rule (Feature Envy), showing how I implemented the example from the slides in a test-driven way.

I'm new to automating FxCop (and an infrequent visitor to .NET Land), so please forgive any naivety. Hopefully you get the idea. The key things to take away are: you need a model of the code (thanks, Microsoft.Cci.dll), you need a language to express rules against that model (thanks, C#), and you need a way to drive the implementation of rules by writing executable tests that fail (thanks, NUnit). The fun part is turning the rule implementation on its own code - eating your own dog food, so to speak. It throws up all sorts of test cases you didn't think of. It's a work in progress!

I now plan, before CraftConf, to flesh the project out a bit with 2-3 more example custom rules.

Having enjoyed a catch-up with someone who just happens to be managing the group at Microsoft who are working on code analysis tools, I think 2015-2016 is going to see some considerable ramp-up in interest as the tools improve and integration across the dev lifecycle gets tighter. If Continuous Inspection isn't on your radar today, you may want to put it on your radar for tomorrow. It's going to be a thing.

Right now, though, Continuous Inspection is very much a niche pastime. An unscientific straw poll on social media, plus a trawl of a couple of UK job sites, suggests that fewer than 1% of teams may be doing automated code analysis at all.

I predicted a few years ago that, as computers get faster and code gets more complex, frequent testing of code quality using automated tools would become more desirable and more do-able. I think we're just on the cusp of that new era. Today, code quality is an ad hoc concern, relying on hit-and-miss practices like pair programming - where many code quality issues get overlooked by a pair who have 101 other things to think about - and code reviews, where issues, if they get spotted at all in the to-and-fro, are flagged up long after anybody is likely to do anything about them.

In related news, after much discussion and braincell-wrangling, I've chosen the name for the conference that will be superseding Software Craftsmanship 20xx later this year (because craftsmanship is kind of done now as a meme). Watch this space.






February 17, 2015

Clean Code is a Requirements Discipline

Good morning, and welcome to my World of Rant.

Today's rant is powered by Marmite.

If you're one of those keeerrrraaazy developers who thinks that code should be reliable and maintainable, and have expressed that wrong-headed thought out loud and in public, then you've probably run into much more rational, right-thinking people - often people who aren't developers, because who would want to do that for a living? (yuck!) - who counter that it's more important to satisfy users' needs than to write Clean Code, and that a focus on the latter must necessarily come at the expense of the former.

i.e., people who focus on details must be losing sight of the Bigger Picture (TM).

I call bullshit.

First of all, the mentality that cares about making their software reliable and easy to understand and to change tends to just care, generally. I don't know about you, but I don't get much job satisfaction from writing beautiful code that nobody uses.

Secondly of all, it's a false dichotomy: as if we have to choose between useful software and Clean Code, and it's not possible to take care of building the right thing and building it right at the same time.

Good software developers are able to dive into the detail and then step back to look at the Bigger Picture (TM) with relative ease. The requirements discipline is part and parcel of software craftsmanship. It's misinformed to suggest that craftsmanship is all about code quality, just as it is to suggest that TDD is all about unit tests and internal design.

I'd go so far as to say that, not only are Clean Code and the Bigger Picture (TM) perfectly compatible, but in actuality they go together like Test-first and Refactoring. Teams really struggle to achieve one without the other.

Maintainable code requires, first and foremost, that the code be easy to read and understand. Literate programming demands that our code clearly "tells the story" of what the software does in response to user input, and in the user's language. That is to say, to write code that makes sense, we must have sense to make of it.

And, more importantly, how clean our code is has a direct impact on how easy it will be for us to change that code based on user feedback.

People tend to forget that iterating is the primary requirements discipline: with the best will in the world, and all the requirements analysis and acceptance testing voodoo we can muster, we're still going to need to take a few passes at it. And the more passes we take, the more useful our software is likely to become.

Software that gets used changes. We never get it right first time. Even throwaway code needs to be iterated. (Which is why the "But this code doesn't need to last, so we don't need to bother making it easy to change" argument holds no water.)

Software development is a learning process, and without Clean Code we severely hinder that learning. What's the point in getting feedback if it's going to be too costly to act on it? I shudder to think how many times I've watched teams get stuck in that mud, bringing businesses to their knees sometimes.

Which is why I count Clean Code as primarily a requirements discipline. It is getting the details right so that we can better steer ourselves towards the Bigger Picture (TM).

To think that there's a dichotomy between the two disciplines is to fundamentally misunderstand both.

Mmmm. Marmite.


February 13, 2015

Intensive TDD, Continuous Inspection Recipes & Crappy Remote Collaboration Tools

A mixed bag for today's post, while I'm at my desk.

First up, after the Intensive TDD workshop on March 14th sold out (with a growing waiting list), I've scheduled a second workshop on Saturday April 11th, with places available at the insanely low price of £30. Get 'em while they're hot!

Secondly, I'm busy working on a practical example for a talk I'm giving at NorDevCon on Feb 27th about Continuous Inspection.

What I'm hoping to do is work through a simple example based on my Dependable Dependencies Principle, where I'll rig up an automated code analysis wotsit to find the most complex, most depended upon and least tested parts of some code to give early warning about where it might be most likely to be broken and might need better testing and simplifying.

To compute this metric, you need three pieces of information:

* Cyclomatic Complexity of methods
* Afferent couplings per method
* Test coverage per method

Now, test coverage could mean different things. But for a short demonstration, I should probably keep it simple and fairly brute force - e.g., % LOC reached by the tests. Not ideal, but in a short session, I don't want to get dragged into a discussion about coverage metrics. It's also a readily-available measure of coverage, using off-the-shelf tools, so it will save me time in preparing and allow viewers to try it for themselves without too much fuss and bother.
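To give a flavour of what that wotsit might do once the three numbers have been harvested from the analysis tools, here's a minimal sketch in Java. The names and the formula are made up purely for illustration - the real rule would be whatever the team agrees on and writes executable tests for.

    import java.util.Comparator;
    import java.util.List;
    import java.util.stream.Collectors;

    // Hypothetical holder for the three measures, collected per method
    class MethodMetrics {
        final String methodName;
        final int cyclomaticComplexity;
        final int afferentCouplings;
        final double testCoverage; // % LOC reached by tests, 0.0 to 1.0

        MethodMetrics(String methodName, int cyclomaticComplexity,
                      int afferentCouplings, double testCoverage) {
            this.methodName = methodName;
            this.cyclomaticComplexity = cyclomaticComplexity;
            this.afferentCouplings = afferentCouplings;
            this.testCoverage = testCoverage;
        }

        // Made-up risk score: complex, heavily depended-upon, poorly-tested
        // methods float to the top of the report
        double risk() {
            return cyclomaticComplexity * afferentCouplings * (1.0 - testCoverage);
        }
    }

    class DependableDependenciesReport {
        // The methods most likely to need better testing and simplifying
        static List<MethodMetrics> riskiest(List<MethodMetrics> all, int howMany) {
            return all.stream()
                      .sorted(Comparator.comparingDouble(MethodMetrics::risk).reversed())
                      .limit(howMany)
                      .collect(Collectors.toList());
        }
    }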

What's more important is to demonstrate the process: going from identifying a non-functional requirement (e.g., "As the Architect, I want early warning about code that presents a higher risk of being unreliable, so that I can work with the developers to get better assurance for it"), to implementing an executable quality gate using available tools in a test-driven manner (everybody forgets to agree tests for their metrics!), to managing the development process when the gate is in place. All the stuff that constitutes effective Continuous Inspection.

At time of writing, tool choice is split between a commercial code analysis tool called JArchitect, and SonarQube. It's a doddle to rig up in JArchitect, but the tool costs £££. It's harder to rig up in SonarQube, but the tools are available for free. (Except, of course, nothing's ever really free. Extra time taken to get what you want out of a tool also adds up to £££.) We'll see how it goes.

Finally, after a fairly frustrating remote pairing session on Wednesday, where we were ultimately defeated by a combination of Wi-Fi, Skype, TeamViewer and generally bad mojo, it's occurred to me that we really should be looking into remote collaboration more seriously. If you know of more reliable tools for collaboration, please tweet me at @jasongorman.





February 9, 2015

Mock Abuse: How Powerful Mocking Tools Can Make Code Even Harder To Change

Conversation turned today to that perennial question of mock abuse: namely, that there are some things mocking frameworks enable us to do that we probably shouldn't ought to.

In particular, as frameworks have become more powerful, they've made it possible for us to substitute the un-substitutable in our tests.

Check out this example:
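Something like this, sketched in Java for illustration (the names are invented - it's the shape that matters):

    import java.util.List;

    class Order {
        private final double value;
        Order(double value) { this.value = value; }
        double getValue() { return value; }
    }

    class CustomerData {
        // Static method that goes straight at the database - imagine
        // the JDBC plumbing here
        static List<Order> getAllOrders() {
            throw new UnsupportedOperationException("needs a live database");
        }
    }

    class Orders {
        // The static call is hard-wired into the method; there's no seam
        // through which to inject a substitute
        double totalValue() {
            return CustomerData.getAllOrders().stream()
                               .mapToDouble(Order::getValue)
                               .sum();
        }
    }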



Because Orders invokes the static database access method getAllOrders(), it's not possible for us to use dependency injection to make it so we can unit test Orders without hitting the database. Boo! Hiss!

Along comes our mocking knight in shining armour, enabling me to stub out that static method to give a test-specific response:
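With a tool like PowerMock (to pick a Java example), the stubbing might look something like this - a sketch from memory, so treat it as illustrative rather than gospel:

    import static java.util.Arrays.asList;
    import static org.junit.Assert.assertEquals;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.powermock.api.mockito.PowerMockito;
    import org.powermock.core.classloader.annotations.PrepareForTest;
    import org.powermock.modules.junit4.PowerMockRunner;

    @RunWith(PowerMockRunner.class)
    @PrepareForTest(CustomerData.class)
    public class OrdersTest {

        @Test
        public void totalsTheValueOfAllOrders() {
            // Stub out the static method to give a test-specific response
            PowerMockito.mockStatic(CustomerData.class);
            PowerMockito.when(CustomerData.getAllOrders())
                        .thenReturn(asList(new Order(100.0), new Order(200.0)));

            assertEquals(300.0, new Orders().totalValue(), 0.001);
        }
    }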



Problem solved. Right?

Well, for now, maybe yes. But the mocking tool hasn't solved the problem that I still couldn't substitute CustomerData.getAllOrders() in the actual design if I wanted to (say, to use a different kind of back-end data store, or a web service). So it's solved the "how do I unit test this?" problem, but not in a way that buys me any flexibility or solves the underlying design problem.

If anything, it's made things a bit worse. Now, if I want to refactor Orders to make the database back end swappable, I've got a bunch of test code that also depends on that static method (and arguably in a bigger way - now even more code depends on that internal dependency, if you catch my drift).

I warn very strongly against using tools and techniques like these to get around inherent internal dependency problems, because - when it comes to refactoring (and what's the point in having fast-running unit tests if we can't refactor?) - all that extra test code can actually bake in the design problems.

Multiply this one toy example by 1,000 to get a sense of the scale at which I sometimes see this in real code bases. This approach can make rigid and brittle designs even more rigid and more brittle. In the long term, it's better to make the code unit-testable by fixing the dependency problem, even if this means living with slow-running (or even - gasp! - manual) tests for a while.
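For contrast, here's a sketch of the kind of refactoring I mean - nothing clever, just a boring old seam (OrderSource is my invented name):

    import java.util.List;

    // Extract an interface so the back end becomes swappable
    interface OrderSource {
        List<Order> getAllOrders();
    }

    // The database plumbing moves behind the interface
    class DatabaseOrderSource implements OrderSource {
        public List<Order> getAllOrders() {
            return CustomerData.getAllOrders();
        }
    }

    class Orders {
        private final OrderSource orderSource;

        // The collaborator is injected from outside, so a unit test can pass
        // in a stub, and a web service back end is just another implementation
        Orders(OrderSource orderSource) {
            this.orderSource = orderSource;
        }

        double totalValue() {
            return orderSource.getAllOrders().stream()
                              .mapToDouble(Order::getValue)
                              .sum();
        }
    }

Now a hand-rolled stub or a plain old mock will do for the unit tests, and no test code gets welded to the database.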


February 4, 2015

Why Distribution & Concurrency Can Be A Lethal Cocktail For The Unwitting Dev Team

Picture the scene: it's Dec 31st 1990, in a small town in upstate New York. I'm at a New Year's Eve party, young, stupid and eager to impress. The host mixes me my first ever Long Island Iced Tea. It tastes nice. I drink three large ones, sitting at their kitchen table, waxing eloquent about life, the universe and everything in my adorable English accent, and feeling absolutely fine. Better than fine.

And then, after about an hour, I get up to go to the bathroom. I'm not fine. Not fine at all. I appear to have lost the use of my legs, and developed an inner-ear problem that's affecting my normally balletically graceful poise and balance.

I proceed to be not-fine-at-all into the bathroom sink and several other receptacles, arguably none of which were designed for the purpose I'm now putting them to.

Long Island Iced Tea is a pretty lethal cocktail. Mixed properly, it tastes like a mildly alcoholic punch with a taste not dissimilar to real iced tea (hence the name), but one look at the ingredients puts paid to that misunderstanding: rum, gin, vodka, tequila, triple sec - ingredients that have no business being in the same glass together. It is a very alcoholic drink. Variants on the name, like "Three Mile Island" and "Adios Motherf***er", provide further clues that this is not something you serve at a child's birthday party.

I end the evening comatose on a water bed in a very hot room. This completes the effect, and Jan 1st 1991 is a day I have no memory of.

Having vowed never again to be suckered into a false sense of security by something that tastes nice and makes me feel better-than-fine for a short while, I should have known better than to be drawn like a lamb to the slaughter into the distributed components craze that swept software development in the late 1990's.

It went something like this:

Back in the late 1990's, aside from the let's-make-everything-a-web-site gold rush that was reaching a peak, there was also the let's-carve-up-applications-that-we-can't-even-get-working-properly-when-everything's-in-one-memory-address-space-and-there's-no-concurrency-and-distribute-the-bits-willy-nilly-adding-network-deficiencies-distributed-transactions-and-message-queues fad.

This was enabled by friendly technology that allowed us to componentise our software without the need to understand how all the underlying plumbing worked. Nice in theory. You carve it up, apply the right interfaces, deploy to your application server, and everything's taken care of.

Except that it wasn't. It's very easy to get up and running with these technologies, but we found ourselves continually having to dig down into the underlying detail to figure out why stuff wasn't working the way it was supposed to. "It just works" was a myth easily dispelled by looking at how many books on how this invisible glue worked were lying open on people's desks.

To me, with the benefit of hindsight, object request brokers, remote procedure calls, message queues, application servers, distributed transactions, web services... these are the hard liquor of software development. The exponential increase in complexity - the software equivalent of alcohol units - can easily put unwitting development teams under the table.

I've watched so many teams merrily downing pints of lethal-but-nice-tasting cocktails of distribution and concurrency, feeling absolutely fine - better than fine - and then when it's time for the software to get up and walk any kind of distance... oh dear.

It turns out, this stuff is hard to get right, and the tools don't help much in that respect. They make it easy to mix these cocktails and easy to drink as much as you think you want, but they don't hold your hand when you need to go to the bathroom.

These tools are not your friends. They are the host mixing super-strength Long Island Iced Teas and ruining your New Year with a hangover that will never go away.

Know what's in your drink, and drink in moderation.





February 2, 2015

Stepping Back to See The Bigger Picture

Something we tend to be bad at in the old Agile Software Development lark is the Big Picture problem.

I see, time and again, teams up to their necks in small details - individual user stories, specific test cases, individual components and classes - rarely taking a step back to look at the problem as a whole.

The end result can often be a piecemeal solution that grows one detail at a time to reveal a patchwork quilt of a design, in the worst way.

While each individual piece may be perfectly rational in its design, when we pull back to get the bird's eye view, we realise that - as a whole - what we've created doesn't make much sense.

In other creative disciplines, a more mature approach is taken. Painters, for example, will often sketch out the basic composition before getting out their brushes. Film producers will go through various stages of "pre-visualisation" before any cameras (real or virtual) start rolling. Composers and music producers will rough out the structure of a song before worrying about how the kick drum should be EQ'd.

In all of these disciplines, the devil is just as much in the detail as in software development (one vocal slightly off-pitch can ruin a whole arrangement, one special effect that doesn't quite come off can ruin a movie, one eye slightly larger than the other can destroy the effect of a portrait in oil - well, unless you're Picasso, of course; and there's another way in which we can be like painters, claiming that "it's meant to be like that" when users report a bug).

Perhaps in an overreaction to the Big Design Up-Front excesses of the 1990's, these days teams seem to skip the part where we establish an overall vision. Like a painter's initial rough pencil sketch, or a composer's basic song idea mapped out on an acoustic guitar, we really do need some rough idea of what the thing we're creating as a whole might look like. And, just as with pencil sketches and song ideas recorded on Dictaphones, we can allow it to change based on the feedback we get as we flesh it out, but still maintain an overall high-level vision. They don't take long to establish, and I would argue that if we can't sketch out our vision, then maybe - just maybe - it's because we don't have one yet.

Another thing they do that I find we usually don't (anywhere near enough) is continually step back and look at the thing as a whole while we're working on the details.

Watch a painter work, and you'll see they spend almost as much time standing back from the canvas just looking at it as they do up close making fine brush strokes. They may paint one brush stroke at a time, but the overall painting is what matters most. The vase of flowers may be rendered perfectly by itself, but if its perspective differs from its surroundings, it's going to look wrong all the same.

Movie directors, too, frequently step back and look at an edit-in-progress to check that the footage they've shot fits into a working story. They may film movies one shot at a time, but the overall movie is what matters most. An actor may perform a perfect take, but if the emotional pitch of their performance doesn't make sense at that exact point in the story, it's going to seem wrong.

When a composer writes a song, they will often play it all the way through up to the point they're working on, to see how the piece flows. A great melody or a killer riff might seem totally wrong if it's sandwiched in between two passages that it just doesn't fit in with.

This is why I recommend that developers, too, routinely take a step back and look at how the detail they're currently working on fits in with the whole. Run the application, walk through user journeys that bring you to that detail, and see how it feels in context. It may seem like a killer feature to you, but a killer feature in the wrong place is just a wrong feature.





What's The Problem With Static Methods?

Quick brain dump before bed.

Static methods.

STATIC. METHODS.

When I'm pairing with people, quite often they'll declare a method as static. And I'll ask "Why did we make that static?" And they'll say "Well, because it doesn't have any state" or "It'll be faster" or "So I can invoke it without having to write new WhateverClassName()".

And then, sensing my disapproval as I subtly bang my head repeatedly against the desk and scream "Why? Why? Why?!", they ask me "What's the problem with static methods?"

I've learned, over the years, not to answer this question using words (or flags, or interpretive dance). It's not that it's difficult to explain. I can do it in one made-up word: composability.

If a method's static, it's not possible to replace it with a different implementation without affecting the client invoking it. Because the client knows exactly which implementation it's invoking.

Instance methods open up the possibility of substituting a different implementation, and it turns out that flexibility is the whole key to writing software with great composability.

Composability is very important when we want our code to accommodate change more easily. It's the essence of Inversion of Control: the ability to dynamically rewire the collaborators in a software system so that overall behaviour can be composed from the outside, and even changed at runtime.

A classic example of this is the need to compose objects under test with fake collaborators (e.g., mock objects or stubs) so that we can test them in isolation.

You may well have come across code that uses static methods to access external data, or web session state. If we want to test the logic of our application without hitting the file system or a database or having to run that code inside a web server, then those static methods create a real headache for us. They'd be the first priority to refactor into instance methods on objects that can be dependency injected into the objects we want to unit test.
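A minimal sketch of what I mean, with all the names invented:

    import java.util.HashMap;
    import java.util.Map;

    // Instead of clients calling something like SessionState.get("userName")
    // statically, put an instance method behind an interface...
    interface SessionState {
        String get(String key);
    }

    // ...with the real implementation reading the actual web session...
    class WebSessionState implements SessionState {
        public String get(String key) {
            // imagine the real web session lookup here
            throw new UnsupportedOperationException("needs a web server");
        }
    }

    // ...and a trivial fake for unit tests - no web server required
    class FakeSessionState implements SessionState {
        private final Map<String, String> values = new HashMap<>();
        public String get(String key) { return values.get(key); }
        void put(String key, String value) { values.put(key, value); }
    }

    class Greeter {
        private final SessionState session; // injected collaborator

        Greeter(SessionState session) { this.session = session; }

        String greeting() {
            return "Hello, " + session.get("userName");
        }
    }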

The alternatives are high maintenance solutions: fake in-memory file systems, in-memory databases, fake web application servers, and so on. The dependency injection solution is much simpler and much cleaner, and in the long run, much, much cheaper.

But unit testability is just the tip of the iceberg. The "flex points" - to use a term coined in Steve Freeman and Nat Pryce's excellent book Growing Object-Oriented Software, Guided By Tests - that we introduce to make our objects more testable also tend to make our software more composable in other ways, and therefore more open to accommodating change.

And if the static method doesn't have external dependencies, the unit testing problem is a red herring. It's the impact on composability that really hurts us in the long run.

And so, my policy is that methods should be instance methods by default, and that I would need a very good reason to sacrifice that composability for a static method.

But when I tell this to people whose policy is to make methods static whenever they don't need to access instance features, they usually don't believe me.

What I do instead is pair with them on some existing code that uses static methods. The way static methods can impede change and complicate automated testing usually reveals itself quite quickly.

So, the better answer to "What's the problem with static methods?" is "Write some unit tests for this legacy code and then ask me again."


* And, yes, I know that in many languages these days, pointers to static functions can be dependency-injected into methods just like objects, but that's not what I'm talking about here.




January 30, 2015

It's True: TDD Isn't The Only Game In Town. So What *Are* You Doing Instead?

The artificially induced clickbait debate "Is TDD dead?" continues at developer events and in podcasts and blog posts and commemorative murals across the nations, and the same perfectly valid point gets raised every time: TDD isn't the only game in town.

They're absolutely right. Before the late 1990's, when the discipline now called "Test-driven Development" was beginning to gain traction at conferences and on Teh Internets, some teams were somehow managing to create reliable, maintainable software, and doing it economically.

If they weren't doing TDD, then what were they doing?

The simplest alternative to TDD would be to write the tests after we've written the implementation. But hey, it's pretty much the same volume of tests we're writing. And, for sure, many TDD practitioners go on to write more tests after they've TDD'd a design, to get better assurance.

And when we watch teams who write the tests afterwards, we tend to find that the smart ones don't write them all at once. They iteratively flesh out the implementation, and write the tests for it, one or two scenarios (test cases) at a time. Does that sound at all familiar?

Some of us were using what they call "Formal Methods" (often confused with heavyweight methods like SSADM and the Unified Process, which aren't really the same thing.)

Formal Methods is the application of rigorous mathematical techniques to the design, development and testing of our software. The most common approach was formal specification, where teams would write a precise, mathematical and testable specification for their code, write code specifically to satisfy that specification, and then follow up with tests created from the specification to check that the code actually worked as required.

We had a range of formal specification languages, with exotic names like Z (and Object Z), VDM, OCL, CSP, RSVP, MMRPG and very probably NASA or some such.

Some of them looked and worked like maths. Z, for example, was founded on formal logic and set theory, and used many of the same symbols (since all programming is set theoretic.)

Programmers without maths or computer science backgrounds found mathematical notations a bit tricky, so people invented formal specification languages that looked and worked much more like the programming languages we were familiar with (e.g., the Object Constraint Language, which lets us write precise rules that apply to UML models.)
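For example, a post-condition for crediting an account might be written in OCL something like this (syntax from memory, so forgive any rust):

    context Account::credit(amount : Real)
    post: balance = balance@pre + amount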

Contrary to what you may have heard, the (few) teams using formal specification back in the 1990's were not necessarily doing Big Design Up-Front, and were not necessarily using specialist tools either.

Much of the formal specification that happened was scribbled on whiteboards to adorn simple design models and make key behaviours unambiguous. From that, teams might have written unit tests (that's how I learned to do it) for a particular feature, and those tests pretty much became the living specification. Labyrinthine Z or OCL specifications were not necessarily being kept and maintained.

It wasn't, therefore, such a giant leap for teams like the ones I worked on to say "Hey, let's just write the tests and get them passing", and from there to "Hey, let's just write a test, and get that passing".

But it's absolutely true that formal specification is still a thing some teams do - you'll find most of them these days alive and well in the Model-driven Development community (and they do create complete specifications, and all the code is generated from those, so the specification is the code - which means, yes, they are programmers, just in new languages).

Watch Model-driven Developers work, and you'll see teams - well, the smarter ones - gradually fleshing out executable models one scenario at a time. Sound familiar?

So there's a bunch of folk out there who don't do TDD, but - by jingo! - it sure does look a lot like TDD!

Other developers used to embed their specifications inside the code, in the form of assertions, and then write tests suites (or drive tests in some other way) that would execute the code to see if any of the assertions failed.

So their tests had no assertions. It was sort of like unit testing, but turned inside out. Imagine doing TDD, and then refactoring a group of similar tests into a single test with a general assertion (e.g., instead of assert(balance == 100) and assert(balance == 200), it might be assert(balance == oldBalance + creditAmount)).

Now go one step further, and move that assertion out of the test and into the code being tested (at the end of that code, because it's a post-condition). So you're left with the original test cases to drive the code, but all the questions are being asked in the code itself.

Most programming languages these days include a built-in assertion mechanism that allows us to do this. Many have build flags that allow us to turn assertion checking on or off (on if testing, off if deploying to live).
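In Java, for example, it might look something like the sketch below (a made-up account example; the standard -ea flag switches assertion checking on for test runs, and it's left off in production):

    class Account {
        private double balance;

        void credit(double amount) {
            double oldBalance = balance;
            balance += amount;
            // The specification lives in the code, as a post-condition
            assert balance == oldBalance + amount
                : "credit must increase balance by the credited amount";
        }
    }

    class AccountTestDriver {
        // The "test" just exercises the code; all the questions are asked
        // by the assertions inside it. Run with: java -ea AccountTestDriver
        public static void main(String[] args) {
            Account account = new Account();
            account.credit(100);
            account.credit(200);
        }
    }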

When you watch teams working this way, they don't write all the assertions (and all the test automation code) at once. They tend to write just enough to implement a feature, or a single use case scenario, and flesh out the code (and the assertions in it) scenario by scenario. Sound familiar?

Of course, some teams don't use test automation at all. Some teams rely on inspections, for example. And inspections are a very powerful way to debug our code - more effective than any other technique we know of today.

But they hit a bit of a snag as development progresses, namely that inspecting all of the code that could be broken after a change, over and over again for every single change, is enormously time-consuming. And so, while it's great for discovering the test cases we missed, as a regression testing approach, it sucks ass Gangnam Style.

But, and let's be clear about this, these are the techniques that are - strictly speaking - not TDD, and that can (even if only initially, as in the case of inspections) produce reliable, maintainable software. If you're not doing any of these, then the chances are you're doing something very like them.

Unless, of course... you're not producing reliable, maintainable code. Or the code you are producing is so very, very simple that these techniques just aren't necessary. Or the code you're creating simply doesn't matter and is going to be thrown away.

I've been a software developer for approximately 700 million years (give or take), so I know from my wide and varied experience that code that doesn't matter, code that's only for the very short-term, and code that isn't complicated, are very much the exceptions.

Code that gets used tends to stick around far longer than we planned. Even simple code usually turns out to be complicated enough to be broken. And if it doesn't matter, then why in hell are we doing it? Writing software is very, very expensive. If it's not worth doing well, then it's very probably - almost certainly - not worth doing.

So what is the choice that teams are alluding to when they say "TDD isn't the only game in town"? Do they mean they're using Formal Methods? Or perhaps using assertions in their code? Or do they rely on rigorous inspections to make sure they get it right, at least the first time around?

Or are they, perhaps, doing none of these things, and the choice they're alluding to is the choice to create software that's not good enough and won't last?

I suspect I know the answer. But feel free to disagree.


UPDATE:

My apprentice, Will Price, has emailed me with a very good question:

"Why don't we embed our assertions in our code and just have the tests exercise the code?

What benefit did we gain from moving them out into tests?

It seems to me that having assertions inside the code is much nicer than having them in tests because it acts as an additional form of documentation, right there when you're reading the code so you can understand what preconditions and postconditions a particular method has, I should imagine you could even automate collection of these to put them into documentation etc so developers have a good idea of whether they can reuse a method (i.e. have they satisfied the preconditions, are the postconditions sufficient etc)"


My reply to him (copied and pasted):

That's a good question. I think the answer may lie in tradition. Generally, folks writing unit tests weren't using assertions in code, and folks using assertions in code weren't writing automated tests. (For example, many relied on manual testing, or random testing, or other ways of driving the code in test runs).

So, people coming from the unit testing tradition have ended up with assertions in their tests, and folk from the assertions tradition (e.g., Design By Contract) have ended up with no test suites to speak of. Model checking is an example of this: folks using model checkers would often embed assertions in code (or in comments in the code), and the tool would exercise the code using white-box test case generation.

This mismatch is arguably the biggest barrier at the moment to merging these approaches because the tools don't quite match up. I'm hoping this can be fixed.


Thinking a bit more about it, I also believe that asserting correct behaviour inside the code helps with inspections.

January 26, 2015

Intensive Test-driven Development, London, Saturday March 14th - Insanely Good Value!

Just a quick note to mention that the Codemanship 1-day Intensive TDD training workshop is back, and at the new and insanely low price of just £20.

You read that right: £20 for a 1-day TDD training course (a proper one!)

Places are going fast. Visit our EventBrite page to book your place now.