October 14, 2006


Exploratory Testing & Reverse Engineering

Exploratory testing is a discipline that's gaining popularity, particularly in the Agile community, where much of the testing is no longer done by dedicated testers - which, I suspect, is driving the need to find something else for testers to do to earn a crust.

It's actually really quite simple: let's say I agree an acceptance test with the customer that goes a bit like this:

1. Click on button X
2. Select menu item Y
3. Double-click on list item Z
4. Click the "submit" button
5. The outcome should be A, B and C

And, like a good Agile developer, I duly go away and write the code such that when the user clicks on button X, selects menu item Y, double-clicks on list item Z and then clicks the "submit" button, the outcome is indeed A, B and C.
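To make that concrete, here's a minimal sketch of how that agreed scenario might be pinned down as an automated acceptance test. The driver methods (click_button, select_menu_item and so on) are hypothetical placeholders for whatever GUI automation tool you happen to be using:

```python
# A sketch of the agreed acceptance test, using a hypothetical GUI driver.
# The driver methods (click_button, select_menu_item, etc.) are placeholders
# for whatever GUI automation tool is actually in use.

def test_agreed_happy_path(driver):
    driver.click_button("X")
    driver.select_menu_item("Y")
    driver.double_click_list_item("Z")
    driver.click_button("submit")

    assert driver.outcome_includes("A")
    assert driver.outcome_includes("B")
    assert driver.outcome_includes("C")
```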

A pat on the back, a gold star and a donut for me.

But then along comes an exploratory tester, and he immediately asks:

"Hmmm. I wonder what would happen if I select menu item Y first, and then click on button X and then double-click on list item Z twice and then hit the "submit" button..." And so he tries it. And the software goes bye bye and he loses his lovely data. So he reports it as a bug. And I say "but hey, we never agreed what would happen if you did that, so the spec is undefined". And the tester says; "well, why don't you go ask the customer what they think it should do, then?" And I do. And the customer says "Oh, I dunno. But the software certainly shouldn't go bye bye. Maybe it should display a message saying 'please don't do that again'". So we have now agreed a brand new acceptance test scenario. And this will need to be scheduled in my planning process like any other test scenario.

And that, in a nutshell, is exploratory testing. It's the process of figuring out what else the software does apart from what you agreed it should do in your spec or in your acceptance/system tests (which are also a kind of specification, in practice). It's almost reverse engineering. I think Agitar have a tool called Agitator that does something a bit like this, only with unit tests. It reads the code, dreams up a bunch of unit test scenarios, runs them, records the results and asks you "is it supposed to do that?"
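Purely as an illustration of that idea (and not of how Agitator itself actually works), here's a little Python sketch that throws a set of inputs at a function and records what came back, so a human can look down the list and answer "is it supposed to do that?":

```python
import random

def explore(fn, sample_inputs, trials=20):
    """Run fn against randomly chosen inputs and record what happens,
    so a human can review the results and say 'is it supposed to do that?'"""
    observations = []
    for _ in range(trials):
        args = random.choice(sample_inputs)
        try:
            result = fn(*args)
            observations.append((args, "returned", result))
        except Exception as e:
            observations.append((args, "raised", type(e).__name__))
    return observations

# Example: exploring a simple division function with some edge-case inputs.
def divide(a, b):
    return a / b

for args, kind, outcome in explore(divide, [(10, 2), (1, 0), (-5, 3), (0, 7)]):
    print(f"divide{args} {kind} {outcome}  -- is it supposed to do that?")
```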

Okay. There is a little more to it. But not much. Just as it helps to leave breadcrumbs when you explore a maze so you can find your way back, it also helps to record your tracks when you explore software so you can accurately retrace your steps when something interesting happens. (It also helps to prevent other testers going over scenarios that have already been covered.) There are relatively inexpensive tools on the market for recording GUI scripts. And, since even some of the simplest software can allow for an effectively infinite number of usage scenarios, it also makes sense to target areas where exploratory testing will bring most value. You might want to focus on particularly complex functionality because it's more likely to contain bugs, for example, or on features that you think will be used the most (as Microsoft tend to do).
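If you're driving the exploration through code rather than a commercial recording tool, one lightweight way of leaving those breadcrumbs is simply to write down every action as you take it. Another sketch, wrapping the same hypothetical driver as before:

```python
class RecordingDriver:
    """Wraps a GUI driver and writes down every action taken, so an
    interesting failure can be retraced step by step later."""
    def __init__(self, driver, log):
        self._driver = driver
        self._log = log

    def __getattr__(self, name):
        action = getattr(self._driver, name)
        def recorded(*args, **kwargs):
            self._log.append((name, args, kwargs))
            return action(*args, **kwargs)
        return recorded

# Usage: wrap the real driver, explore freely, and keep the log as your trail.
# steps = []
# driver = RecordingDriver(real_driver, steps)
# driver.select_menu_item("Y"); driver.click_button("X"); ...
# print(steps)  # the exact sequence that provoked the interesting behaviour
```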

And it helps to be a little methodical about it. Test analysis tools and techniques can help to throw up paths that aren't immediately obvious. Many of these centre around logical combinations and boundary analysis, and you don't need much more than high school maths to figure that sort of stuff out.
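For example, an input that's supposed to accept values from 1 to 100 gives you boundary values of 0, 1, 100 and 101, and combining the boundary values of a handful of such inputs enumerates a useful set of candidate scenarios - nothing more exotic than that. A quick sketch:

```python
from itertools import product

def boundary_values(lower, upper):
    """Classic boundary analysis: just inside and just outside each edge."""
    return [lower - 1, lower, upper, upper + 1]

# Two inputs, each with an agreed valid range: combine their boundary values.
quantity = boundary_values(1, 100)      # e.g. an order quantity of 1..100
discount = boundary_values(0, 50)       # e.g. a discount of 0..50 percent

for q, d in product(quantity, discount):
    print(f"try quantity={q}, discount={d}")   # candidate test cases to explore
```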

Of course, exploratory testing is labour-intensive. So you'll be needing dedicated exploratory testers. And I won't argue with that. But it's not a discipline that takes years to master. Anyone who can write executable tests can do exploratory testing.
Posted on October 14, 2006