August 12, 2013

Learn TDD with Codemanship

Usefulness Testing!

Over the millennia that my software career has spanned, I've attempted to promote the idea that we should make serious efforts to try out the programs we create in the context in which they're intended to be used (or as accurate a simulation of such contexts as we can manage.)

I've mentioned before the Model Office, which is a simulated work environment into which we deploy software under development to see what happens when we try to use it in realistically recreated business scenarios.

Another example is testing in the field. If, say, your mobile app is designed to make it easy to find a hotel room near the location where you are, then send folk out with a test version into a variety of locations where you think they might need to use your app, and see how they get on.

It hasn't caught on.

Which is a shame, because, on those occasions when my teams have done it, testing software in a realistic context has proved to be very powerful. Typically, we learned more in an hour or two of such testing than we did in weeks or months of "requirements analysis" or sterile "usability testing".

I still cling to the hope that testing in context might become a thing - y'know, like BDD or Continuous Delivery.

Maybe it's a question of how it's marketed. And the first step in the promotion of a meme is to give it a snappy, easy-to-remember name. And it just occurred to me that we never named it, even though enlightened software teams have been doing it for decades.

It's not functional testing (although, arguably, it's what "functional testing" should really mean - but, heck, that name's already taken). It's not usability testing, either: how easily end users can get to grips with our features doesn't answer the more important question of why they would want to use them in the first place.

The natural response of our industry is to try to tie software features to business goals; typically in a bureaucratic kind of a way. So we sit in meetings talking about goals, key performance indicators, Balanced Scorecards and all that malarkey, and try to square the circle through paperwork and pie charts and those little dotted lines with arrows on the end that Enterprise Architects are so fond of.

But real life is complicated. How many times have we seen a product work on paper, only to fall flat on its arse when it was deployed into the complex, multidimensional mess that is the Real World™?

Better, I've found, to start in the Real World and stick as close to it as we can throughout development.

So, I'm going to name this practice - which, admittedly, is usually the point where everyone immediately latches on to a misunderstanding of what they think the words mean and start erecting statues to it and sacrificing their projects on the altar of it; but I'm willing to risk it all the same.

In fact, if you Google it, it already has a name: Usefulness Testing.

Because that is what it is. (Although, in fairness, most people have been using it to describe the doing-it-on-paper-in-meeting-rooms approach up to now, so I'm co-opting it to describe something far more, er, useful.)

So now, when your boss harps on about the different kinds of fancy testing you're going to do - with hifalutin names like "unit testing", "integration testing", "functional testing", "usability testing", "performance testing", "giant underwater Japanese radiation monster testing" and so on - you can smugly retort "Yeah, but we're going to do some Usefulness Testing first, right?", secure in the knowledge that it trumps all other kinds of software testing.

Because all the rest is a moot point if it fails that.


Yes, it has occurred to me that what many companies call "Beta testing" is actually, at least in part, Usefulness Testing. For sure, many teams learn the most important lessons from feedback from real users trying to use the software for real. For sure, many of the "bug reports" we get during Beta testing are along the lines of "you built the wrong software".

But this happens far too late in the day, and in a very unstructured way. Surely, in our modern Agile world (because I know that's what you kids like these days), we can begin this process much earlier and seek such feedback right from the start and throughout development.

I advocate that it should be the Alpha and the Omega of software development. Start with it, end with it, and keep doing it everywhere in between.

How can we test "usefulness" if we haven't written any software yet? One way would be to recreate how it is now, without your software, and iteratively introduce the software and test that - at the very least - it doesn't break your business. (Would that more teams at least did that!)

Other kinds of Usefulness Testing that a few teams are doing now utilise low-fi prototypes of our software (e.g. wireframes, mock-ups) to walk through how the software will work in a given real-life situation, enabling us to validate our user experience designs in a much more meaningful context. (Though this is no substitute for testing the software in the actual real world, with all its inherent messiness.)

Going back to the hotel-finder app, you might start in the field with the Yellow Pages and a cell phone and build your understanding of the problems concerned with finding a room from there, then seek to make it genuinely, observably, actually easier using software.
