April 23, 2016
Does Your Tech Idea Pass The Future Dystopia Test?

One thing that at times fascinates and at times appals me is the social effect that web applications can have on us.
Human beings learn fast, but evolve slowly. Hence we can learn to program a video recorder, but living a life that revolves around video recorders can be toxic to us. For all our high-tech savvy, we are still basically hominids, adapted to run from predators and pick fleas off of each other, but not adapted for Facebook or Instagram or Soundcloud.
But the effects of online socialisation are now felt in the Real World - you know, the one we used to live in? People who, just 3-4 years ago, were confined to expressing their opinions on YouTube are now expressing them on my television and making pots of real money.
Tweets are building (and ending) careers. Soundcloud tracks are selling out tours. Facebook viral posts are winning elections. MySpace users are... well, okay, maybe not MySpace users.
For decades, architects and planners have obsessed over the design of the physical spaces we live and work in. The design of a school building, they theorise, can make a difference to the life chances of the students who learn in it. The design of a public park can increase or decrease the chances of being attacked in it. Pedestrianisation of a high street can breathe new life into local shops, and an out-of-town shopping mall can suck the life out of a town centre.
Architects must actively consider the impact of buildings on residents, on surrounding communities, on businesses, on the environment, when they create and test their designs. Be it for a 1-bed starter home, or for a giant office complex, they have to think about these things. It's the law.
What thought, then, do software developers give to the social, economic and environmental impact of their application designs?
Having worked on "Web 2.0" sites of all shapes and sizes, I have yet to see teams and management go out of their way to consider such things. Indeed, I've seen many occasions when management have proposed features of such breath-taking insensitivity to wider issues that it's easy to believe we don't really think much about it at all. That is, until it all goes wrong, and the media are baying for our blood, and we're forced to change to keep our share price from crashing.
This is about more than reliability (though reliability would be a start).
Half-jokingly, I've suggested that teams put feature requests through a Future Dystopia Test; can we imagine a dark, dystopian, Philip K Dick-style future in which our feature has caused immense harm to society? Indeed, whole start-up premises fail this test sometimes. Just hearing some elevator pitches conjures up Blade Runner-esque and Logan's Run-ish images.
I do think, though, that we might all benefit from devoting a little time to considering the potential negative effects of what we're creating before we create it, as well as closely monitoring those effects once it's out there. Don't wait for that hysterical headline "AcmeChat Ate My Hamster" to appear before asking yourself if the fun hamster-swallowing feature the product owner suggested might not be such a good thing after all.
This blog post is gluten free and was not tested on animals
February 19, 2016
Performance-Optimise Your Code From The Outside-In

Something that comes up time and again on Codemanship training workshops is the tendency of many developers to prematurely optimise their code for performance, often at the expense of readability.
The classic example is Java programmers concatenating strings using the StringBuilder class. I'll ask them "Why are you using StringBuilder?", and they'll reply "For performance". And then I'll ask them "What test will fail if you don't use StringBuilder?", implying "What are the performance requirements, exactly?"
Two points about this: firstly, the Java compiler optimises string concatenation under the hood, using - yep, you guessed it - StringBuilder. So, in terms of machine-executable code, there's no performance difference between
"A" + "B"
StringBuilder builder = new StringBuilder();
But secondly, even if there were a performance gain, we're trading it off against readability for no apparent reason.
In the hierarchy of software design needs, after "does it work?" typically comes "and can other programmers understand it?" So important is readability that, by default, I will happily trade other design concerns for more readable code. And that includes performance.
The only exception to that is when performance (or security, or scalability etc) impacts the answer to the question "does it work?" The code for processing my credit payment may be logically correct and readable, but if the system can't find the time to execute it, then it doesn't work all the same.
Performance optimisation starts on the outside of our software, with the customer and his or her needs. It's not simply a question of looking at the code and thinking "could I make this faster?" The performance of, say, a search algorithm should be driven by an understanding of how soon the user needs search results, and/or how many users need to do searches in a given timeframe.
For this reason, performance requirements are best explored with the customer when we're working to understand their requirements. So, on top of all the questions about the logic of our applications - like "What should happen if they try to sign up but their email is left blank?" - we should also seek to understand constraints on performance, like "How long, maximum, should processing their sign-up take?" and "How many sign-ups might we need to handle every hour?" and so on.
Then we can agree acceptance tests with them about performance that are driven by their needs, not by our desire to make all our code as efficient as possible just for the sake of it.
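By way of illustration, here's roughly what such a customer-driven performance acceptance test might look like in Java. The class name, the processSignUp stub and the 2-second budget are all invented for this example - the important thing is the shape: the budget comes from a conversation with the customer, and the test fails if we blow it.

```java
public class SignUpPerformanceTest {

    // Stand-in for the real sign-up processing; in a real test this
    // would invoke the actual service under realistic conditions.
    static void processSignUp(String email) {
        // ... real work would happen here ...
    }

    public static void main(String[] args) {
        // The limit agreed with the customer - not a number we made up
        // because "faster is better".
        final long budgetMillis = 2000;

        long start = System.nanoTime();
        processSignUp("jane@example.com");
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        if (elapsedMillis > budgetMillis) {
            throw new AssertionError("sign-up took " + elapsedMillis
                    + "ms; agreed budget is " + budgetMillis + "ms");
        }
        System.out.println("PASS: sign-up completed in " + elapsedMillis
                + "ms (budget " + budgetMillis + "ms)");
    }
}
```

Only when a test like this fails do we have a reason to reach for tricks like StringBuilder - and then we have a way of proving the optimisation actually helped.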
Always remember that everything we do, ultimately, is about user experience. We write code that works so the users can use it. We write code that's readable so that the user experience can evolve and improve with feedback. And we write code that's fast when the user needs things to happen quickly.
April 8, 2015
Reality-driven Development - Creating Software For Real Users That Solves Real Problems In the Real World

It's a known fact that software development practices cannot be adopted until they have a pithy name to identify the brand.
Hence it is that, even though people routinely acknowledge that it would be a good idea for development projects to connect with reality, very few actually do because there's no brand name for connecting your development efforts with reality.
Reality-driven Development is a set of principles and practices aimed at connecting development teams to the underlying reality of their efforts so that they can create software that works in the real world.
RDD doesn't replace any of your existing practices. In some cases, it can short-circuit them, though.
Take requirements analysis, for example: the RDD approach compels us to immerse ourselves in the problem in a way traditional approaches just can't.
Instead of sitting in meeting rooms talking about the problem with customers, we go out to where the problem exists and see and experience it for ourselves. If we're tasked with creating a system for call centre operatives to use, we spend time in the call centre, we observe what call centre workers do - pertinent to the context of the system - and most importantly, we have a go at doing what the call centre workers do.
It never ceases to amaze me how profound an effect this can have on the collaboration between developers and their customers. Months of talking can be compressed into a day or two of real-world experience, with all that tacit knowledge communicated in the only way that tacit knowledge can be. Requirements discussions take on a whole different flavour when both parties have a practical, first-hand appreciation of what they're talking about.
Put the shoe on the other foot (and that's really what this practice is designed to do): imagine your customer is tasked with designing software development tools, based entirely on an understanding they've built about how we develop software purely based on our description of the problem. How confident are you that we'd communicate it effectively? How confident are you that their solutions would work on real software projects? You would expect someone designing dev tools to have been a developer at some point. Right? So what makes us think someone who's never worked in a call centre will be successful at writing call centre software? (And if you really want to see some pissed off end users, spend an hour in a call centre.)
So, that's the first practice in Reality-driven Development: Real-world Immersion.
We still do the other stuff - though we may do it faster and more effectively. We still gather user stories as placeholders for planning and executing our work. We still agree executable acceptance tests. We still present it to the customer when we want feedback. We still iterate our designs. But all of these activities are now underpinned with a much more solid and practical shared understanding of what it is we're actually talking about. If you knew just how much of a difference this can make, it would be the default practice everywhere.
Just exploring the problem space in a practical, first-hand way can bridge the communication gap in ways that none of our existing practices can. But problem spaces have to be bounded, because the real world is effectively infinite.
The second key practice in Reality-driven Development is to set ourselves meaningful Real-world Goals: that is, goals that are defined in and tested in the real world, outside of the software we build.
Observe a problem in the real world. For example, in our real-world call centre, we observe that operatives are effectively chained to their desks, struggling to take regular comfort breaks, and struggling to get away at the end of a shift. We set ourselves the goal of every call centre worker getting at least one 15-minute break every 2 hours, and working a maximum of 15 minutes' unplanned overtime at the end of a day. This goal has nothing to do with software. We may decide to build a feature in the software they use that manages breaks and working hours, and diverts calls that are coming in just before their break is due. It would be the software equivalent of when the cashier at the supermarket checkout puts up one of those little signs to dissuade shoppers from joining their queue when they're about to knock off.
Real-world Goals tend to have a different flavour to management-imposed goals. This is to be expected. If you watch any of those "Back to the floor" type TV shows, where bosses pose as front-line workers in their own businesses, it's very often the case that the boss doesn't know how things really work, and what the real operational problems are. This raises natural cultural barriers and issues of trust. Management must trust their staff to drive development and determine how much of the IT budget gets spent. This is probably why almost no organisation does it this way. But the fact remains that, if you want to address real-world problems, you have to take your cues from reality.
Important, too, is the need to strike a balance in your Real-world Goals. While we've long had practices for discovering and defining business goals for our software, they tend to suffer from a rather naïve 1-dimensional approach. Most analysts seek out financial goals for software and systems - to cut costs, or increase sales, and so on - without looking beyond that to the wider effect the software can have. A classic example is music streaming: while businesses like Spotify make a great value proposition for listeners, and for major labels and artists with big back catalogues, arguably they've completely overlooked 99.9% of small and up-and-coming artists, as well as writers, producers and other key stakeholders. A supermarket has to factor in the needs of suppliers, or their suppliers go out of business. Spotify has failed to consider the needs of the majority of musicians, choosing to focus on one part of the equation at the expense of the other. This is not a sustainable model. Like all complex systems, dynamic equilibrium is usually the only viable long-term solution. Fail to take into account key variables, and the system tips over. In the real world, few problems are so simple as to only require us to consider one set of stakeholders.
In our call centre example, we must ask ourselves about the effect of our "guaranteed break" feature on the business itself, on its end customers, and on anyone else who might be affected by it. Maybe workers get their breaks, but not without dropping calls, or not without a drop in sales. All of these perspectives need to be looked at and addressed, even if by addressing them we end up knowingly impacting people in a negative way. Perhaps we can find some other way to compensate them. But at least we're aware.
The third leg of the RDD table - the one that gives it the necessary balance - is Real-world Testing.
Software testing has traditionally been a standalone affair. It's vanishingly rare to see software tested in context. Typically, we test it to see if it conforms to the specification. We might deploy it into a dedicated testing environment, but that environment usually bears little resemblance to the real-world situations in which the software will be used. For that, we release the software into production and cross our fingers. This, as we all know, pisses users off no end, and rapidly eats away at the goodwill we rely on to work together.
Software development does have mechanisms that go back decades for testing in the real world. Alpha and Beta testing, for example, are pretty much exactly that. The problem with that kind of small, controlled release testing is that it usually doesn't have clear goals, and lacks focus as a result. All we're really doing is throwing the software out there to some early adopters and saying "here, waddaya think?" It's missing a key ingredient - real-world testing requires real-world tests.
Going back to our Real-world Goals, in a totally test-driven approach, where every requirement or goal is defined with concrete examples that can become executable tests, we're better off deploying new versions of the software into a real-world(-ish) testing environment that we can control completely, where we can simulate real-world test scenarios in a repeatable and risk-free fashion, as often as we like.
A call centre scenario like "Janet hasn't taken a break for 1 hour and 57 minutes, there are 3 customers waiting in the queue, they should all be diverted to other operators so Janet can take a 15-minute break. None of the calls should be dropped" can be simulated in what we call a Model Office - a recreation of all or part of the call centre, into which multiple systems under development may be deployed for testing and other purposes.
Our call centre model office simulates the real environment faithfully enough to get meaningful feedback from trying out software in it, and should allow us to trigger scenarios like this over and over again. In particular, model offices enable us to exercise the software in rare edge cases and under unusually high peak loads that Alpha and Beta testing are less likely to throw up. (e.g., what happens if everyone is due a break within the next 5 minutes?)
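To make the scenario concrete, here's a rough, self-contained Java sketch of the "guaranteed break" test. Everything here - the class names, the 2-hour break interval, the 5-minute divert window - is invented for illustration; in a real Model Office we'd be exercising the actual call-centre system against the actual telephony, not a toy simulation.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class GuaranteedBreakScenario {

    // Assumed policy: operators are due a break after 2 hours, and we
    // start diverting calls a few minutes before that so the break
    // begins on time.
    static final int BREAK_INTERVAL_MINUTES = 120;
    static final int DIVERT_WINDOW_MINUTES = 5;

    static boolean breakImminent(int minutesSinceBreak) {
        return minutesSinceBreak >= BREAK_INTERVAL_MINUTES - DIVERT_WINDOW_MINUTES;
    }

    public static void main(String[] args) {
        int janetMinutesSinceBreak = 117; // 1 hour, 57 minutes
        Queue<String> incoming = new ArrayDeque<>();
        incoming.add("call-1");
        incoming.add("call-2");
        incoming.add("call-3");

        int diverted = 0, handledByJanet = 0, dropped = 0;
        while (!incoming.isEmpty()) {
            incoming.poll();
            if (breakImminent(janetMinutesSinceBreak)) {
                diverted++;          // routed to another operator, not dropped
            } else {
                handledByJanet++;
            }
        }
        boolean janetOnBreak = breakImminent(janetMinutesSinceBreak);

        // The real-world assertions: Janet gets her 15-minute break,
        // and none of the waiting calls are dropped.
        if (!janetOnBreak) throw new AssertionError("Janet did not get her break");
        if (dropped != 0) throw new AssertionError("calls were dropped");
        System.out.println("PASS: Janet on break; " + diverted
                + " calls diverted; " + handledByJanet + " handled by Janet; "
                + dropped + " dropped");
    }
}
```

The point of a sketch like this isn't the simulation logic; it's that the assertions are expressed in terms of the real-world goal (breaks taken, calls not dropped), not in terms of the software's features.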
Unless you're working on flight systems for fighter aircraft or control systems for nuclear power stations, it doesn't cost much to set up a testing environment like this, and the feedback you can get is worth far more.
The final leg of the RDD table is Real-world Iterating.
So we immerse ourselves in the problem, find and agree real-world goals and test our solutions in a controlled simulation of the real world. None of this, even taken together with existing practices like ATDD and Real Options, guarantees that we'll solve the problem - certainly not first time.
Iterating is, in practice, the core requirements discipline of Agile Software Development. But too many Agile teams iterate blindly, making the mistake of believing that the requirements they've been given are the real goals of the software. If they weren't elucidated from a real understanding of the real problem in the real world, then they very probably aren't the real goals. More likely, what teams are iterating towards is a specification for a solution to a problem they don't understand.
The Agile Manifesto asks us to value working software over comprehensive documentation. Reality-driven Development widens the context of "working software" to mean "software that testably solves the user's problem", as observed in the real world. And we iterate towards that.
Hence, we ask not "does the guaranteed break feature work as agreed?", but "do operatives get their guaranteed breaks, without dropping sales calls?" We're not done until they do.
This is not to say that we don't agree executable feature acceptance tests. Whether or not the software behaves as we agreed is the quality gate we use to decide if it's worth deploying into the Model Office at all. The software must jump the "it passes all our functional tests" gate before we try it on the "but will it really work, in the real world?" gate. Model Office testing is more complex and more expensive, and ties up our customers. Don't do it until you're confident you've got something worth testing in it.
And finally, Real-world Testing wouldn't be complete unless we actually, really tested the software in the real real world. At the point of actual deployment into a production environment, we can have reasonably high confidence that what we're putting in front of end users is going to work. But that confidence must not spill over into arrogance. There may well be details we overlooked. There always are. So we must closely observe the real software in real use by real people in the real world, to see what lessons we can learn.
So there you have it: Reality-driven Development
1. Real-world Immersion
2. Real-world Goals
3. Real-world Testing
4. Real-world Iterating
...or "IGTI", for short.
March 19, 2015
Requirements 2.0 - Ban Feature Requests

This is the first post in a series that challenges the received wisdom about how we handle requirements in software development.
A lot of the problems in software development start with someone proposing a solution too early.
User stories in Agile Software Development are symptomatic of this: customers request features they want the software to have, qualifying them with a "so that..." clause that justifies the feature with a benefit.
Some pundits recommend turning the story format around, so the benefit comes first, a bit like writing tests by starting with the assertion and working backwards.
I'm going to suggest something more radical: I believe we should ban feature requests altogether.
My format for a user story would only have the "so that..." clause. Any mention of how that would be achieved in the design of the software would be excluded. The development team would figure out the best way to achieve that in the design, and working software would be iterated until the user's goal has been satisfied.
It's increasingly my belief that the whole requirements discipline needs to take a step back from describing solutions and their desired features or properties, to painting a vivid picture of what the user's world will look like with the software in it, with a blank space where the software actually goes.
Imagine trying to define a monster in a horror movie entirely through reaction shots. We see the fear, we hear the screams, but we never actually see the monster. That's what requirements specs, in whatever form they're captured, should be like. All reaction shots and no monster.
Why ban feature requests? Well, three reasons:
1. All too often, we find ourselves building a solution to a problem that's never been clearly articulated. Iterating designs only really works when we iterate towards clear goals. Taking away the ability to propose solutions (features) early forces customers (and developers) to explicitly start by thinking about the problem they're trying to solve. We need to turn our thinking around.
2. The moment someone says "I want a mobile app that..." or "When I click on the user's avatar..." or even, in some essential way, "When I submit the mortgage application..." they are constraining the solution space unnecessarily to specific technologies, workflows and interaction designs. Keeping the solution space as wide open as possible gives us more choices about how to solve the user's problem, and therefore a greater chance of solving it in the time we have available. On many occasions when my team's been up against it time-wise, banging our heads against a particular technical brick wall, we've taken a step back and asked "What are we actually trying to achieve here?", and the breakthrough came when we chose an easier route to giving the users what they really needed.
3. End users generally aren't software designers. For the exact same reason that it's not such a great idea to specify a custom car for me by asking "What features do you want?" or for my doctor to ask me "What drugs would you like?", it's probably best if we don't let users design the software. It's not their bag, really. They understand the problem. We do the design. We play to our strengths.
So there you have it. Ban feature requests.
February 2, 2015
Stepping Back to See The Bigger Picture

Something we tend to be bad at in the old Agile Software Development lark is the Big Picture problem.
I see time and again teams up to their necks in small details - individual user stories, specific test cases, individual components and classes - and rarely taking a step back to look at the problem as a whole.
The end result can often be a piecemeal solution that grows one detail at a time to reveal a patchwork quilt of a design, in the worst way.
While each individual piece may be perfectly rational in its design, when we pull back to get the bird's eye view, we realise that - as a whole - what we've created doesn't make much sense.
In other creative disciplines, a more mature approach is taken. Painters, for example, will often sketch out the basic composition before getting out their brushes. Film producers will go through various stages of "pre-visualisation" before any cameras (real or virtual) start rolling. Composers and music producers will rough out the structure of a song before worrying about how the kick drum should be EQ'd.
In all of these disciplines, the devil is just as much in the detail as in software development (one vocal slightly off-pitch can ruin a whole arrangement, one special effect that doesn't quite come off can ruin a movie, one eye slightly larger than the other can destroy the effect of a portrait in oil - well, unless you're Picasso, of course; and there's another way in which we can be like painters, claiming that "it's meant to be like that" when users report a bug.)
Perhaps in an overreaction to the Big Design Up-Front excesses of the 1990s, these days teams seem to skip the part where we establish an overall vision. Like a painter's initial rough pencil sketch, or a composer's basic song idea mapped out on an acoustic guitar, we really do need some rough idea of what the thing we're creating as a whole might look like. And, just as with pencil sketches and song ideas recorded on Dictaphones, we can allow it to change based on the feedback we get as we flesh it out, but still maintain an overall high-level vision. They don't take long to establish, and I would argue that if we can't sketch out our vision, then maybe - just maybe - it's because we don't have one yet.
Another thing they do that I find we usually don't (anywhere near enough) is continually step back and look at the thing as a whole while we're working on the details.
Watch a painter work, and you'll see they spend almost as much time standing back from the canvas just looking at it as they do up close making fine brush strokes. They may paint one brush stroke at a time, but the overall painting is what matters most. The vase of flowers may be rendered perfectly by itself, but if its perspective differs from its surroundings, it's going to look wrong all the same.
Movie directors, too, frequently step back and look at an edit-in-progress to check that the footage they've shot fits into a working story. They may film movies one shot at a time, but the overall movie is what matters most. An actor may perform a perfect take, but if the emotional pitch of their performance doesn't make sense at that exact point in the story, it's going to seem wrong.
When a composer writes a song, they will often play it all the way through up to the point they're working on, to see how the piece flows. A great melody or a killer riff might seem totally wrong if it's sandwiched in between two passages that it just doesn't fit in with.
This is why I recommend that developers, too, routinely take a step back and look at how the detail they're currently working on fits in with the whole. Run the application, walk through user journeys that bring you to that detail and see how that feels in context. It may seem like a killer feature to you, but a killer feature in the wrong place is just a wrong feature.
January 30, 2014
Software Design: Stick To The Story

One thing I try to remind teams of whenever we're thinking about the design process is to stick to the story.
A user story tells the story of how a user gets what they want by using some feature of the software. It's not much of a story, I grant you. Just the 30-second elevator pitch, really. A user story tells us just enough to know if the feature has any value and who to talk to about fleshing the story out.
Extreme Programming teams sit down with the customer who wrote the story and elaborate it into one or more examples, adding meat to the bones of our user story in the form of acceptance tests with example data to make them explicit.
When writing acceptance tests, bear in mind that we are still telling the story of how the user gets what they want from the software. More detail, but same story.
Acceptance tests may then feed into our design process. We may use them as the basis for a user experience design, creating storyboards illustrating how it will actually look as a GUI and what physical form the user's interactions will take (button clicking etc.)
Again, it's important to remember that our storyboards are telling the same story as the acceptance tests and the user story upon which they're based. More implementation detail, but same story.
Acceptance tests may also drive an internal software design that identifies modules or classes, the functions each module or class performs, and how these parts will interact with each other to co-ordinate the work.
And again, we must remember that our physical implementation design tells the exact same story that the acceptance tests, the UX storyboards and the original user story tell. More detail, same story.
The reason I think this is important is that I often see teams deviating from the story during the design process. A common example is UI wireframes that show details that aren't needed to tell the story. But I also catch teams adding internal "plumbing" that has nothing to do with telling the story.
All these extra bells and whistles that we might dream up while exploring implementation design are not serving the story, and all come at a cost. It's not uncommon to find costs doubling when teams add frills the customer didn't ask for. It's also not unheard of for teams to be so focused on their inventions that the user's original goals end up being overlooked.
I keep a notepad with me so, if I'm working on a feature and have a brainwave about how to make it better, I can talk to the customer about feeding that in as a new requirement.
So, when it comes to design, the trick is to stick to the story.
January 20, 2014
What is Customer-Driven Development, Anyway?

After that last blog post, a couple of people have asked "is 'Customer-driven Development' a thing?"
Well, it almost was. Let's take a trip down memory lane...
In the Dark Ages, before we went all Agile and Lean and Wotsit-driven, an idea kicked around that sought to reimagine software development as a system that only did things in response to external stimuli from customers.
Every process, every practice, every activity was to be triggered by some customer action (or some other event determined by the customer, like a milestone, a deadline, a business rule or trigger and so on.)
Yes; you can probably tell that this has the architect's fingerprints all over it. Naive as we were, we genuinely believed - well, some of us did, at any rate - that the way teams developed software could be modeled and shaped using the same techniques we used to model and shape the software itself. Development resources and artefacts were objects. Processes were state machines that acted on those objects, or were enacted by those objects. Development teams were systems, with use cases. Software development use cases were triggered by actors - agents outside of the system.
Crazy as it was, out of all that a handful of us nurtured the germ of the idea of Customer-driven Development (though we didn't call it that at the time).
Think of your development team as a server. It sits and listens, ticking over, waiting for a customer to make a request. It might come in the form of a feature request, or a bug report, or a request for the software to be released, that sort of thing.
Today, a better analogy might be that the development team is the engine of a motorboat, and the customer is the captain who steers the boat. When the customer says "go this way", the engine propels the boat that way. Okay, that's not a great analogy, either.
You get the idea, though; the customer is in the driving seat. The customer drives development.
For true Customer-driven Development, though, the customer needs suitable controls to drive with. If they lack the necessary hands-on control, there's a danger they just become passengers. I see that a lot. The customer is being driven by the developers, or by the project manager, or a product owner, or a business analyst. And they don't necessarily end up where they wanted to go, especially when these "taxi drivers" have ideas of their own about the direction the software should be taking.
To reduce this risk, customers need to be at the wheel. And the controls need to be designed with the customer in mind. Hence my previous post.
January 19, 2014
Customer-driven Development: How Our Dev Tools Forget The *Other* Team Member

Periodically I'm reminded just how bad we are at developing tools to be used in our own work.
Nowhere is this more obvious than in the clutch of tools that, in a sane and rational world, would be used by our customers and not by us.
Take deployment, for example. Deploying software should be the prerogative of the person paying for it. We make sure that it's always in a shippable state (right?). It then becomes a business decision to deploy the software when the customer thinks we've added enough value. So who should get to press the button? If deployment happens as a result of some command line hackery, then we've just taken the decision away from the customer. They have to ask.
Another example is customer acceptance testing; there are two places in this process where the customer ought to be in the driving seat, just as with deployment. First, the acceptance tests are theirs - in particular, the plain language versions of the tests (e.g., "Given... when... then...") and the associated examples (i.e., test data). So creating and managing these should happen using customer-facing tools. If you're one of those many, many teams who version control these acceptance tests using Git or some other VCS, then you've already failed in this respect. I've watched teams ask customers to edit and manage acceptance tests using Git and Vim. You can probably guess at the results.
The fact is, when we write these tools, we should be on the lookout for user stories that start "As the customer,..." and be mindful of their user experience as participants in the development process. Ideally, we could give them a joined-up experience, where their involvement is codified in maybe just one or two simple, intuitive tools. Presenting them with a mish-mash of domain-specific languages, Vim, GitHub, Jira and various other tools that even developers struggle with sometimes just doesn't cut the mustard. The upshot is that they tend to draw back from the process and disengage - the exact opposite of what we want to achieve, surely?
August 12, 2013
Usefulness Testing!

Over the millennia that my software career has spanned, I've attempted to promote the idea that we should make serious efforts to try out the programs we create in the context in which they're intended to be used (or as accurate a simulation of such contexts as we can manage).
I've mentioned before the Model Office, which is a simulated work environment into which we deploy software under development to see what happens when we try to use it in realistically recreated business scenarios.
Another example is testing in the field. If, say, your mobile app is designed to make it easy to find a hotel room near the location where you are, then send folk out with a test version into a variety of locations where you think they might need to use your app, and see how they get on.
It hasn't caught on.
Which is a shame, because, on those occasions when my teams have done it, testing software in a realistic context has proved to be very powerful. Typically, we learned more in an hour or two of such testing than we did in weeks or months of "requirements analysis" or sterile "usability testing".
I still cling to the hope that testing in context might become a thing - y'know, like BDD or Continuous Delivery.
Maybe it's a question of how it's marketed. And the first step in the promotion of a meme is to give it a snappy, easy-to-remember name. And it just occurred to me that we never named it, even though enlightened software teams have been doing it for decades.
It's not functional testing (although, arguably, it's what "functional testing" should really mean - but, heck, that name's already taken). It's not usability testing, because testing how easily end users can get to grips with our features doesn't ask the more important question of "why would they want to do that in the first place?"
The natural response of our industry is to try to tie software features to business goals; typically in a bureaucratic kind of a way. So we sit in meetings talking about goals, key performance indicators, Balanced Scorecards and all that malarkey, and try to square the circle through paperwork and pie charts and those little dotted lines with arrows on the end that Enterprise Architects are so fond of.
But real life is complicated. How many times have we seen a product work on paper, only to fall flat on its arse when it was deployed into the complex, multidimensional mess that is the Real World™?
Better, I've found, to start in the Real World and stick as close to it as we can throughout development.
So, I'm going to name this practice - which, admittedly, is usually the point where everyone immediately latches on to a misunderstanding of what they think the words mean and starts erecting statues to it and sacrificing their projects on the altar of it; but I'm willing to risk it all the same.
In fact, if you Google it, it already has a name: Usefulness Testing.
Because that is what it is. (Although, in fairness, most people have been using it to describe the doing-it-on-paper-in-meeting-rooms approach up to now, so I'm co-opting it to describe something far more, er, useful.)
So now, when your boss harps on about the different kinds of fancy testing you're going to do - with hifalutin names like "unit testing", "integration testing", "functional testing", "usability testing", "performance testing", "giant underwater Japanese radiation monster testing" and so on - you can smugly retort "Yeah, but we're going to do some Usefulness Testing first, right?", secure in the knowledge that it trumps all other kinds of software testing.
Because all the rest is a moot point if it fails that.
Yes, it has occurred to me that what many companies call "Beta testing" is actually, at least in part, Usefulness Testing. For sure, many teams learn the most important lessons from feedback from real users trying to use the software for real. For sure, many of the "bug reports" we get during Beta testing tend to be along the lines of "you built the wrong software".
But this happens far too late in the day, and in a very unstructured way. Surely, in our modern Agile world (because I know that's what you kids like these days), we can begin this process much earlier and seek such feedback right from the start and throughout development.
I advocate that it should be the Alpha and the Omega of software development: start with it, end with it, and keep doing it all the way through in between.
How can we test "usefulness" if we haven't written any software yet? One way would be to recreate how it is now, without your software, and iteratively introduce the software and test that - at the very least - it doesn't break your business. (Would that more teams at least did that!)
Other kinds of Usefulness Testing that a few teams are doing now can utilise low-fi prototypes of our software (e.g., wireframes, mock-ups, etc.) to allow us to walk through how the software will work in a given real-life situation, enabling us to validate our user experience designs in a much more meaningful context (though this is not a good replacement for actually testing the software in the actual real world, because of the inherent messiness of reality).
Going back to the hotel-finder app, you might start in the field with the Yellow Pages and a cell phone and build your understanding of the problems concerned with finding a room from there, then seek to make it genuinely, observably, actually easier using software.
July 8, 2013
The Key To Agile UI Design Is User Interface Refactoring

Agile folk have embraced the idea that the first release of a software product is just the start of a very long - potentially endless - journey.
What we can learn from the software we put out there tends to be more valuable than what we guessed at when we were writing it. And for that reason, the best software tends to be much more a product of feedback than of planning.
But it's widely known that this highly iterative approach we call "Agile" has been largely failing us when it comes to the design of the user interface. Our UIs tend towards a sort of Heath-Robinson clunkiness as they grow piecemeal to accommodate rapid shifts in our understanding of what's needed.
This can, and often does, happen under the covers, too. Many Agile teams find themselves hoist by the petard of 1001 architectural compromises as the design slips and slides from one user requirement to the next.
Unless, of course, they refactor continually and reshape and optimise the internal design as they go. I know a few teams who actually do that. (Gasp!) And they can produce very reliable software with very clean code under the hood that is easy to understand and ready to accommodate more change.
But their UIs... not so much.
I put it to you, dear reader, that this is because we have yet to embrace the concept and the discipline of continuous user interface refactoring.
Yes, there is such a thing as a "clean" user interface. User experience has its own organising principles and design heuristics - many of which echo those for internal design (e.g., UIs should be as simple as possible, UIs should be easy for the user to understand, etc.). But as we bolt on more and more features, there's a tendency for entropy at the UI level to increase. Very few teams do on-going work to minimise that entropy, and so the user interface just keeps getting worse - more complicated, less intuitive, clunkier, less responsive, and so on.
Now, UI design - and its trendy hipster equivalent, "user experience" - is a whole can of worms which I don't want to open here. Suffice to say, there are principles and there are patterns, and if you're minded to, you can investigate them and build your own "school of user interface design" that may suit what you're doing.
Assuming you've established your "school" of UI design, here are three ideas that I've found can help a heck of a lot to steer your UI refactoring:
1. Eat Your Own Dog Food
- Comically basic, but most teams aren't even doing this. When the people writing the software become so divorced from the experience of the end users, expect it to turn out badly. And I'm not just talking about running it through a few simple test cases before you hit the "deploy" button. Actually invest some real time and effort into doing the user's job with your software. It can be a real eye-opener.
2. Make Like The Real World
- I've seen this happen over and over again: teams trial their software in an environment that is nothing like the one in which it's going to be used. Hence mobile app developers discover that their software is useless in 80% of the places it's intended to be used, because data connections are too slow, and so on. Let's use a hypothetical hotel-finder smartphone app as an example. We think we've designed it so that you can find a room within walking distance of where you currently are. The most meaningful way to test it would be to send team members out into the Cursed Earth and see how they get on actually finding actual hotel rooms in actual real places, actually. If it's too difficult or expensive or dangerous to test in the real world, simulate it as faithfully as you can. The closer you can get to reality, the more meaningful the feedback it gives you.
3. Users Are Programmers, Too
- Yep, just like in programming. Using a GUI is visual programming, every bit as much as using an API is programming. One of the most useful resources a dev team can have is detailed logs of how their visual programming language is being used. These logs can be analysed to find UI "smells" like over-complexity and duplication of effort, and used as a more formal basis for refactoring the UI to eliminate these smells. In particular, buried in these logs will be groupings of instructions that hint at missing functionality. Since we haven't given them the means to easily express themselves and what they want the computer to do, they may well find a way using the clunkier language we did give them.
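As a rough sketch of what mining those logs might look like: count every short run of consecutive UI actions and flag the ones that recur. The event names and log format here are invented for illustration; real analysis would group by user and session.

```python
# Sketch: mine UI event logs for repeated action sequences - "smells" that
# may hint at duplication of effort or a missing feature. Illustrative only.
from collections import Counter


def frequent_sequences(events, length=3, min_count=2):
    """Count every run of `length` consecutive UI actions; keep the common ones."""
    ngrams = Counter(
        tuple(events[i:i + length]) for i in range(len(events) - length + 1)
    )
    return {seq: n for seq, n in ngrams.items() if n >= min_count}


# A session where the user repeatedly exports, renames, and re-imports a file -
# a workaround that might hint at a missing "duplicate item" feature.
log = ["open", "export", "rename", "import", "edit",
       "export", "rename", "import", "edit", "close"]
print(frequent_sequences(log))
```

Recurring sequences like `("export", "rename", "import")` are exactly the "groupings of instructions that hint at missing functionality" - the user programming around a gap in the language we gave them.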