May 20, 2011


User Interface Design Is About Speaking The User's Language

So I find myself unexpectedly with a free day, and I just wanted to expand on some of the things I hinted at in yesterday's blog post about UI design.

In my own experience, a user interface is a language that allows the user to interactively "program" the computer. Some UI languages are text-based (command-line applications, and the older computing paradigms we used in the '80s on our Acorns, Spectrums and Commodores), but more usually these days they're visual languages.

One of the goals of a good UI is to not require the user to learn a new language. The UI should allow them to express themselves in terms they already understand, presenting them with easily recognisable concepts which map closely to their own mental models of the application domain.

In the Agile Design workshop, pairs/threes work as a single team to establish a UI design for a community DVD library. I very deliberately ask them to design and implement the internal application logic first before considering the user interface. Many will tell you this is wrong, wrong, wrong, but I'm here to tell you that they are wrong, wrong, wrong and double-wrong.

Language design starts with the semantics - what do we need to say? - before establishing a conceptual abstract syntax that would allow us to say it. Only afterwards do we then decide upon a concrete syntax. The user interface is the concrete syntax of your application.

So we take our user's stories - the things they wish to say with this application language - and develop a conceptual logical model - an abstract syntax - with which they could say it.

In the workshop, teams often end up with a domain model that looks a bit like the one below. Once they've established that model, they can then decide on visual representations for those concepts that the user should recognise and attach the same meaning to that our application will.
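To make that concrete, here's a minimal sketch in Java of the kind of model I mean. The class and method names are just my assumptions for illustration; teams come up with their own.

// A minimal sketch of the kind of domain model teams arrive at.
// Class and method names are illustrative assumptions, not the
// workshop's actual model.
import java.util.ArrayList;
import java.util.List;

class Member {
    private final String name;

    Member(String name) { this.name = name; }

    // Borrowing is something a Member does to a Copy
    Loan borrow(Copy copy) {
        return copy.loanTo(this);
    }
}

class Title {
    private final String name;
    private final List<Copy> copies = new ArrayList<Copy>();

    Title(String name) { this.name = name; }

    Copy addCopy() {
        Copy copy = new Copy(this);
        copies.add(copy);
        return copy;
    }

    List<Copy> copies() { return copies; }
}

class Copy {
    private final Title title;
    private Loan currentLoan;

    Copy(Title title) { this.title = title; }

    boolean isAvailable() { return currentLoan == null; }

    Loan loanTo(Member member) {
        if (!isAvailable()) {
            throw new IllegalStateException("Copy is already on loan");
        }
        currentLoan = new Loan(member, this);
        return currentLoan;
    }

    void returned() { currentLoan = null; }
}

class Loan {
    private final Member member;
    private final Copy copy;

    Loan(Member member, Copy copy) {
        this.member = member;
        this.copy = copy;
    }
}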



At this point, what we're doing is very similar to what we might do if developing a visual Domain-Specific Language (e.g., a UML profile with custom icons) for our application. In fact, that's exactly what we're doing.

First we can come up with graphical symbols for the classes in our domain. This establishes, if you like, the characters in our user stories.

Then we can combine these symbols to express relationships between the characters:






We can vary the appearance of a symbol to express that it is in one of a number of discrete states (e.g., a copy of a DVD can be Available, On Loan or Damaged):
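A simple way to capture that in code is an enumeration of the discrete states, each mapped to its own visual representation. The icon file names here are made up purely for illustration.

// Discrete states a copy can be in, each mapped to its own icon so the
// user can see the state at a glance. Icon file names are assumptions.
enum CopyStatus {
    AVAILABLE("copy_available.png"),
    ON_LOAN("copy_on_loan.png"),
    DAMAGED("copy_damaged.png");

    private final String iconFile;

    CopyStatus(String iconFile) { this.iconFile = iconFile; }

    String getIconFile() { return iconFile; }
}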



To make our UI design useful, of course, we need to not just take care of the nouns and adjectives in our user's mental model of the application domain. We also need to allow them to express the verbs. (For a story without verbs is a story in which nothing interesting ever happens.)

Actions that can be applied to objects need to be mapped onto actions that a user can perform to invoke that behaviour. The word here is "gesture". What gestures does our user interface allow? Well, if it's a PC or a laptop, we can do things like click a mouse button while the cursor is over an object, or hit a key. Just as a guitar player has to learn that pressing a finger down on that string at that fret and plucking the string means "A sharp", a user must learn that double-clicking on a DVD title may mean "select that DVD and then show me its copies", or double-clicking on a copy may mean "I wish to borrow that copy".
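Here's a rough sketch of what that gesture mapping might look like, using Swing purely as an example. The JList, and the Member and Copy classes from the earlier sketch, are assumptions.

import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import javax.swing.JList;

// A sketch of wiring the double-click gesture on a list of copies to the
// "I wish to borrow that copy" action.
class BorrowOnDoubleClick extends MouseAdapter {
    private final JList copyList;
    private final Member member;

    BorrowOnDoubleClick(JList copyList, Member member) {
        this.copyList = copyList;
        this.member = member;
    }

    @Override
    public void mouseClicked(MouseEvent e) {
        if (e.getClickCount() == 2) {                          // the double-click gesture...
            Copy selected = (Copy) copyList.getSelectedValue();
            if (selected != null && selected.isAvailable()) {
                member.borrow(selected);                       // ...means "borrow that copy"
                copyList.repaint();                            // show the effect straight away
            }
        }
    }
}

Wiring it up when the view is built is then just a matter of calling copyList.addMouseListener(new BorrowOnDoubleClick(copyList, member)).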

The dominant paradigm for graphical user interfaces is object-action. We select an object, and the UI tells us what actions we can apply to it. For example, when you right-click on a file in your file explorer, a context-sensitive menu pops up listing the actions that you can apply to that file. We've known since the 1970s that it works far less intuitively for the user when it's the other way around ("action-object"). If you don't believe me, ask your mother to operate her PC using only the command line.

The actions that apply to objects are the methods that can be invoked on an object. We seek to make those actions available through our user interface when that object has the focus.

But any method may have pre-conditions that mean we can only invoke it under certain circumstances. For example, we cannot borrow a copy of a DVD that is already on loan to another member. So the actions available for an object must be exposed only when they are actually allowed.
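Here's a rough sketch of how that might look, assuming a Swing context menu built for whichever copy the user has selected, with each action enabled only when its pre-condition holds. The class and method names are again made up for illustration.

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JMenuItem;
import javax.swing.JPopupMenu;

// A sketch of gating the actions on offer by their pre-conditions: build a
// context menu for the selected copy and enable each action only when the
// model says it's allowed.
class CopyActions {
    JPopupMenu menuFor(final Copy copy, final Member member) {
        JPopupMenu menu = new JPopupMenu();

        JMenuItem borrow = new JMenuItem("Borrow");
        borrow.setEnabled(copy.isAvailable());       // pre-condition: not already on loan
        borrow.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                member.borrow(copy);
            }
        });
        menu.add(borrow);

        JMenuItem giveBack = new JMenuItem("Return");
        giveBack.setEnabled(!copy.isAvailable());    // pre-condition: must currently be on loan
        giveBack.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                copy.returned();
            }
        });
        menu.add(giveBack);

        return menu;
    }
}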



In many good OO user interface designs, users are offered more than one way to perform an action. For example, they may be able to invoke it from the application's main menu, or from a context-sensitive menu, as well as by clicking on the object itself, and so on.

When an object only supports one kind of action, or has an action that could be considered the default action, we can often give the user a simple default "shortcut" (e.g., a single keystroke or a double-click).

By presenting users with a choice of ways to invoke an action, we can greatly increase their chances of figuring out a way to do what they want.

When an action is performed that changes the state of objects in the system - or has some testable outcome - it's important to ensure that what the user sees afterwards clearly reflects this change of state. Partly so that they know it actually worked, but also to help them learn the language of the user interface. The meaning of any action is the effect it has, so it helps enormously to be able to see what that effect is when we invoke an action. Without that feedback, users can be left somewhat in the dark.
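A sketch of what closing that feedback loop might look like, reusing the illustrative Copy and CopyStatus from earlier: the handler changes the model and then immediately refreshes what the user sees.

import javax.swing.ImageIcon;
import javax.swing.JLabel;

// A sketch of making the effect of an action visible: change the model,
// then immediately update the view. CopyIconView and its methods are
// assumptions, not a prescribed design.
class CopyIconView {
    private final Copy copy;
    private final JLabel iconLabel;

    CopyIconView(Copy copy, JLabel iconLabel) {
        this.copy = copy;
        this.iconLabel = iconLabel;
    }

    void borrowFor(Member member) {
        member.borrow(copy);    // change the state of the model...
        refresh();              // ...and reflect that change straight away
    }

    void refresh() {
        CopyStatus status = copy.isAvailable() ? CopyStatus.AVAILABLE : CopyStatus.ON_LOAN;
        iconLabel.setIcon(new ImageIcon(status.getIconFile()));
        iconLabel.setToolTipText("This copy is " + status.name().toLowerCase().replace('_', ' '));
    }
}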

And because users don't always know what effect an action is going to have until they see what effect it's had, as well as because we all make mistakes from time to time, it's also very important that your application can "forgive" mistakes. Allowing users to undo actions has long been established as a vitally important factor in user interface design.
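One well-worn way to get there, sketched here purely as an illustration, is to represent each user action as a command object that knows how to reverse itself, with the application keeping a history of what's been done. The class names are assumptions.

import java.util.ArrayDeque;
import java.util.Deque;

// A sketch of making actions forgivable: each action is a command that
// knows how to undo itself, and the application keeps a history.
interface UndoableAction {
    void execute();
    void undo();
}

class BorrowCopyAction implements UndoableAction {
    private final Member member;
    private final Copy copy;

    BorrowCopyAction(Member member, Copy copy) {
        this.member = member;
        this.copy = copy;
    }

    public void execute() { member.borrow(copy); }
    public void undo()    { copy.returned(); }      // put things back as they were
}

class ActionHistory {
    private final Deque<UndoableAction> done = new ArrayDeque<UndoableAction>();

    void perform(UndoableAction action) {
        action.execute();
        done.push(action);
    }

    void undoLast() {
        if (!done.isEmpty()) {
            done.pop().undo();
        }
    }
}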

When an action cannot be undone, the application should make it clear, giving the user the opportunity to double-check before they commit to it. Such messages should make it clear what the effect of the action will be.
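For example, something along these lines, using a standard Swing confirmation dialog and the illustrative Title and Copy classes from earlier; the wording and the method names are my own assumptions.

import javax.swing.JFrame;
import javax.swing.JOptionPane;

// A sketch of double-checking before an irreversible action and spelling
// out what its effect will be.
class RemoveCopyCommand {
    void removeWithConfirmation(JFrame mainWindow, Title title, Copy copy) {
        int choice = JOptionPane.showConfirmDialog(
                mainWindow,
                "This will permanently remove this copy from the library and cannot be undone. Continue?",
                "Remove Copy",
                JOptionPane.OK_CANCEL_OPTION,
                JOptionPane.WARNING_MESSAGE);
        if (choice == JOptionPane.OK_OPTION) {
            title.copies().remove(copy);
        }
    }
}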

Of course, user interface designers will seldom speak in these terms: object, action, state, and so on. The industry has mostly concerned itself with more traditional computing design concepts like "workflow" and "process".

In object oriented user interfaces, workflows are the stories the application allows the user to tell, in very much the same way that, in OO programming, workflows are the processes that collaborating objects achieve together.

By driving the design of our application directly from these stories, but without hard-baking those workflows in (in the style of, say, a text-driven greenscreen application), we can not only create interfaces that are easier for users to understand, but also interfaces that allow users to express themselves in ways we didn't plan for, telling new stories using the same characters in our conceptual model.

Throughout the process, it's vitally important that the users are closely involved in design. Your user stories should come from them, and the interaction designs and UIs you come up with to tell those stories should be fed back to them in a highly iterative and collaborative manner.

Only when you have a rough-and-ready, but useful and usable UI should you even consider handing it over to a graphic designer to smooth out the edges and make it look slick and appealing.

Test your UI designs constantly. Use the user stories and system scenarios you get from the users, but also ask your testers to help you identify scenarios you've missed through exploratory testing of the UI.

Consider using these two powerful usability testing techniques:

1. Find potential users who have not been involved in design and haven't seen the UI before. Using mock-ups and rough prototypes, set them functional goals and then observe them trying to use your UI design to achieve those goals. Ask them to verbalise their thoughts as they do it ("I think if I click on this it will let me borrow that DVD" and so on). Ask them questions about what they think each symbol represents, what they think they can do to that object, etc etc. Give them no help until they get stuck and don't know what to do next. That can indicate where your UI isn't speaking their language. Feed that straight back into your designs and iterate with both the same users and new users.

2. Rig up a simulated environment in which the software will be used. For example, build a real DVD library: get some shelves, stick a bunch of DVDs on them organised into genres. Try and run the library for a day, with a bunch of people acting as members, and walk through as many scenarios as you can, using the software in the context it was designed for. You'd be amazed what you learn that no amount of system testing would throw up.




