May 15, 2008


User Interfaces Are A Graphical Domain-Specific Language

When I occasionally stop to think more deeply about software development, I'm often drawn to the concept of languages.

Languages are integral to programming, of course. We use programming languages like Java, C# and Ruby to give instructions to our computers on behalf of our customers.

And when I design an API, I'm also defining a language. A set of symbols that, when invoked, have an underlying meaning expressed in another - usually lower-level - language.
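To make that a little more concrete, here's a tiny sketch in Java - the class and method names are invented purely for the example. The public methods are the "symbols" of a small language, and their underlying meaning is spelled out in the lower-level code behind them:

```java
import java.util.ArrayList;
import java.util.List;

// A hypothetical API: its public methods are the "symbols" of a small language.
public class PhotoAlbum {

    private final List<String> photos = new ArrayList<String>();

    // The symbol "add" - its underlying meaning is expressed in
    // lower-level terms (list manipulation) inside the method body.
    public void add(String photoName) {
        photos.add(photoName);
    }

    // The symbol "contains" - again, the meaning lives in the lower-level code.
    public boolean contains(String photoName) {
        return photos.contains(photoName);
    }
}
```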

Domain-specific languages are another way in which languages pervade the computing landscape. A NAnt script is written in a domain-specific language, for example. There is a set of symbols that NAnt understands (written as XML) and is able to interpret and execute to compile, test and deploy our applications for us.

But there's one kind of computing language that I think we've maybe overlooked. While a programmer like me or you uses a programming language to issue instructions to the computer, the users of our applications express their instructions through a user interface. A UI defines another kind of language, with symbols and gestures that have an underlying meaning - defined by the code that is executed when they invoke the symbols - and rules that govern their use.



Take drag and drop, for example. Selecting an image file in a folder and moving it to another folder is communicated through the user interface by clicking the mouse on that file, then dragging and dropping the file's icon onto the target folder's icon.

That sequence of interactions with the user interface triggers code to be executed that - hopefully - does what the user expects to the objects inside the system. And with an effective user interface design, this process is intuitive enough that the user doesn't have to learn a completely new language that's alien to them. By using recognisable and meaningful symbols for files and folders, and relying on intuitive real-world gestures like pointing, selecting, dragging and dropping, the user interface can present users with a way to productively commune with their machine.
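Here's a minimal sketch of that idea in Java - every class name is hypothetical, and no real windowing toolkit is involved. The gesture itself is just syntax; its meaning is the operation it triggers on the underlying objects:

```java
import java.util.HashMap;
import java.util.Map;

class Document {
    final String name;
    Document(String name) { this.name = name; }
}

class Folder {
    private final Map<String, Document> contents = new HashMap<String, Document>();

    void add(Document doc) { contents.put(doc.name, doc); }
    void remove(Document doc) { contents.remove(doc.name); }
    boolean contains(Document doc) { return contents.containsKey(doc.name); }
}

class DragAndDropHandler {
    // Invoked when the user drops a file's icon onto a folder's icon.
    // The gesture carries no meaning of its own; it simply identifies the
    // participants. The meaning is the underlying "move" operation.
    void onDrop(Document dragged, Folder source, Folder target) {
        source.remove(dragged);
        target.add(dragged);
    }
}
```

Notice that only the last class is tied to the GUI at all; the move itself could be invoked from anywhere.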

The same interactions could be expressed as, say, a FitNesse test, and we could use FitNesse as a kind of surrogate user interface. But FitNesse isn't as intuitive as, say, Windows Explorer, and would not be a suitable medium for users to interact with their software.
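For instance, a fixture along these lines - written against the Fit library's ColumnFixture, with the class, column and helper names all invented for the example - could drive exactly the same move-a-file logic from a wiki table instead of a GUI:

```java
import fit.ColumnFixture;

// Backs a hypothetical FitNesse table such as:
//
// | MoveFileFixture |
// | file    | targetFolder   | moved() |
// | cat.jpg | Holiday Photos | true    |
//
public class MoveFileFixture extends ColumnFixture {

    public String file;          // input column
    public String targetFolder;  // input column

    // Output column: exercises the same underlying logic the
    // drag-and-drop gesture would trigger.
    public boolean moved() {
        FileStore store = new FileStore();
        store.add(file);
        store.move(file, targetFolder);
        return store.folderContains(targetFolder, file);
    }
}

// Stand-in for the real domain code, just to keep the sketch self-contained.
class FileStore {
    private final java.util.Map<String, String> fileToFolder =
            new java.util.HashMap<String, String>();

    void add(String file) { fileToFolder.put(file, "Inbox"); }
    void move(String file, String folder) { fileToFolder.put(file, folder); }
    boolean folderContains(String folder, String file) {
        return folder.equals(fileToFolder.get(file));
    }
}
```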

Having said that, the logic of these interactions - the semantics, if you like - would be identical. Which is why I'm becoming increasingly convinced that UI design should start by establishing the underlying semantics of the interactions and evolve towards a concrete syntax - an actual graphical user interface - driven by our understanding of the syntax-independent logic of how the software will be used.

One technique that I might even dare to call "Agile" would be to express user stories using FitNesse tests - or something similarly UI-agnostic - and then apply a graphical transformation to the tests. So where we see a type of object playing a specific role in an interaction, we select or design a graphical icon for it that will be recognisable to the users. And where actions occur to our objects, we choose or design a gesture that intuitively represents that action - like dragging and dropping.
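As a sketch of what that transformation might look like in code - every name here is invented, and this isn't a real framework - we could build up a kind of graphical vocabulary: one icon per type of object that appears in the tests, one gesture per action performed on them:

```java
import java.util.HashMap;
import java.util.Map;

// A sketch of the "graphical transformation": map the types of objects that
// appear in the UI-agnostic tests to icons, and the actions performed on
// them to gestures. All names here are hypothetical.
public class GraphicalVocabulary {

    private final Map<String, String> icons = new HashMap<String, String>();
    private final Map<String, String> gestures = new HashMap<String, String>();

    public void representObject(String objectType, String icon) {
        icons.put(objectType, icon);
    }

    public void representAction(String action, String gesture) {
        gestures.put(action, gesture);
    }

    public static void main(String[] args) {
        GraphicalVocabulary vocabulary = new GraphicalVocabulary();

        // Objects that play roles in the tests get recognisable icons...
        vocabulary.representObject("image file", "photo-thumbnail.png");
        vocabulary.representObject("folder", "folder.png");

        // ...and actions on those objects get intuitive gestures.
        vocabulary.representAction("move file to folder", "drag and drop");
    }
}
```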

In your typical development process, this would mean creating the software driven by executable specifications first, and then building a user interface once we'd established that the logic works.


Posted on May 15, 2008