April 19, 2013
Dark Life & New Ways Of Seeing

This article in the Guardian about how some astrobiologists theorise that there could be a "hidden biosphere" that has evolved on Earth in parallel with the tree of life from which we sprang reminded me of that age-old problem of how we can expect to find things we're not looking for.
We similarly overlooked a big chunk of the mass of the universe because we were looking only at electromagnetic radiation, seeing just the matter that emits or reflects electromagnetic waves.
In software development, we too can naively interpret our inability to see something as the non-existence of the thing we can't see. Typically, we can't see it because we're not looking for it.
These could be, for example, the bugs nobody tested for. Like dark matter, and dark life, the bugs are still there, and they can still bite. But I've seen too many teams apply the strategy of not looking, as if that somehow means those bugs don't exist. This is like covering our faces and assuming that, because we can't see other people, they can't see us.
New ways of seeing are therefore vitally important. We can "see" dark matter by measuring its gravitational effects. And we could see dark life by applying tests for a wider set of biological possibilities. Then a whole new world (or universe) emerges out of the shadows, and our understanding is expanded.
Developers may believe their multithreaded code has few bugs, but that may be because they haven't tested it in multithreaded scenarios. They may believe their software is easy to use, but that may be because they haven't tested it with users who weren't involved in the design. They may believe their software is performant, but that may be because they haven't tested it under a high load. They may believe their classes are loosely coupled, but that may be because they haven't looked at a graph of class dependencies.
New ways of seeing offer up new possible understandings. And I can't help feeling we, as an industry, invest far too little in expanding our senses so we can expand our understanding of software. Too much of it is about "looking at text files", and I find that limits our vision and restricts our understanding.
September 13, 2012
We Can Learn A Lot About Collaborative Design From Aardman

Scientists have learned a great deal about humans by studying other animals and looking for similar attributes (and differences) that mark out what it means to be "human".
In particular, we've learned an enormous amount from studying our closest cousins, the Great Apes.
I've been pondering what software development's closest cousins might be, and what we could learn from them.
While watching Aardman's The Pirates In An Adventure With Scientists, it suddenly struck me that perhaps the endeavour that most closely resembles software development is animation.
We face strikingly similar problems to animators.
Firstly, we're both trying to tell compelling stories. Software, when it's done well, has a clear narrative, just like an animated movie. This narrative can be expressed in many ways, and - just as it is with animation - the process of producing working software can be thought of as telling and re-telling the story, adding more detail and refining it until the story's told in executable code.
The second similarity is that we both have to overcome the extreme difficulty of taking care of millions of tiny details without losing sight of the big picture.
Programming is inherently fiddly; far too fiddly for most people to be bothered with. What other kind of person would devote the lion's share of their lives to the kind of minutiae we do? Well, animators for one.
A single animation unit working on a film like "Pirates" might produce 4 seconds of usable action in a week. Each second of film is made up of 24 frames, each of which has to be painstakingly manipulated, with dozens of changing details to keep track of from frame to frame.
And yet, working one frame at a time, tracking myriad interconnected elements, Aardman are able to produce something miraculous; something that many live action films fail to capture - comic timing.
Fight scenes, chase scenes, comedy - all of this is hard enough to get right shooting at 24 frames a second. To execute it so perfectly working one individual frame at a time requires something that, sadly, too many software teams lack - a clear vision.
The split-second timing and the exquisite dynamics of an Aardman animation are no accident. The mechanics of the overall narrative, every scene and every shot are carefully choreographed with storyboards, animatics (rough animated storyboards) and with people performing the action to match the voice recordings of the actors, so that the animators can see how it should look and work towards realising that vision.
And with as many as 40 units working on different shots at any given time, this vision not only needs to be clear but it also must be a shared vision.
The rules that apply to each character - including non-living characters like the ocean and the wind - have to be clearly established so that no matter which team is animating those characters, they behave in a way that's consistent to their character. It would do little for the movie if the Pirate Captain inexplicably moved and behaved in 40 different ways through the movie, depending on who was animating him.
The objects in our software - however you choose to interpret that word - are the characters in our stories. As the design evolves and grows, it's extremely important to maintain a clear shared vision of those objects and how they behave, as well as the narratives in which those objects play a part.
Watching "Pirates", something else jumps out at me; the extraordinary consistency of quality. Aardman have very high standards, and these standards seem to have been applied across the board.
I don't doubt that there were animators working on that film with less experience than some of the others. I don't doubt that some animators were probably learning the craft on the job. Where else would Aardman's great animators come from? Work of that scope and quality isn't evident in art and film schools. I suspect you can only really learn to make films of Aardman quality working for someone like Aardman.
But there's not a scrap of evidence for less experienced animators in the movie. Every scene and every shot is sublime. If someone was screwing up, then it must have ended up on the cutting room floor or at the back of shot where nobody noticed.
The greatest animators are masters of collaborative design. I believe there's much we could learn from companies like Aardman about telling compelling stories, about establishing a clear shared vision, about getting the tiniest details right while not losing sight of our "comic timing", and about committing to consistently high standards of quality.
July 27, 2012
Great Software Ideas #4751 - Eat Your Own Dog Food

Here's a random Friday thought to end the week before the behemoth we call "The Olympics" shuts London down for 2 weeks. (Imagine what £11 billion could have done for, say, science! But, hey, running and jumping's important, too. Right?)
Anyhoo, moan moan moan and so on.
One thing that often strikes me on software projects is how unaware developers can sometimes be of what it's like to use their software.
It's a bit like customer service. Here in the UK, we're famed throughout the world for our truly awful customer service. We complain endlessly about the poor service we get from companies, while failing to see the irony that this poor customer service is being dished out by - well, not to put too fine a point on it - us.
A lot of businesses have no idea that their products and services suck. When you watch these TV shows where the boss goes "back to the floor", they always seem genuinely surprised to discover that all is not well in their company.
This obliviousness may be commonplace in software. Our reputation as an industry for quality is by no means enviable. And I'm sure we've all had experiences with tech support that suggest that, just maybe, software companies are also blissfully unaware that their products suck to one degree or another.
Rather than bury our heads in the sand, or, worse, get angry and defensive about it ("I mean, obviously, if you want to send a blind carbon copy of the document you press the button with the picture of an Elf on it! Duh!"), perhaps matters could be improved if more of us tasted our own dog food.
I led a team on a job seekers web site many moons ago, and the most damning verdict I can give on it today is that, when seeking the exact kind of work this site specialises in, I've never used it. I did try once, months afterwards, and quickly decided it wasn't working for me.
Looking back, I should have tried it while we were iterating the design. I might then have noticed how cumbersome and clunky it was, or how off-the-mark the search results were, and how out of date the job postings were.
The site was designed entirely from the advertiser's point of view, it transpires, with barely any lip service paid to job seekers.
Many times since, I've made a point, when I can, of becoming a user of the software I'm working on - though that's not always possible (e.g., a private banking web site that requires a minimum investment of £100,000). But it should almost always be possible to simulate that experience, at least. In the case of the bank, for example, we could create a mirror version that uses simulated money against real financial instruments and play a Monopoly Money version of being a real user. This relates back to the Model Office idea I talked about in the last blog post.
If you make a promise to yourself today to eat your own dog food, I would expect it to have quite a profound effect on your attitude to design and development. There should be at least some part of us that's aligned with the users, and wants what they want. Or is at least capable of understanding why they want it and why it's important to them.
I've found no better way of understanding our users than walking a mile in their shoes.
July 26, 2012
Empirical Design & Testing In The Wild

Ponderings and musings on that question of why we code.
I'm not talking about why I am a programmer. That's easy - I enjoy it. Really, it's the question of "why software?"
It's no secret that, as an industry, we tend to be solution-led. We figure out how to do something often before we've thought of a good reason for doing it.
Maybe it's because we enjoy inventing solutions more than we do solving problems. Who knows?
And it's fair to say that it can cut both ways. Many times, we have a solution sitting on the shelf gathering dust because nobody found a use for it, and then one day someone made that connection to a real problem and said "hey, you know what we could use for that?"
But I'm seeing far too many solutions-looking-for-problems out there. CRM is the classic case-in-point. Large organisations know that they want it, but what is the goal of CRM? All too often, they can't articulate their reasons for wanting a particular CRM (or ERP, or whatever) solution. They just want it, and there's some vague acknowledgement that it might make things better somehow.
I suspect some of the most successful software solutions have attached themselves to problems almost by accident. How often have you seen software being used for something that it wasn't intended to be used for? Who said, for example, that Twitter was an open messaging solution, and not the micro-blogging solution it was designed to be? As a micro-blogging solution, it's arguably a failure. What it's turned out to be is something like AOL Instant Messenger, but anyone can join in the conversation.
Successes like Twitter and Facebook occur by providence more than by design. Users discover things they can do with the software, projecting their own use cases into it and working around the available features to find ways to exploit the underlying computerific nature of the beast.
Strip away the brand names and the logos and the unique designs, and you're left with a fundamental set of use cases upon which all software is based to some degree or another.
We're not supposed to use it that way, but for a great many users, Microsoft Excel is a database solution. Indeed, I've seen Microsoft Word used as a database solution. You can store structured data in it. Ergo, it's a database.
You see, people have problems. And when all's said and done, software is nothing more than an interface to the computer that they can use to solve their problems. A user interface of any kind presents us with a language we can use to communicate with the computer, and users can be very creative about how they use that language. In Word, it may well be "add row to table", but in the user's mind it's "add item to order" or "register dog with kennel".
So too in Twitter, posting an update on my "micro-blog" might actually mean something else to me. I might be sending an open message to someone. I might be alerting followers to an interesting documentary I'm watching on TV at that moment. I might be asking for technical support. I've seen Twitter used in so many different ways.
I'm fascinated by watching people use software, and especially by the distance between their own internal conceptual model of what they think they're doing (adding an item to an order) and what the software thinks they're doing (adding a row to a table).
For me, these are the most enlightening use cases. What do people actually do using our software?
When I examine usage logs, I often find patterns of repeated sequences of user interactions. When I was younger and more naive, I believed that these revealed a need to offer further automation (e.g., wizards) to speed up these repetitive tasks, and to an extent that's usually true. It's a very mechanistic way of looking at these patterns.
But now I suspect that what these patterns reveal is more profound than that.
Imagine examining a log of instructions sent to the CPU of your computer. You would undoubtedly find much repetition. But tracing those patterns up through the technology stack, we will discover that these repetitions are a product of sequences of instructions defined at increasingly higher levels of abstraction - layers of languages, if you like. A simple expression or statement in Java might result in a whole sequence of machine instructions. A method containing multiple statements might result in even longer sequences. And a user interface or domain-specific language (which, by the way, is also a user interface, and vice-versa) might ultimately invoke many such methods with each interaction.
What I'm suggesting is that there can often be an unspoken - usually unacknowledged - language that sits above the user interface. This is the language of what the user intends.
And for all our attempts to define this user language up-front (with use cases and user stories), I don't think I've ever seen software where the mapping between software features and user intentions was precisely 1-to-1. When I resolve to watch closely, I've always found the user working around the software to at least some extent to get what they really want.
Inevitably, we don't get it right first time. Which is why we iterate. (We do iterate, right?) But what is that iteration based on? What are we feeding back in that helps to refine the design of our software?
It's my contention that requirements analysis and UI/UX design should be as much - if not more - an activity based on watching what users do with our software as it is on asking them what they want to do before we write it.
User acceptance testing helps us agree that we delivered what we agreed we should, but we need to go further. It's not enough to know that users can do what we expected they should be able to do using the software, because so much software gets its real value from being misused.
And it's not enough that we observe people using our software in captivity, under controlled conditions and sticking to the agreed scripts. We need to know what they'll likely do with it in the wild.
Going forward, here's how I plan to adapt my thinking about software design:
I plan to shift even more of the effort to redesign. I plan to base redesign not on wishy-washy "customer feedback" but on detailed, objective observations taken from the real world (or as near as damn-it) as to how the software's actually being used. Repetition and patterns in real-world usage data will reveal that there are goals and concepts I must have missed, and I will examine the patterns and the data, and then use that as input to ongoing collaborative analysis and redesign with the users.
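To make that concrete, here's a minimal sketch of what mining those repetitions might look like - the event names are hypothetical, and real usage logs would of course need parsing and filtering first:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class UsagePatternMiner {

    // Count every consecutive sequence of 'windowSize' events in the log
    public static Map<List<String>, Integer> countSequences(
            List<String> events, int windowSize) {
        Map<List<String>, Integer> counts = new HashMap<List<String>, Integer>();
        for (int i = 0; i + windowSize <= events.size(); i++) {
            List<String> window =
                    new ArrayList<String>(events.subList(i, i + windowSize));
            Integer seen = counts.get(window);
            counts.put(window, seen == null ? 1 : seen + 1);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Hypothetical UI event log: the same run of clicks keeps recurring,
        // hinting at an unexpressed user intention ("add item to order")
        List<String> log = Arrays.asList(
                "openOrder", "addRowToTable", "typeItemCode", "typeQuantity",
                "openOrder", "addRowToTable", "typeItemCode", "typeQuantity",
                "printOrder");

        for (Map.Entry<List<String>, Integer> entry
                : countSequences(log, 3).entrySet()) {
            if (entry.getValue() > 1) {
                System.out.println(entry.getValue() + " x " + entry.getKey());
            }
        }
    }
}
```

Sequences that recur across many users and sessions are candidates for goals and concepts the design is missing.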
I will keep doing this until no more usage patterns emerge and the design now encapsulates all of those missing goals and concepts, at which point hopefully the conceptual language of my software will be a 1-to-1 match for the user's.
I plan to refine this approach so that less and less we present users with our interpretation of what we think they need, and more and more we allow the patterns that emerge from continued usage to inform us what really needs to be in the software.
I consider this to be a scientific, empirical approach to software design. Design based on careful observation, which is then tested and retested based on further observations until what we observe is a precise match for what our users intend.
In iterative design, every design iteration is a theory, and every theory must be thoroughly tested by experiment. My feeling is that, for all these years, I've been doing the experiments wrong. And this has meant that the feedback going into the next iteration is less meaningful.
The whole point of iterative design is that we want to converge on the best design possible with the time and resources available to us. The $64,000 question is: converge on what? How do we know if we're getting hotter or colder?
That final test has always felt somehow lacking to me. We deliver some working software, the customer tests it to see that it's what we agreed it should be, and then we move on to the next iteration, where - instead of refining the design - we usually just add more features to it.
It's never felt right to me. In theory, the customer could come back and say "okay, so it does what we agreed, but now here are my changes to what we agreed for the next iteration". But they generally don't. That gets put off, and put off, and put off. Usually until a major roll-out, which is where most testing in the wild happens, and where most of the really meaningful feedback tends to come from.
This is one of Agile's dirty little secrets. The majority of the teams are doing short increments and loooong iterations. The real learning doesn't start until a great deal of the software's already been written. And then, thanks to Agile's other dirty little secret (Unclean Code), there's less we can do about it. Usually bugger all, in fact.
Of course, we're not going to be allowed to deploy software that doesn't have the minimum viable set of features into a real business - any more than we'd be allowed to cut the ribbon on 10% of a suspension bridge - which is why I favour testing software in the most realistic simulations of the real world as possible.
Whenever I mention the idea of a "model office" I hear murmurs of approval. Everyone thinks it's a good idea. So, naturally, nobody does it*.
But if you want to get that most meaningful feedback, and therefore converge on the real value in your software, testing in captivity isn't going to work. You need to be able to observe end users trying to do their jobs, live their lives and organise their pool parties using your software. If you can't observe them in the wild, you need to at least create a testing environment that can fool them into thinking they're in the wild, so you can observe them using it in the way they naturally would.
That's my idea, basically. Deploy your software into the wild (or a very realistic simulation of it) and carefully and objectively observe what your real users do with it in realistic situations. Look for the patterns in that detailed usage data. Those patterns are goals and concepts that matter to your users which your software doesn't encapsulate. Make your software encapsulate those patterns. Then rinse and repeat until your software and your users are speaking exactly the same language.
* You think I'm kidding? Seriously, using a model office to test your software in is THE best idea in software development. Bar none. Nothing gets you closer to your users faster, except for actually becoming them. Nothing reveals the true nature of the user's problems, and the real gaps in your software, more directly. Nothing. NO-THING!
And I bet you still won't use one.
February 24, 2012
Agile Design - How A Bit Of Informal Visual Modeling Can Save A Heap Of Heartache

All my courses are, of course, fine holiday fun. But the Agile Design workshop's especially enjoyable, as it brings together a whole range of disciplines while challenging participants to work effectively together in designing and implementing different features of the same simple system.
The group works in pairs (or threes, depending on the overall numbers). After a bit of a crash course in basic UML - use cases, class diagrams and sequence diagrams - each pair is given a user story for a community DVD library, and tasked with iteratively fleshing out an object oriented design to pass an acceptance test agreed with the customer (me).
In a break from the traditional approach, we turned the design process around - arguably the right way round - and spent day #1 telling the story using plain old objects, designing and implementing a functioning domain model that includes all the concepts and functions required to pass the tests.
On day #2, we look at how these concepts and functions should be presented to the end users, designing a graphical user interface and retelling the story, this time through the GUI.
The impetus behind the course is to help teams avoid the design train wreck that can ensue when Agile teams pick up stories and go off into their silos to do the design for their part of the overall system. I've seen very experienced teams end up with duplicated classes, database tables, multiple architectures and disjoints in the same code base.
Using informal visual models in a collaborative design approach can aid us in externalising our thinking so that other people can see how what they're doing fits in with what everyone else is doing.
Getting the team around the whiteboard to explore shared concepts like the domain model, the screenflow of the user interface or the patterns used in the technical architecture - especially in the earlier stages of development - can draw out misunderstandings and disjoints that might otherwise have only come to light in integration, when these issues can be much more costly to fix (and therefore often never get fixed).
Importantly, teams are soon testing their designs by implementing them in code (test-driven, of course), and important design decisions and changes to the shared vision that happen as a result of making the designs work for real can be visualised and communicated by sketching them out on flipchart paper or on whiteboards and keeping them around the team's work area for everyone to see.
On the course, teams discover just how much active collaboration's needed to coordinate design effectively, and how important it is to take the time to resolve design issues and conflicts at the whiteboard when they can. Pairs need to be going out of their way to find out what the other pairs are working on. In real life, we tend to put a wholly inadequate amount of effort into collaborative design, and our ad hoc, inconsistent, and sometimes just plain wrong, designs can be the end result.
The more visible our work is, the easier it is to bring design issues out into the open early, and the sooner we're able to establish a shared language for meaningfully talking about our designs.
And we're not just talking about developers here, either. Testers and graphic designers can play an active and valuable role in this process, as well as the customer, of course. They should take an active interest in establishing the design of use cases, in designing UI storyboards and screenflows, and in designing good acceptance tests that will effectively constrain our designs to what will meet the customer's real needs.
That's why I love this workshop. You get a buzz and an energy in the room, and a real sense of "stuff happening" and of progress being made. And it incorporates disciplines like continuous integration, TDD and BDD (or, as I know it, "TDD with a B instead of a T"), making it a much closer fit to real-world Agile Software Development.
January 28, 2012
Non-functional Test-Driven Development

It's the question that comes up every time I introduce someone to Test-driven Development: "But what about performance?"
The thing about TDD is that the adage "be careful what you wish for" applies. The solution we end up with is constrained by tests. There may be a million and one ways of achieving a goal, and some will perform better than others. The trick with TDD is to ask the right questions.
What I like about TDD - and similar precise approaches to defining requirements - is that it forces us to be explicit and unambiguous about what we want from our software.
So, my stock in trade reply to the question "But what about performance?" is "yes, what about performance?"
Software performance has different dimensions, and if it's important then we need to define exactly what performance we require in specific scenarios. A great way to do this is using non-functional tests.
There's the dimension of time, for example. How long should it take for the code to run?
Imagine a search algorithm that looks for a customer name in a sorted list. We could just loop through the list, and if there are only 1,000 customers and the occasional search, that might be fine. But if there are 10,000,000 customers and users are frequently searching, then a simple loop probably isn't going to cut the mustard.
We can constrain our search algorithm with a basic timing test, like the one below, that makes it explicit that our worst case search - the customer we're looking for isn't in the list - should take a maximum of 1 millisecond to complete.
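Something like this would do it - a minimal sketch, assuming JUnit 4 and a hypothetical findCustomer method backed by a binary search (the names and limits are illustrative):

```java
import static org.junit.Assert.assertTrue;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.junit.Test;

public class CustomerSearchPerformanceTest {

    // Hypothetical search under test: a binary search over the sorted list
    private int findCustomer(List<String> sortedNames, String name) {
        return Collections.binarySearch(sortedNames, name);
    }

    @Test
    public void worstCaseSearchTakesNoMoreThanOneMillisecond() {
        List<String> names = new ArrayList<String>();
        for (int i = 0; i < 10000000; i++) {
            names.add(String.format("Customer%08d", i));
        }
        String missing = "Zzz, Not In The List"; // worst case: no match

        // Warm up the JIT so we measure steady-state performance
        for (int i = 0; i < 1000; i++) {
            findCustomer(names, missing);
        }

        long start = System.nanoTime();
        findCustomer(names, missing);
        long elapsedMicros = (System.nanoTime() - start) / 1000;

        assertTrue("Worst-case search took " + elapsedMicros + " microseconds",
                elapsedMicros <= 1000);
    }
}
```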
Execution time is only one dimension, of course. What if we need to constrain the memory footprint of our code while it's running? In Java, we can use the JVM to get information about memory usage, and we can create a multithreaded test to monitor how much more memory is being eaten up as our code executes. Let's imagine we need to constrain the memory footprint when sorting our list of 10 million customers by name, forcing us to use an in-place sorting algorithm that uses up a maximum of another 10KB of memory.
And here's what such a test might look like, with massive caveats for my less-than-amazing knowledge of the Java Runtime (I make no warranties, the value of shares can go down as well as up, etc etc):
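A sketch along those lines, again assuming JUnit 4. Note that Collections.sort here stands in for a genuinely in-place algorithm - it actually copies the list into a working array, so a real in-place sort (a heapsort over the list, say) would be needed to pass this test:

```java
import static org.junit.Assert.assertTrue;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

import org.junit.Test;

public class SortMemoryFootprintTest {

    // Stand-in for the real in-place sort under test
    private void sortInPlace(List<String> customers) {
        Collections.sort(customers);
    }

    private long usedMemory() {
        Runtime runtime = Runtime.getRuntime();
        return runtime.totalMemory() - runtime.freeMemory();
    }

    @Test
    public void sortingTenMillionCustomersUsesAtMostTenExtraKilobytes()
            throws InterruptedException {
        List<String> customers = new ArrayList<String>();
        for (int i = 10000000; i > 0; i--) {
            customers.add(String.format("Customer%08d", i));
        }

        System.gc(); // encourage a clean baseline reading (no guarantees)
        final long baseline = usedMemory();
        final AtomicLong peak = new AtomicLong(baseline);

        // A second thread samples memory usage while the sort runs
        Thread monitor = new Thread(new Runnable() {
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    long used = usedMemory();
                    if (used > peak.get()) {
                        peak.set(used);
                    }
                }
            }
        });
        monitor.setDaemon(true);
        monitor.start();

        sortInPlace(customers);

        monitor.interrupt();
        monitor.join();

        long extraBytes = peak.get() - baseline;
        assertTrue("Sorting used " + extraBytes + " extra bytes",
                extraBytes <= 10 * 1024);
    }
}
```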
Leaving aside the fact that my brute-force method for calculating memory footprint is a bit hokey (and on running the tests several times, quite variable, it seems), the basic idea is hopefully useful. No doubt some fine fellow will point out a much better way.
You may be able to envisage now how we could use tests to explicitly constrain other non-functional runtime qualities of our code. But we can also often find ways to constrain code at design time, too.
We might have a requirement that our methods should be short and simple. Static code analysis tools like XDepend and Checkstyle can give us hooks into the structure of our code and enable us to create tests that, when code fails to live up to our quality standards, alert us to that fact early enough to do something meaningful about it.
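As a homegrown illustration of the idea - a hypothetical sketch, not Checkstyle's actual API, with an arbitrary standard of 400 lines per file - a test might simply walk the source tree and fail when any file breaches the agreed limit:

```java
import static org.junit.Assert.assertTrue;

import java.io.File;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.util.List;

import org.junit.Test;

public class CodeQualityTest {

    private static final int MAX_LINES_PER_FILE = 400; // our (arbitrary) standard

    @Test
    public void noSourceFileExceedsAgreedMaximumLength() throws IOException {
        checkDirectory(new File("src"));
    }

    // Recursively check every .java file under the given directory
    private void checkDirectory(File dir) throws IOException {
        File[] children = dir.listFiles();
        if (children == null) {
            return;
        }
        for (File child : children) {
            if (child.isDirectory()) {
                checkDirectory(child);
            } else if (child.getName().endsWith(".java")) {
                List<String> lines =
                        Files.readAllLines(child.toPath(), Charset.forName("UTF-8"));
                assertTrue(child + " is " + lines.size() + " lines long",
                        lines.size() <= MAX_LINES_PER_FILE);
            }
        }
    }
}
```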
Using executable tests, we can steer our software between acceptable limits of performance, scalability, portability, maintainability, and a whole heap of other -ilities we might care about.
But what about the more, how shall we put this, etheric -ilities, like usability, accessibility and so on? These things tend to be pretty ill-defined and qualitative. Can we make them explicit and testable, just like execution time or memory footprint?
I believe that we can, and not without reason, because I've done it and seen it done. We could, say, define a test that fails if a carefully selected group of target users (e.g., legal secretaries with more than 2 years of Windows and web browsing experience), when presented with our application for the first time, fail to get their heads around it fast enough to complete certain tasks we set them within a specified time, without any help or documentation.
With a bit of imagination and lateral thinking, it's possible to meaningfully test many more software qualities than we usually do. And my experience of non-functional TDD is that we tend to get what we have tests for, and we tend not to get what we don't have tests for. So agreeing executable non-functional tests tends to lead to better non-functional software quality, if it's done well.
As I warned before, though, be very careful what you wish for.
July 15, 2011
Heath Robinson User-Created Features

Just a quick example of a Google+ user (Rich Kiker) tweeting about how to mimic a wanted feature from Twitter using another feature of Google+:
"Want to save favorites in #GPlus? Create an empty circle and share fav posts with that circle. All favs stored. Done."
This is actually very common. Users often find ingenious ways to shoehorn in features they find useful that we didn't think of.
In design terms, this is like those paths we find in parks that nobody deliberately built, but users have worn by repeatedly taking shortcuts, going the way they wanted to go, not necessarily the way we planned that they should.
The smart park ranger takes the hint and puts a proper path there. The unenlightened park ranger will put up a sign saying "keep off the grass".