September 13, 2012
We Can Learn A Lot About Collaborative Design From Aardman
Scientists have learned a great deal about humans by studying other animals and looking for similar attributes (and differences) that mark out what it means to be "human".
In particular, we've learned an enormous amount from studying our closest cousins, the Great Apes.
I've been pondering what software development's closest cousins might be, and what we could learn from them.
While watching Aardman's The Pirates In An Adventure With Scientists, it suddenly struck me that perhaps the endeavour that most closely resembles software development is animation.
We face strikingly similar problems to animators.
Firstly, we're both trying to tell compelling stories. Software, when it's done well, has a clear narrative, just like an animated movie. This narrative can be expressed in many ways, and - just as it is with animation - the process of producing working software can be thought of as telling and re-telling the story, adding more detail and refining it until the story's told in executable code.
The second similarity is that we both have to overcome the extreme difficulty of taking care of millions of tiny details without losing sight of the big picture.
Programming is inherently fiddly; far too fiddly for most people to be bothered with. What other kind of person would devote the lion's share of their lives to the kind of minutiae we do? Well, animators for one.
A single animation unit working on a film like "Pirates" might produce 4 seconds of usable action in a week. Each second of film is made up of 24 frames, each of which has to be painstakingly manipulated, with dozens of details changing from frame to frame that the animators have to keep track of.
And yet, working one frame at a time, tracking myriad interconnected elements, Aardman are able to produce something miraculous; something that many live action films fail to capture - comic timing.
Fight scenes, chase scenes, comedy - all of this is hard enough to get right shooting at 24 frames a second. To execute it so perfectly working one individual frame at a time requires something that, sadly, too many software teams lack - a clear vision.
The split-second timing and the exquisite dynamics of an Aardman animation are no accident. The mechanics of the overall narrative, every scene and every shot are carefully choreographed with storyboards, animatics (rough animated versions of the storyboards) and with people performing the action to match the voice recordings of the actors, so that the animators can see how it should look and work towards realising that vision.
And with as many as 40 units working on different shots at any given time, this vision not only needs to be clear but it also must be a shared vision.
The rules that apply to each character - including non-living characters like the ocean and the wind - have to be clearly established so that no matter which team is animating those characters, they behave in a way that's consistent with their character. It would do little for the movie if the Pirate Captain inexplicably moved and behaved in 40 different ways throughout the movie, depending on who was animating him.
The objects in our software - however you choose to interpret that word - are the characters in our stories. As the design evolves and grows, it's extremely important to maintain a clear shared vision of those objects and how they behave, as well as the narratives in which those objects play a part.
Watching "Pirates", something else jumps out at me; the extraordinary consistency of quality. Aardman have very high standards, and these standards seem to have been applied across the board.
I don't doubt that there were animators working on that film with less experience than some of the others. I don't doubt that some animators were probably learning their craft on the job. Where else would they get their great animators from? Work of that scope and quality isn't something you see coming out of art and film schools. I suspect you can only really learn to make films of Aardman quality working for someone like Aardman.
But there's not a scrap of evidence of less experienced animators in the movie. Every scene and every shot is sublime. If someone was screwing up, then it must have ended up on the cutting room floor or at the back of a shot where nobody noticed.
The greatest animators are masters of collaborative design. I believe there's much we could learn from companies like Aardman about telling compelling stories, about establishing a clear shared vision, about getting the tiniest details right while not losing sight of our "comic timing", and about committing to consistently high standards of quality.
August 2, 2012
Back To Basics #1 - Software Should Have Testable Goals
This is the first in a series of 10 posts covering the most basic principles of software development. This is me thinking out loud about what I would seek to impart to someone learning how to be a software developer (or to be a better software developer), without ruining their delicate minds with hype, buzzwords, brand names and snake oil.
No, seriously, though. Why?
Whenever I ask a software team this question, the response is usually a lot of handwaving and management-speak and magic beans.
Most teams don't know why they're building the software they're building. Most customers don't know why they're asking them to either.
If I could fix only one thing in software development (as opposed to no things, which is my current best score), it would be that teams should write software for a purpose.
By all means, if it's your time and your money at stake, play to your heart's content. Go on, fill your boots.
But if someone else is picking up the cheque, then I feel we have a responsibility to try and give them something genuinely worthwhile for their money.
Failing to understand the problem we're trying to solve is the number one failure in software development. It stands to reason: how can we hope to succeed if we don't even know what the aim of the game is?
It will always be the first thing I test when I'm asked to help a team. What are your goals, and how will you know when you've achieved them (or are getting closer to achieving them)? How will you know you're heading in the right direction? How can one measure progress on a journey to "wherever"?
Teams should not only know what the goals of their software are, but those goals need to be articulated in a way that makes it possible to know unambiguously if those goals are being achieved.
As far as I'm concerned, this is the most important specification, since it describes the customer's actual requirements. Everything else is a decision about how to satisfy those requirements. And yet far too many teams have no idea what their actual requirements are. They just have proposed solutions.
Yes, a use case specification is not a business requirement. Ditto user stories. It's a system design. A wireframe outline of a web application is very obviously a design. Acceptance tests of the BDD variety are also design details. Anything expressed against system features is a design.
Accepted wisdom, when presented with a feature request we don't understand the need for, is to ask "why?" In my experience, asking "why?" is a symptom that we've been putting the cart before the horse, and doing things arse-backwards.
We should have started with the why and figured out what features or properties or qualities our software will need to achieve those goals.
Not having the goals clearly articulated has a knock-on effect. Many other ills reported in failed projects seem to stem from the lack of testable goals. Most notably, poor reporting of progress.
How can we measure progress if we don't know where we're supposed to be heading? "Hey, Dave, how's that piece of string coming?" "Yep, good. It's getting longer."
But also, when the goals are not really understood, people can have unrealistic expectations about what the software will do for them. Or rather, what they'll be able to do with the software.
There's also the key problem of knowing when we're "done". I absolutely insist that teams measure progress against tested outcomes. If it doesn't pass the tests, it's 0% done. Measuring progress against tasks or effort leads to the Hell of 90% Done, where developers take a year to deliver 90% of the product, and then another 2 years to deliver the remaining 90%. We've all been there.
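To make "progress against tested outcomes" concrete, here's a minimal sketch (the story names, numbers and structure are mine, purely for illustration): a story only counts towards progress when all of its acceptance tests pass.

```python
# A minimal, illustrative sketch: a story contributes to progress only when
# ALL of its acceptance tests pass. Partially-done work counts for nothing.
from dataclasses import dataclass

@dataclass
class Story:
    name: str
    tests_passed: int
    tests_total: int

    def is_done(self) -> bool:
        # "most of the tests pass" still means not done
        return self.tests_total > 0 and self.tests_passed == self.tests_total

def progress(stories: list[Story]) -> float:
    done = sum(1 for s in stories if s.is_done())
    return done / len(stories) if stories else 0.0

backlog = [
    Story("Borrow a title", 12, 12),       # done
    Story("Reserve a new release", 7, 9),  # 0% done, not 78%
    Story("Donate a title", 0, 5),         # 0% done
]

print(f"Progress: {progress(backlog):.0%}")  # Progress: 33%
```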
But even enlightened teams, who measure progress entirely against tested deliverables, are failing to take into account that their testable outcomes are not the actual end goals of the software. We may have delivered 90% of the community video library's features, but will the community who use it actually make the savings on DVD purchases and rentals they're hoping for when the system goes live? Will the newest titles be available soon enough to satisfy our film buffs? Will the users donate the most popular titles, or will it all just be the rubbish they don't want to keep any more? Will our community video library just be 100 copies of "The Green Lantern"?
It's all too easy for us to get wrapped up in delivering a solution and lose sight of the original problem. Information systems have a life outside of the software we shoehorn into them, and it's a life we need to really get to grips with if we're to have a hope of creating software that "delights".
In the case of our community video library, if there's a worry that users could be unwilling to donate popular titles, we could perhaps redesign the system to allow users to lend their favourite titles for a fixed period, and offer a guarantee that if it's damaged, we'll buy them a new copy. We could also offer them inducements, like priority reservations for new titles. All of this might mean our software will have to work differently.
So, Software Development Principle #1 is that software should have testable goals that clearly articulate why we're creating it and how we'll know if those goals are being achieved (or not).
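To give a flavour of what a testable goal might look like (everything here - the data source, the figures, the 20% threshold - is invented for illustration), sticking with the community video library, one goal could be pinned down as a test we run against real usage or survey data:

```python
# Illustrative sketch of a goal expressed as an executable test (pytest-style).
# Goal: members spend at least 20% less per month on buying and renting DVDs
# than they did before the library launched. All figures are made up.
BASELINE_MONTHLY_SPEND = 22.50   # average member spend before launch
TARGET_REDUCTION = 0.20

def average_monthly_spend(monthly_spends: list[float]) -> float:
    return sum(monthly_spends) / len(monthly_spends)

def test_members_are_saving_money_on_dvds():
    # In reality this data would come from member surveys or purchase records;
    # here it's just a hypothetical sample.
    current = average_monthly_spend([18.00, 15.50, 21.00, 12.75, 19.20])
    saving = 1 - (current / BASELINE_MONTHLY_SPEND)
    assert saving >= TARGET_REDUCTION, (
        f"Members are only saving {saving:.0%}; the goal was {TARGET_REDUCTION:.0%}"
    )
```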
Coming soon, Back To Basics #2.
July 27, 2012
Great Software Ideas #4751 - Eat Your Own Dog Food
Here's a random Friday thought to end the week before the behemoth we call "The Olympics" shuts London down for 2 weeks. (Imagine what £11 billion could have done for, say, science! But, hey, running and jumping's important, too. Right?)
Anyhoo, moan moan moan and so on.
One thing that often strikes me on software projects is how unaware developers can sometimes seem of what it's like to use their software.
It's a bit like customer service. Here in the UK, we're famed throughout the world for our truly awful customer service. We complain endlessly about the poor service we get from companies, while failing to see the irony that this poor customer service is being dished out by - well, not to put too fine a point on it - us.
A lot of businesses have no idea that their products and services suck. When you watch these TV shows where the boss goes "back to the floor", they always seem genuinely surprised to discover that all is not well in their company.
This obliviousness may be commonplace in software. Our reputation as an industry for quality is by no means enviable. And I'm sure we've all had experiences with tech support that suggest that, just maybe, software companies are also blissfully unaware that their products suck to one degree or another.
Rather than bury our heads in the sand, or, worse, get angry and defensive about it ("I mean, obviously, if you want to send a blind carbon copy of the document you press the button with the picture of an Elf on it! Duh!"), perhaps matters could be improved if more of us tasted our own dog food.
I led a team on a job seekers web site many moons ago, and the most damning verdict I can give on it today is that, when seeking the exact kind of work this site specialises in, I've never used it. I did try once, months afterwards, and quickly decided it wasn't working for me.
Looking back, I should have tried it while we were iterating the design. I might then have noticed how cumbersome and clunky it was, or how off-the-mark the search results were, and how out of date the job postings were.
The site was designed entirely from the advertiser's point of view, it transpires, with barely lip service paid to job seekers.
Many times since, I've made a point, if I can, of becoming a user of the software I'm working on - though that's not always possible (e.g., a private banking web site that requires a minimum investment of £100,000). But it should almost always be possible to simulate that experience, at least. In the case of the bank, for example, we could create a mirror version that uses simulated money against real financial instruments and play a Monopoly Money version of being a real user. This relates back to the Model Office idea I talked about in the last blog post.
If you make a promise to yourself today to eat your own dog food, I would expect it to have quite a profound effect on your attitude to design and development. There should be at least some part of us that's aligned with the users, and wants what they want. Or is at least capable of understanding why they want it and why it's important to them.
I've found no better way of understanding our users than walking a mile in their shoes.
July 26, 2012
Empirical Design & Testing In The Wild
Ponderings and musings on that question of why we code.
I'm not talking about why I am a programmer. That's easy - I enjoy it. Really, it's the question of "why software?"
It's no secret that, as an industry, we tend to be solution-led. We figure out how to do something often before we've thought of a good reason for doing it.
Maybe it's because we enjoy inventing solutions more than we do solving problems. Who knows?
And it's fair to say that it can cut both ways. Many times, we have a solution sitting on the shelf gathering dust because nobody found a use for it, and then one day someone made that connection to a real problem and said "hey, you know what we could use for that?"
But I'm seeing far too many solutions-looking-for-problems out there. CRM is the classic case-in-point. Large organisations know that they want it, but what is the goal of CRM? All too often, they can't articulate their reasons for wanting a particular CRM (or ERP, or whatever) solution. They just want it, and there's some vague acknowledgement that it might make things better somehow.
I suspect some of the most successful software solutions have attached themselves to problems almost by accident. How often have you seen software being used for something that it wasn't intended to be used for? Who said, for example, that Twitter was an open messaging solution, and not the micro-blogging solution it was designed to be? As a micro-blogging solution, it's arguably a failure. What it's turned out to be is something like AOL Instant Messenger, but anyone can join in the conversation.
Successes like Twitter and Facebook occur by providence more than by design. Users discover things they can do with the software, projecting their own use cases into it and working around the available features to find ways to exploit the underlying computerific nature of the beast.
Strip away the brand names and the logos and the unique designs, and you're left with a fundamental set of use cases upon which all software is based to some degree or another.
We're not supposed to use it that way, but for the majority, Microsoft Excel is a database solution. Indeed, I've seen Microsoft Word used as a database solution. You can store structured data in it. Ergo, it's a database.
You see, people have problems. And when all's said and done, software is nothing more than an interface to the computer that they can use to solve their problems. A user interface of any kind presents us with a language we can use to communicate with the computer, and users can be very creative about how they use that language. In Word, it may well be "add row to table", but in the user's mind it's "add item to order" or "register dog with kennel".
So too in Twitter, posting an update on my "micro-blog" might actually mean something else to me. I might be sending an open message to someone. I might be alerting followers to an interesting documentary I'm watching on TV at that moment. I might be asking for technical support. I've seen Twitter used in so many different ways.
I'm fascinated by watching people use software, and especially by the distance between their own internal conceptual model of what they think they're doing (adding an item to an order) and what the software thinks they're doing (adding a row to a table).
For me, these are the most enlightening use cases. What do people actually do using our software?
When I examine usage logs, I often find patterns of repeated sequences of user interactions. When I was younger and more naive, I believed that these revealed a need to offer further automation (e.g., wizards) to speed up these repetitive tasks, and to an extent that's usually true. It's a very mechanistic way of looking at these patterns.
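As a sketch of what that mechanistic first pass might look like (the log format and action names here are invented), you can get surprisingly far just counting repeated sub-sequences of actions:

```python
# Illustrative sketch: find the most frequently repeated sequences of user
# actions in a usage log. Action names and log format are invented.
from collections import Counter

def repeated_sequences(actions, min_len=2, max_len=4, min_count=2):
    counts = Counter()
    for n in range(min_len, max_len + 1):
        for i in range(len(actions) - n + 1):
            counts[tuple(actions[i:i + n])] += 1
    return [(seq, c) for seq, c in counts.most_common() if c >= min_count]

log = ["open_document", "add_table_row", "add_table_row", "save",
       "open_document", "add_table_row", "add_table_row", "save"]

for seq, count in repeated_sequences(log):
    print(count, "x", " -> ".join(seq))
# A sequence like "add_table_row -> add_table_row -> save" cropping up again
# and again might be the footprint of a user-level intention ("add item to
# order") that the software doesn't yet express directly.
```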
But now I suspect that what these patterns reveal is more profound than that.
Imagine examining a log of instructions sent to the CPU of your computer. You would undoubtedly find much repetition. But tracing those patterns up through the technology stack, we will discover that these repetitions are a product of sequences of instructions defined at increasingly higher levels of abstraction - layers of languages, if you like. A simple expression or statement in Java might result in a whole sequence of machine instructions. A method containing multiple statements might result in even longer sequences. And a user interface or domain-specific language (which, by the way, is also a user interface, and vice-versa) might ultimately invoke many such methods with each interaction.
What I'm suggesting is that there can often be an unspoken - usually unacknowledged - language that sits above the user interface. This is the language of what the user intends.
And for all our attempts to define this user language up-front (with use cases and user stories), I don't think I've ever seen software where the mapping between software features and user intentions was precisely 1-to-1. When I resolve to watch closely, I've always found the user working around the software to at least some extent to get what they really want.
Inevitably, we don't get it right first time. Which is why we iterate. (We do iterate, right?) But what is that iteration based on? What are we feeding back in that helps to refine the design of our software?
It's my contention that requirements analysis and UI/UX design should be as much - if not more - an activity based on watching what users do with our software as it is on asking them what they want to do before we write it.
User acceptance testing helps us agree that we delivered what we agreed we should, but we need to go further. It's not enough to know that users can do what we expected they should be able to do using the software, because so much software gets its real value from being misused.
And it's not enough that we observe people using our software in captivity, under controlled conditions and sticking to the agreed scripts. We need to know what they'll likely do with it in the wild.
Going forward, here's how I plan to adapt my thinking about software design:
I plan to shift even more of the effort to redesign. I plan to base redesign not on wishy-washy "customer feedback" but on detailed, objective observations taken from the real world (or as near as damn it) as to how the software's actually being used. Repetition and patterns in real-world usage data will reveal that there are goals and concepts I must have missed, and I will examine the patterns and the data, and then use that as input to ongoing collaborative analysis and redesign with the users.
I will keep doing this until no more usage patterns emerge and the design now encapsulates all of those missing goals and concepts, at which point hopefully the conceptual language of my software will be a 1-to-1 match for the user's.
I plan to refine this approach so that, less and less, we present users with our interpretation of what we think they need, and, more and more, we allow the patterns that emerge from continued usage to inform us what really needs to be in the software.
I consider this to be a scientific, empirical approach to software design. Design based on careful observation, which is then tested and retested based on further observations until what we observe is a precise match for what our users intend.
In iterative design, every design iteration is a theory, and every theory must be thoroughly tested by experiment. My feeling is that, for all these years, I've been doing the experiments wrong. And this has meant that the feedback going into the next iteration is less meaningful.
The whole point of iterative design is that we want to converge on the best design possible with the time and resources available to us. The $64,000 question is: converge on what? How do we know if we're getting hotter or colder?
That final test has always felt somehow lacking to me. We deliver some working software, the customer tests it to see that it's what we agreed it should be, and then we move on to the next iteration, where - instead of refining the design - we usually just add more features to it.
It's never felt right to me. In theory, the customer could come back and say "okay, so it does what we agreed, but now here are my changes to what we agreed for the next iteration". But they generally don't. That gets put off, and put off, and put off. Usually until a major roll-out, which is where most testing in the wild happens, and where most of the really meaningful feedback tends to come from.
This is one of Agile's dirty little secrets. The majority of the teams are doing short increments and loooong iterations. The real learning doesn't start until a great deal of the software's already been written. And then, thanks to Agile's other dirty little secret (Unclean Code), there's less we can do about it. Usually bugger all, in fact.
Of course, we're not going to be allowed to deploy software that doesn't have the minimum viable set of features into a real business - any more than we'd be allowed to cut the ribbon on 10% of a suspension bridge - which is why I favour testing software in the most realistic simulation of the real world possible.
Whenever I mention the idea of a "model office" I hear murmurs of approval. Everyone thinks it's a good idea. So, naturally, nobody does it*.
But if you want to get that most meaningful feedback, and therefore converge on the real value in your software, testing in captivity isn't going to work. You need to be able to observe end users trying to do their jobs, live their lives and organise their pool parties using your software. If you can't observe them in the wild, you need to at least create a testing environment that can fool them into thinking they're in the wild, so you can observe them using it in the way they naturally would.
That's my idea, basically. Deploy your software into the wild (or a very realistic simulation of it) and carefully and objectively observe what your real users do with it in realistic situations. Look for the patterns in that detailed usage data. Those patterns are goals and concepts that matter to your users which your software doesn't encapsulate. Make your software encapsulate those patterns. Then rinse and repeat until your software and your users are speaking exactly the same language.
* You think I'm kidding? Seriously, using a model office to test your software in is THE best idea in software development. Bar none. Nothing gets you closer to your users faster, except for actually becoming them. Nothing reveals the true nature of the user's problems, and the real gaps in your software, more directly. Nothing. NO-THING!
And I bet you still won't use one.
July 8, 2012
Testing The Testers - A Vague Hiring Process
Over Sunday lunch with a tester friend today, I got to thinking about testing interviews.
There have been quite a lot of good ideas floating around recently about interviewing developers (e.g., Hibri Marzook's Pair Programming Interviews workshop at SC2012), and I've seen testers put through their paces with what are essentially developer interviews, too.
But testing is not programming - though programming may well be involved. On the principle that if you want to see if a juggler can juggle, ask to see them juggle, what kind of practical techniques could we use to put a tester through his or her paces?
What occurred to me over lunch is that there'd be three distinct areas I'd look into.
The most obvious is the tester's ability to find bugs. Bring them in (after some basic vetting to weed out the testers who, let's face it, just aren't - still too many of those about, sadly) and sit them down with a copy of some software in which there are known bugs. Then give them a fixed amount of time to find those bugs, and document them in a useful way (i.e., how to reproduce them.)
This is sort of a human variant of mutation testing. We test the tester by introducing known defects into the code and then see if they can find them.
We could make it more meaningful by introducing the bugs in places where bugs would be more likely to lurk (long/complex methods, multithreaded code accessing global variables etc) so that they could use their understanding of the relationship between code and quality to make educated guesses. You could also include an incomplete automated test suite so they could look for parts of the software that aren't being tested, where bugs are more likely to lurk. You could even be really cheeky and leave a test failing, to see if they even bother to check. You might also like to leave them a pile of user stories with points assigned to them by the customer for relative value, or feature usage statistics, to test their ability to not only find bugs, but find the most important ones first.
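To give a flavour of the kind of seeded defect I have in mind (an invented example, not from any real exercise), you might plant an off-by-one in code the automated tests don't fully cover and see whether the candidate sniffs it out:

```python
# Hypothetical seeded defect for the exercise. The intended rule: loans last
# 7 days, and every day beyond that costs 50p.
def late_fee(days_on_loan: int) -> float:
    LOAN_PERIOD = 7   # days
    DAILY_FEE = 0.50
    if days_on_loan <= LOAN_PERIOD:
        return 0.0
    days_late = days_on_loan - LOAN_PERIOD + 1   # BUG: off by one, charges an extra day
    return days_late * DAILY_FEE

# The deliberately incomplete test suite never exercises a late return,
# so the defect lurks in untested code - exactly where a good tester
# should think to look:
def test_no_fee_within_loan_period():
    assert late_fee(5) == 0.0
    assert late_fee(7) == 0.0
```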
There's more to being a tester than finding bugs, of course. So the second thing I'd want to look into is the tester's ability to drive out the details of what a customer wants and "bridge the communication gap", as Gojko Adzic puts it.
One way I thought of might be to get a "customer" - a non-technical domain/application expert - to describe features of an existing piece of software to our candidate. The candidate can ask questions and use examples and test cases to firm up their understanding of what it should actually be like, eventually agreeing a set of acceptance test scripts for each feature with the "customer". Because this software actually exists, we can execute these tests against a running version of it, and test the tests, effectively.
Finally, these days, a tester often needs to be a programmer - and a pretty handy one at that. So my third focus would be on programming skills, probably with an emphasis on automating tests. I might ask them to write Selenium scripts for the acceptance tests they agreed for this existing piece of software, looking not only for test automation abilities, but also clean code and generally good dev instincts.
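To make that last part concrete, the sort of thing I'd be looking for might be a script along these lines (Selenium's Python bindings here, and the site URL and element IDs are entirely invented) for one of the acceptance tests agreed with the "customer":

```python
# Illustrative acceptance test automated with Selenium's Python bindings.
# The URL and element IDs are invented for the sake of the example.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_search_returns_matching_titles():
    driver = webdriver.Chrome()
    try:
        driver.get("https://videolibrary.example.com/search")
        driver.find_element(By.ID, "search-box").send_keys("Green Lantern")
        driver.find_element(By.ID, "search-button").click()
        results = driver.find_elements(By.CSS_SELECTOR, ".search-result .title")
        assert any("Green Lantern" in r.text for r in results), \
            "Expected at least one matching title in the results"
    finally:
        driver.quit()
```

And I'd be looking at more than whether it runs: is it readable, does it clean up after itself, would it survive a small change to the page?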
Realistically, you might be looking at a whole day to put a tester through their paces, but this could be a progression. If they can't find bugs, probably not much point moving on to the next stage, so it might only be a whole day if you're actually any good.
And then there's the whole question of team fit. Sure, they may have the technical chops, but can this person actually work well with us? Maybe round the day off, if they get through all the previous stages, with a Team Dojo with the candidate fulfilling tester duties.
So, in practice, how might I do it? I think I might run it as elimination rounds. Invite sixteen of the best candidates in to do the bug-finding exercise, and select the best eight from that to do the "customer" exercise, and the best four from that to do a pair programming interview to check their dev skills, and the two remaining after that participate in a Team Dojo to determine which one will be a better fit. (Those numbers are pretty arbitrary - you may be looking for several testers, for example - but that's the general idea. Whittle them down over the course of a day.)
Of course, I'm just thinking out loud. Again.
January 24, 2012
Jason's Handy Guide To Evaluating Software Packages
I get asked this question a lot, but it never occurred to me to write down my usual answer.
How do we evaluate shrink-wrapped software against our needs?
Well, that's easy. You still need to do the usual business requirements analysis. Identify who will be using this system, and what their goals will be for using it. In the good old days, we called these "Use Cases". Yep, even if you're buying and not building the software, you still need use cases.
The next step is to flesh out the design of your use cases, as we might normally do, by describing how the user interacts with the software to achieve their goal.
When we're describing software we haven't built yet, this is design. When we're describing how we'll use software that already exists, this is a process of validation. Can the user achieve their goal using the software we're evaluating?
Even with the most feature-rich packages, we tend to find we don't get an exact match. It's not always possible to achieve every user goal using the software. So as we validate the software against our use cases, we may identify gaps. There are almost always gaps.
The next question we need to answer is: can we fill those gaps? Let's say we're evaluating Microsoft PowerPoint for our training business. It doesn't do everything we need out of the box. Let's pretend we have a use case where the trainer needs to populate a slide with an organisation chart showing the reporting structure of the group attending the course. She has a spreadsheet with those names listed in alphabetical order and with information about who reports to whom. Using PowerPoint's built-in scripting language, Visual Basic for Applications (VBA), it is indeed possible to take that information and automatically generate an Org Chart.
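As a very rough sketch of what that programming work could look like (I've used Python with the python-pptx library here purely for illustration, rather than the VBA route just mentioned; the names and layout are invented), generating even a crude chart is straightforward:

```python
# Illustrative sketch using the python-pptx library: draw a very simple org
# chart - one manager and their direct reports - on a blank slide. In the
# scenario above, the names would come from the trainer's spreadsheet.
from pptx import Presentation
from pptx.util import Inches
from pptx.enum.shapes import MSO_SHAPE, MSO_CONNECTOR

manager = "Alice"
reports = ["Bob", "Carol", "Dan"]   # invented example data

prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[6])   # blank layout

def add_box(left, top, name):
    box = slide.shapes.add_shape(MSO_SHAPE.ROUNDED_RECTANGLE,
                                 left, top, Inches(1.8), Inches(0.6))
    box.text_frame.text = name
    return box

add_box(Inches(4.0), Inches(0.5), manager)
for i, name in enumerate(reports):
    add_box(Inches(1.0 + i * 2.5), Inches(2.5), name)
    # connector from the bottom of the manager's box to the top of the report's
    slide.shapes.add_connector(MSO_CONNECTOR.STRAIGHT,
                               Inches(4.9), Inches(1.1),
                               Inches(1.9 + i * 2.5), Inches(2.5))

prs.save("org_chart.pptx")
```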
So that gap could be plugged, with some work. Write a reminder about it on a blank index card. This is now a potential "User Story" for some programming work that would need to be done if we went the PowerPoint route.
Of course, people identify gaps in software all the time, and it's possible that someone somewhere has already found a solution to plugging some of your gaps with handy tools and utilities. Google is your friend here: search for solutions before you think about reinventing the wheel. If you find one, and there's money involved, write down roughly how much on the index card.
Finally, don't forget the non-functional requirements. A package may offer the right features, but it may not be able to handle a high enough volume of users, or it may not be secure enough for your purposes, or it may take a long time for users to learn. Evaluate the software against these criteria, too. Be as explicit as you can. Handwavy requirements like "it must be scalable" aren't very helpful for validating software. What do you mean by "scalable" - a certain number of users at any one time, or a certain number of transactions per second, or the ability to run it on more servers?
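One way to banish the handwaving is to turn the criterion into something you can actually run. For example (a sketch using the Locust load-testing tool; the host, endpoints and numbers are all invented), "scalable" might become "200 concurrent users browsing and searching", with the reported response times then compared against whatever threshold you've agreed:

```python
# Illustrative load test using Locust. Endpoints and host are invented;
# run with something like:
#   locust -f loadtest.py --host https://candidate-package.example.com \
#          --users 200 --spawn-rate 20
from locust import HttpUser, task, between

class EvaluationUser(HttpUser):
    wait_time = between(1, 3)   # seconds between actions, like a real user

    @task
    def browse_catalogue(self):
        self.client.get("/catalogue")

    @task
    def search(self):
        self.client.get("/search", params={"q": "org chart"})
```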
All too often, businesses buy a solution and then validate that it does what they need - often by actually trying to roll it out. Whether buying or building, the key is to have clear, testable requirements and to validate the software against them. Don't be seduced by the vendor's sales patter, and don't let them lead you, like a donkey to the slaughter, to their feature list. What their software does is far less important than what we can do with their software.
July 15, 2011
Heath Robinson User-Created Features
Just a quick example of a Google+ user (Rich Kiker) tweeting about how to mimic a wanted feature from Twitter using another feature of Google+:
"Want to save favorites in #GPlus? Create an empty circle and share fav posts with that circle. All favs stored. Done."
This is actually very common. Users often find ingenious ways to shoehorn in features they find useful that we didn't think of.
In design terms, this is like those paths we find in parks that nobody deliberately built, but users have worn by repeatedly taking shortcuts, going the way they wanted to go, not necessarily the way we planned that they should.
The smart park ranger takes the hint and puts a proper path there. The unenlightened park ranger will put up a sign saying "keep off the grass".