September 16, 2012
Are Woolly Definitions Of "Success" At The Heart Of Software Development's Thrall To Untested Ideas?
In the ongoing debate about what works and what doesn't in software development, we need to be especially careful to define what we mean by "it worked".
In my Back To Basics paper, I made the point that teams need to have a clear, shared and testable understanding of what is to be achieved.
Without this, we're a ship on a course to who-knows-where, and I've observed all manner of ills stemming from this.
Firstly, when we don't know where we're supposed to be headed, steering becomes a fruitless exercise.
It also becomes nigh-on impossible to gauge progress in any meaningful way. It's like trying to score an archery contest with an invisible target.
To add to our worries, teams that lack clear goals have a tendency to eat themselves from the inside. We programmers will happily invent our own goals and pursue our own agendas in the absence of a clear vision of what we're all meant to be aiming for.
This can lead to excess internal conflict as team members vie to stamp their own vision on a product or project. Hence an HR system can turn into a project to implement an "Enterprise Service Bus" or to "adopt Agile".
Since nobody can articulate what the real goals are, any goal becomes more justifiable, and success becomes much easier to claim. I've met a lot of teams who rated their product or project as a "big success", much to the bemusement of the end users, project sponsors and other stakeholders, who often take a very different view.
There are times when we can display all the misplaced confidence and self-delusion of an X Factor contestant who genuinely seems to have no idea that they're singing out of tune and dancing like their Dad at a wedding.
Much of the wisdom we find on software development comes from people, and teams, who are basing their insights on a self-endowed sense of success. "We did X and we succeeded, therefore it is good to X" sort of thing.
Here's my beef with that: first off, it's bad science.
It's bad science for three reasons: first, one data point doesn't make a trend; second, perhaps you have incorrectly attributed your success to X rather than to one of the myriad other factors in software development; and third, can we really be sure that you genuinely succeeded?
If I claim that rubbing frogspawn into your eyes cures blindness, we can test that by rubbing frogspawn into the eyes of blind people and then measuring the acuity of their eyesight afterwards.
If, on the other hand, I claim that rubbing frogspawn into your eyes is "a good thing to do", and that after I rubbed frogspawn into my eyes, I got "better" - well, how can we test that? What is "better"? Maybe I rubbed frogspawn into my eyes and my vocabulary improved.
My sense is that a worrying proportion of what we read and hear about "things that are good to do" in software development is based on little more than "how good (or how right) it felt" to do them. Who knows; maybe rubbing fresh frogspawn in your eyes feels great. But that has little bearing on its efficacy as a treatment.
Without clear goals, it's not easy to objectively determine if what we're doing is working, and this - I suspect - is the underlying reason why so much of what we know, or we think we know, about software development is so darned subjective.
Teams who've claimed to me that they're "winning" (perhaps because of all the tiger blood) have turned out to be so wide of the mark that, in reality, the exact opposite was true. These days, when I hear proclamations of great success, it's usually a precursor to the whole project getting canned.
The irony is that those few teams who knew exactly what they were aiming for often measure themselves more brutally against their goals, and are more pessimistic, despite, in real terms, doing far more "winning" than the teams prematurely embarking on their victory laps.
This, I suspect, has also contributed to the dominance of subjective ideas in software development. Ideas backed up by objective successes seem to be expressed more tentatively and with more caveats than ideas backed up by little more than feelgood and tiger blood, which are expressed more confidently and in more absolute terms.
The naked ape in all of us seems to respond more favourably to people who present their ideas with confidence and a greater sense of authority. In reality, many of these ideas have never really been put to the test.
Once an idea's gained traction, there can be benefits within the software development community to being its originator or a perceived expert in it. Quickly, vested interests build up and the prospect of having their ideas thoroughly tested and potentially debunked becomes very unattractive. The more popular the idea, and the deeper the vested interests, the more resistance to testing it. We do not question whether a burning bush really could talk when we're in the middle of a fundraising drive for the church roof...
It's saddening to see, then, that in the typical lifecycle of an idea, publicising it often precedes testing it. More fool us, though. We probably need to be much more skeptical and demanding of hard evidence to back these ideas up.
Will that happen? I'd like to think it could, but the pessimist in me wonders if we'll always opt for the shiny-and-new and leave our skeptical hats at home when sexy new ideas - with sexy new acronyms - come along.
But a good start would be to make the edges of our definition of "success" crisper and less forgiving.
September 13, 2012
We Can Learn A Lot About Collaborative Design From Aardman
Scientists have learned a great deal about humans by studying other animals and looking for similar attributes (and differences) that mark out what it means to be "human".
In particular, we've learned an enormous amount from studying our closest cousins, the Great Apes.
I've been pondering what software development's closest cousins might be, and what we could learn from them.
While watching Aardman's The Pirates In An Adventure With Scientists, it suddenly struck me that perhaps the endeavour that most closely resembles software development is animation.
We face strikingly similar problems to animators.
Firstly, we're both trying to tell compelling stories. Software, when it's done well, has a clear narrative, just like an animated movie. This narrative can be expressed in many ways, and - just as it is with animation - the process of producing working software can be thought of as telling and re-telling the story, adding more detail and refining it until the story's told in executable code.
The second similarity is that we both have to overcome the extreme difficulty of taking care of millions of tiny details without losing sight of the big picture.
Programming is inherently fiddly; far too fiddly for most people to be bothered with. What other kind of person would devote the lion's share of their lives to the kind of minutiae we do? Well, animators for one.
A single animation unit working on a film like "Pirates" might produce 4 seconds of usable action in a week. Each second of film is made up of 24 frames, each of which has to be painstakingly manipulated, with dozens of details to keep track of as they change from frame to frame.
And yet, working one frame at a time, tracking myriad interconnected elements, Aardman are able to produce something miraculous; something that many live action films fail to capture - comic timing.
Fight scenes, chase scenes, comedy - all of this is hard enough to get right shooting at 24 frames a second. To execute it so perfectly working one individual frame at a time requires something that, sadly, too many software teams lack - a clear vision.
The split-second timing and the exquisite dynamics of an Aardman animation are no accident. The mechanics of the overall narrative, every scene and every shot are carefully choreographed with storyboards, animatics (more animation) and with people performing the action to match the voice recordings of the actors, so that the animators can see how it should look and work towards realising that vision.
And with as many as 40 units working on different shots at any given time, this vision not only needs to be clear but it also must be a shared vision.
The rules that apply to each character - including non-living characters like the ocean and the wind - have to be clearly established so that no matter which team is animating those characters, they behave in a way that's consistent to their character. It would do little for the movie if the Pirate Captain inexplicably moved and behaved in 40 different ways through the movie, depending on who was animating him.
The objects in our software - however you choose to interpret that word - are the characters in our stories. As the design evolves and grows, it's extremely important to maintain a clear shared vision of those objects and how they behave, as well as the narratives in which those objects play a part.
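To put the analogy into code terms: one way to establish a character's "rules" in a single place is to capture them behind an interface, so every part of the system that uses that object gets the same behaviour. A tiny, entirely invented sketch:

```java
// The "character sheet": the Pirate Captain's rules, defined once,
// no matter which team is "animating" him.
interface StoryCharacter {
    String catchphrase();
}

class PirateCaptain implements StoryCharacter {
    @Override
    public String catchphrase() {
        // However many scenes use him, he stays in character.
        return "Arrr! Adventure awaits!";
    }
}
```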
Watching "Pirates", something else jumps out at me; the extraordinary consistency of quality. Aardman have very high standards, and these standards seem to have been applied across the board.
I don't doubt that there were animators working on that film with less experience than some of the others. I don't doubt that some animators were probably learning this craft on the job. Where else do they get their great animators from? That scope and quality is not evident in art and film schools. I suspect you can only really learn to make films of Aardman quality working for someone like Aardman.
But there's not a scrap of evidence for less experienced animators in the movie. Every scene and every shot is sublime. If someone was screwing up, then it must have ended up on the cutting room floor or at the back of shot where nobody noticed.
The greatest animators are masters of collaborative design. I believe there's much we could learn from companies like Aardman about telling compelling stories, about establishing a clear shared vision, about getting the tiniest details right while not losing sight of our "comic timing", and about committing to consistently high standards of quality.
August 17, 2012
Software Apprenticeships Summit, Sept 20th
On Sept 20th I'll be chairing a summit for people interested in long-term mentoring of aspiring software developers.
I'll explain a bit of the background. For the last year, I've been looking into this whole question of apprenticeships for software developers, talking to employers, universities and professional bodies who might be interested in getting involved. And guess what? They aren't.
With very few exceptions, it seems, the traditional alliance between employers, higher education and professional institutions hasn't got legs when we're talking about real and genuinely meaningful apprenticeships for developers.
This leaves two main groups still in the game. There are young people out there who are interested in learning how to be software developers and who've contacted me asking about apprenticeships. And there are practitioners who've expressed willingness to take on apprentices in some form.
The good news is that, in theory, that's all we need to get started.
I plan to take on two apprentices in the next year. Alas, I'm not in a position to offer them employment. Doubtless, many of us won't be. But what I am able to offer is ongoing guidance and mentoring, as well as opportunities that they might not otherwise have found.
As a mentor, I'll enter into a contract with my apprentices that stipulates a roadmap for what I want them to learn and to do, and will work with them on a regular basis - e.g., a couple of hours a week - to offer guidance and to pair program with them.
Once a year - probably during summer recess - I'll ask my apprentices to undertake a significant challenge. They'll be tasked with creating working software of the order of a dozen or so use cases for some good cause. I'll be acting as the "customer" and monitoring their progress, keeping an eye on the quality of the software they create.
Year on year, the challenges will get more sophisticated and the quality bar will be set higher. My aim is that after a few years, the projects will be not just like real-world software development, but a whole heap better than that. Being fiendish, I plan to make them build on the code they wrote in the previous year, and improve it year on year. Yes, that much better!
Outside of development skills, I'll also be helping them out by paying for them to attend a couple of conferences each year, so they can meet real developers and see what the zeitgeist is like.
I'll be asking them to blog throughout, and eventually to teach and mentor other developers, as I feel that can be a hugely valuable experience.
And, if they do well, I'll be promoting them as professionals as they become fully rounded developers. My hope is that when they apply for their first development job, they'll not just have solid development skills, people skills and experience of writing software under similar constraints to industry, but they'll be known quantities in our community, with a body of work people can look at, blogs, talks at conferences and other public-facing stuff people can judge them on. And judge me on, as their mentor.
Perhaps in 5-6 years' time, Codemanship might be in a position to take them on full-time. But that is not the be-all and end-all. I'm fully prepared for the fact that this will cost me time and money and that I personally won't gain (in those terms) from doing it.
For those among you who feel that anyone who does all this and gets nothing in return is a fool, I'd like to introduce you to this thing called society. Software development as a whole could benefit, and that's plenty benefit for me. I'll also get a kick out of doing it. I'm funny like that.
What I'd really like is to see a bunch of us take on apprentices, and then we can share this experience and amplify the benefits. If we can agree on a basic foundation - a shared vision of what we think it means to be a software developer that any apprentice mentored by one of us would have to achieve - and co-ordinate and collaborate, I believe a lot more could be achieved at a national and maybe even an international level.
So I'm organising this little get-together at Bletchley Park on Sept 20th to set out my stall, so to speak, and explain what I'm going to be doing, and then no doubt have a lively discussion with others like me, kicking ideas around in an informal setting, to see if we can begin to point ourselves in roughly the same direction.
My proposal is that we form a loose alliance beneath a recognisable banner - e.g., a guild, or an institute, or something else that wouldn't look out of place on an apprentice's CV - establish a foundation for skills and knowledge (without smearing marketing hype all over it, I hope) and also decide where/how we set the bar for mentors. Because not every developer's necessarily going to be a great role model, let's face it.
This alliance might do little more than promote a shared vision, act as a gatekeeper to filter out ne'er-do-wells, and maybe organise a conference where applicants can meet mentors once a year (in the spring?), and possibly even graduation challenges where apprentices prove their mettle on a bigger project.
Strength in numbers, basically.
If you're thinking of mentoring a software developer, and would like to talk with others like you, I really hope you can join us on Sept 20th.
August 14, 2012
I Was Worried About Apprenticeships. Now I'm Resolved.
I'm worried.
No, not about whether New Girl will get another series.
I'm worried about apprenticeships. Apprenticeships for software developers, specifically.
Over the last year my focus has been shifting inexorably towards apprenticeships. Whichever angle I approach it from, I seem to always arrive at apprenticeships as the best potential answer to the question "where will the next generation of great software developers come from?"
I've been talking to employers, to aspiring apprentices (one of whom I have decided to take on as my own "apprentice" when he begins his degree studies, and I'm looking for one more, if anyone out there's interested), and to various august institutions of learning and professionalism, and I've had a whole bunch of drunken conversations with my fellow practitioners.
And what I'm hearing - the general themes that are emerging - worry me.
Theme Number 1 - sing along if you know the words - has emerged from employers. We are at odds. Most practitioners I give any credence to believe that an apprenticeship of 5-7 years might be sufficient time to "grow" a proper software developer. Most employers don't think beyond 3 years. If companies were to take on apprentices, I fear they would be looking to "speed up" this process, taking many shortcuts and ultimately lowering the bar. The evidence corroborates this. I've seen a lot of companies offering inadequately short apprenticeships of a few months, maybe a year. The longest I've seen is 18 months.
Theme Number 2 is differing expectations about where the bar should be set. I, as you probably know, have little interest in cultivating anything short of excellence. Maybe if you're hiring developers out by the hour to clients who can't tell the difference, then mediocrity is worth money to you, but I've worked with those teams and I would be vehemently opposed to real apprenticeships becoming part of that scam. Let's start as we mean to go on, shall we: honestly and with noble intentions. But how many employers have such high standards? Indeed, how often have you worked in a place that rewarded striving for excellence over vulgar political pragmatism? I am not interested in apprenticeships for greasy pole climbers. They belong in business schools.
Theme Number 3 is most troubling of all. Institutions that could support and co-ordinate apprenticeship schemes at national and international levels have given me strong hints that they're viewing apprenticeships as a source of income or as a source of greater influence. None seem all that interested in the apprentices themselves. My fear is that, seeking the largest audience possible, apprenticeships under their governance might be designed to fit the lowest common denominator.
Theme Number 4 is a common sentiment among those who fear they might lose out to apprenticeships; in particular, institutions of higher education. I make no bones about it - education for software developers isn't working. Kids are spending 3-4 years studying computing or software engineering and emerging blinking into the harsh light of the real world effectively still at square one as software developers.
This could be because universities are preparing them for a world that doesn't exist; a world where we generate code from UML models and use mathematical proofs to test our shopping cart code. Most computing graduates have never written a unit test. Most computing graduates have never refactored legacy code. Most computing graduates have never worked alongside others on a shared code base at the same time.
I've spent 12 years trying to collaborate with universities on developing courses that offer real hands-on experience of actual software development - and not just a week's worth - and it always falls down at the same hurdle. Universities teach what they teach because that's what their teachers know how to teach. Inevitably our partnerships evolve from enthusiastic lunches with department heads who are "100% with me all the way" to frustrating meetings with senior lecturers with beards and sandals who 100% insist that the course must contain a module on Z and on compiler design.
The fact is that more than half of computer science graduates who work in software took a CS degree because they wanted to work in software. Nobody's denying that the theory's useful. And nobody's suggesting that they don't teach them the theory. But the brick wall I hit time and again is this insistence that the real world has got it wrong, and students have nothing to learn from those of us who debase ourselves by working in it. And so theory's all most computing graduates get. And, especially in software engineering, a lot of that theory is demonstrably wrong.
My ultimate goal is that apprenticeships should work. And they'll have to work in the real world, where employers are short-termists, where excellence isn't valued, where companies and institutions have their own agendas, and where the academic institutions won't help you because they can't.
In my mind, that just leaves you and me.
An employer's unlikely to commit to 5+ years during which time a considerable amount of learning's going on (though, it's going on all the time under their noses no matter how experienced their developers are - but don't tell them, or they'll assume you're not busy enough and relieve you of some of that "slack"). But I can. I know I can (sudden death or unexpected eloping to Fiji with Julia Sawalha permitting.)
What I can't do is pay someone for 5+ years.
And so we come to the compromise where our plucky apprentice gets to have her cake and eat it.
The world carries on as normal. Kids seeking careers as great software developers take their A-Levels, apply to university and do their computing degrees along with all the consultancy fodder. They study hard. They graduate. They apply for jobs as software developers. Just like they were probably going to do anyway.
The change I'm proposing happens alongside all of that. A person with considerable proven knowledge and experience working as a software developer takes them on as an apprentice.
They spend time with their apprentice every week (maybe a few hours on Skype of a weekend, maybe a face-to-face) guiding them, mentoring them and helping them to develop as fully-rounded software developers. This commitment - this bond - between the apprentice and their mentor (let's not call them "masters", eh?) will endure for years. Certainly well into the apprentice's career, with the amount of guidance needed gradually diminishing until this becomes a relationship of equals.
As well as guiding them to become better developers, we would also nurture them as professionals - gradually introducing them into the software development community and encouraging them to actively engage with their peers and do more than just write code for money.
Eventually, I would hope these apprentices will become mentors themselves, and perpetuate the relationship from one generation to the next. And, as mentors, we would be as much defined by the achievements and the conduct of our apprentices as we are by our own.
To avoid saddling them with an experience that means little to the rest of the world, I'd also seek to engage with other mentors and apprentices to build a consensus that means that my apprentice can command the same respect for her achievements from another mentor as that mentor might give to their own apprentices. Yes, I'm afraid this is going to mean that we'll need to agree on some things. That, in itself, will make for an interesting experiment.
So this is the end of my journey of research on apprenticeships, and the beginning of my journey doing it for real.
August 7, 2012
Back To Basics as a PDF
You can now download all of the Back To Basics hype-free software development principles in a handy PDF for the e-readers and tree-murderers among you.
It's a staggering 11,300 words - roughly 20% of your average technical book - written in just one weekend, so please forgive any silliness and imperfections. Hopefully you'll get the gist of it all the same.
August 6, 2012
Back To Basics - Hype-free Principles For Software Developers
If you're a regular reader of my blog (hello, Mum!), then you may have heard me prevaricating before about the need to find a way to impart real insights on apprentices, and how the marketing jargon, buzzwords, brand names and voodoo tend to get in the way of that.
I'm about to embark on a journey that will involve taking on two such apprentices. And so, in preparation, I'm trying to organise my thoughts on exactly what kinds of insights I think will be most important.
To that end, I've spewed out these ten (yes, count them - eleven) basic, hype-free principles for you to read using your eyes.
#1 - Software Should Have Testable Goals
#2 - Close Customer Involvement Is Key
#3 - Software Development Is A Learning Process
#4 - Do The Important Stuff First
#5 - Communicating Is The Principal Activity
#6 - Prevention Is (Usually) Cheaper Than Cure
#7 - Software That Can't Be Put To Use Has No Value
#8 - Interfaces Are For Communicating
#9 - Automate The Donkey Work
#10 - Grow Complex Software Using The Simplest Parts
#11 - To Learn, We Must Be Open To Change
So there you are. A decent stab, I hope, at ten* basic principles for software developers that avoids buying in to any hype.
Back To Basics #11 - To Learn, We Must Be Open To Change
This is the eleventh of ten posts covering basic hype-free principles for software developers. Yes, I said "eleventh of ten".
If there's one thing we can be certain of in this crazy, mixed up world, it's that we can be certain of nothing.
In previous posts, I've alluded often to change, and how important it is in software development.
This final post - putting aside my feeble joke - seeks to elevate change to a first-class concern of successful software development. It deserves its own principle.
As software development is a learning process, and since we learn by incorporating feedback in an iterative sort of fashion, it stands to reason that our software must remain open to the necessary changes this feedback demands.
If we're not able to accommodate change, then we're unable to learn, and therefore less likely to succeed at solving the customer's problems.
But, although we call it "SOFTware" (and, admittedly, it is easier to change than things made out of, say, concrete), changes to software don't come free of charge.
In fact, changing software can be quite expensive. More expensive than writing it in the first place, if we're not careful.
What happens when software is too expensive to change? Well, what happens when anything becomes too expensive? That's right - nobody's willing to pay for it. Except fools and madmen, of course.
Software becomes too expensive to change when the cost of changing it outweighs the benefits of making those changes.
A surprisingly large number of software products out there have reached this sorry point. All over the world, there are businesses who rely on software they can't afford to change, and therefore can't change the way their business works.
When a business can't change the way they work, they struggle to adapt to changing circumstances, and become less competitive.
The same goes for the software we use in our daily lives. We may see many improvements that could be made that would add a lot of value, and we may have come to rely on the version of the software we're using. But if the people who make that software are unable to incorporate our feedback, we end up stuck with a duff product, and they end up stuck with a duff business.
Meanwhile, competitors can take those same lessons and come up with a much better product. There are thousands of new businesses out there ready, willing and able to learn from your mistakes.
To accommodate change in our software, we need to minimise those factors that can be barriers to change.
Some of these factors have already been touched upon in previous principles. For example, if we strive to keep our software simple and readable, that can make a big difference. It will make our code easier to understand, and understanding code makes up the lion's share of the work in changing it, as studies have revealed.
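By way of a tiny, entirely made-up illustration of how much reader effort good names can save:

```java
class Pricing {
    // Cryptic: the reader has to reverse-engineer the intent.
    double calc(double a, double b) {
        return a + a * b;
    }

    // The same logic, named after what it means - cheaper to
    // understand, and therefore cheaper to change.
    double grossPrice(double netPrice, double taxRate) {
        return netPrice + netPrice * taxRate;
    }
}
```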
If we automate our repeated tests, this can also make a big difference. One of the risks of making a change to a piece of working software is that we might break it. The earlier we can find out if we've broken the software, the cheaper it might be to fix it.
Automating builds and release/deployment of software can also help us to accommodate change. Teams that frequently integrate their individual work find that they minimise the impact of integration problems. And teams that can quickly and cheaply release or deploy their software (and safely undo that deployment if something goes wrong) are in a much better position to release software updates more frequently, so they can learn more and learn faster.
There are other important factors in our ability to accomodate change, but I'm going to end by considering two more.
As well as making our code simple and easy to understand, we also need to be vigilant for duplication and dependencies in our code.
Duplicate code has a nasty tendency to duplicate the effort required to make changes to the common logic in that duplicated code. We also risk duplicating errors in the code.
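A small, invented sketch of the problem and the usual cure - extracting the common logic so it lives in exactly one place:

```java
class Before {
    // The discount rule is written out twice. Change one and forget
    // the other, and two parts of the system now disagree.
    double invoiceTotal(double amount) {
        return amount > 100 ? amount * 0.9 : amount;
    }
    double quoteTotal(double amount) {
        return amount > 100 ? amount * 0.9 : amount;
    }
}

class After {
    // One rule, one place to change it, one place to test it.
    private double applyDiscount(double amount) {
        return amount > 100 ? amount * 0.9 : amount;
    }
    double invoiceTotal(double amount) { return applyDiscount(amount); }
    double quoteTotal(double amount) { return applyDiscount(amount); }
}
```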
We must also be careful to minimise the "ripple effect" when we make changes in our software. Ask any experienced developer, and they'll be able to tell you about times when they made what they thought would be a tiny change to one part of the software, but found that small change broke several other parts that were depending on it. And when they fixed those dependent parts, they broke even more parts of the software that were in turn depending on them. And so on.
When the dependencies in our software aren't carefully managed, we risk the equivalent of "forest fires" spreading throughout it. A seemingly small change can end up turning into a major piece of work, costing far more than that change is worth to our customer.
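One common defence against the ripple effect is to make code depend on a small, stable abstraction rather than on the details of another part. A minimal sketch, with all the names invented:

```java
// The report depends only on this small, stable seam...
interface SalesFigures {
    double totalFor(String month);
}

class MonthlyReport {
    private final SalesFigures figures;

    MonthlyReport(SalesFigures figures) {
        this.figures = figures;
    }

    String summary(String month) {
        return month + ": " + figures.totalFor(month);
    }
}

// ...so this implementation can change - or be replaced entirely -
// without the change rippling into MonthlyReport.
class DatabaseSalesFigures implements SalesFigures {
    @Override
    public double totalFor(String month) {
        // ...query the database here...
        return 0.0; // placeholder for the sketch
    }
}
```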
Finally, in order to accommodate change, we must be open to change. The way we work, the way we communicate with each other, the way we plan what we do, all has to make change - and therefore learning - easier.
Too many professional software developers have a fear of change, and too many teams organise themselves around the principle of avoiding it if they can.
For example, many teams do everything humanly possible to avoid customer or end user feedback. They can become very defensive when someone points out a flaw in their software or makes suggestions for improvements. This is often because they fear they cannot act on that feedback, so they employ the coping mechanism of hiding, or getting angry with the person offering the feedback.
Many teams employ complex, bureaucratic procedures for "change management" (which is software-speak for "discouraging change") which can only have been designed to put customers off asking for new things.
The language of software development has evolved to be anti-change: commonly used terms like "code freeze" and "scope creep" are aimed at encouraging a culture where change is bad, and no change is good.
When we approach software development as a learning process, and accept that much of the real value in what we create will come from feedback and not from what we originally planned, then we must not just tolerate or allow change, but actively embrace it.
Back To Basics #10 - Grow Complex Software Using The Simplest Parts
This is the last of ten posts aiming to set out basic principles for software developers in a way that avoids pandering to trends and marketing hype, so that developers and aspiring developers can hopefully gain some useful insights that might last their careers. Well, maybe.
One thing I learned years ago is that when life is simpler, I tend to get more done. Other people make themselves busy, filling up their diaries, filling out forms, taking on more and more responsibilities and generally cluttering up their days.
Like a lot of software developers, I'm inherently lazy. So when I need to get something done, my first thought is usually "what's the least I can do to achieve this?" (My second thought, of course, is "what time does the pub open?")
Somehow, though, I do manage to get things done. And, after examining why someone as lazy as me manages to achieve anything, I've realised that it's because I'm an ideal combination of lazy and focused. I tend to know exactly what it is I'm setting out to achieve, and I have a knack for finding the lowest energy route to getting there.
When life gets more complicated, we not only open ourselves up to a lot of unnecessary effort, but we also end up in a situation where there are a lot more things that can go wrong.
I measure the complexity of my life in terms of keys. In my twenties, I had lots of keys: house keys, car keys, keys for the garage, keys for the windows, spare keys for the office if I had to let myself in out of hours, and so on. And lots of spare keys in case I lost any of those keys.
Today, after recently selling my old VW, I have a key. Just the one. I am living a one-key life. (And not a wonkey life. Well, not in my opinion, at least.)
Although I'm lazy, I actually have to work quite hard to keep my life simple. But it's worth it. Keeping things simple and uncluttered leaves much more room to actually do things. In particular, it leaves time to seize opportunities and deal with problems that suddenly come up.
Keeping things simple reduces the risk of disasters, and increases my capacity to adapt to changing circumstances. I've got time to learn and adapt. Busy people don't.
And waddayaknow? It turns out that much of the joy and fulfilment that life has to offer comes through learning and adapting, not through doggedly sticking to plans.
Software is similar. When we make our programs more complicated than they need to be, we increase the risk of the program being wrong - simply because there's more that can go wrong.
And the more complex a program is, the harder it is to understand, and therefore the harder it can be to change without breaking it. Teams who overcomplicate their software can often be so busy fixing bugs and wrestling to get their heads around the code that they have little time for adding new features and adapting the software to changing circumstances.
When we write code, we need to be lazy and focused. We need to work hard at writing the simplest code possible that will satisfy the customer's requirements.
And hard work it is. Simplicity doesn't come easy. We need to be constantly vigilant to unnecessary complexity, always asking ourselves "what's the least we can do here?"
And we need to be continually reshaping and "cleaning" the code to maintain that simplicity. Uncluttered code will no more stay magically uncluttered as it grows than an uncluttered house will magically stay uncluttered with no tidying.
But doesn't software necessarily get complicated? Is it possible to write a "simple" Computer-Aided Design program, or a "simple" Digital Audio Workstation, or a "simple" Nuclear Missile Defence System?
While we must strive for the simplest software, many problems are just darn complicated. There's no avoiding it.
Cities are also necessarily very complicated. But my house isn't. I don't need to understand how a city works to deal with living in my house. I just need to know how my house works and how it interacts with the parts of the city it's connected to (through the street, through the sewers, through the fibre-optic cable that brings the Interweb and TV and telephone, etc.)
Cities are inescapably complex - beyond the capability of any person to completely grasp - but living and working in a big city is something millions do every day quite happily. We can build fantastically complicated cities out of amazingly simple parts.
The overall design of a city emerges through the interactions of all the different parts. We cannot hope to plan how a city grows in detail at the level of the entire city. It simply won't fit inside our heads.
But we can apply some simple organising principles to the parts - houses, streets, communities, roads, waterways, power supplies and all the rest - and in particular to how the parts interact, so that what emerges is a working city.
And we can gently influence the overall shape by applying external constraints (e.g., you can't build here, but build affordable houses over there and we'll give you a generous tax break.)
When it comes to organising software in the large, a great deal of the action needs to happen in the small. We can allow complicated software to grow by wiring together lots of very simple pieces, and applying a few basic organising principles to how those individual pieces are designed and how they interact with each other.
We can focus on getting it right at that atomic level of functions, modules and their interactions, working to maintain the ultimate simplicity.
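In code, that can look like lots of tiny, single-purpose parts wired together through small, well-defined seams. A toy sketch using plain Java function composition:

```java
import java.util.List;
import java.util.function.Function;

public class Pipeline {
    public static void main(String[] args) {
        // Three trivially simple parts...
        Function<String, String> trim = String::trim;
        Function<String, String> lowercase = String::toLowerCase;
        Function<String, List<String>> split = s -> List.of(s.split("\\s+"));

        // ...wired together into something more capable. Each part
        // stays simple enough to understand and change in isolation.
        Function<String, List<String>> tokenise =
                trim.andThen(lowercase).andThen(split);

        System.out.println(tokenise.apply("  Grow Complex Software  "));
        // prints [grow, complex, software]
    }
}
```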
And then we can constrain the overall design by applying the customer's tests from the outside. So, regardless of what internal design emerges, as a whole it must do what the customer requires it to do, while remaining simple enough in its component parts to accommodate change.
Back To Basics #9 - Automate The Donkey Work
This is the ninth in a series of ten posts about basic principles for software developers aimed at side-stepping the flim-flam and hifalutin hyperbole so, hopefully, the dog can see the rabbit.
I don't know about you, but I'm not a big fan of mindless, repetitive tasks.
In software development, we find that there are some activities we end up repeating many times.
Take testing, for example. An averagely complicated piece of software might require us to perform thousands of tests to properly ensure that every line of code is doing what it's supposed to. That can spell weeks of clicking the same buttons, typing in the same data etc etc, over and over again.
If we only had to test the software once, then it wouldn't be such a problem. Yeah, it'll be a few dull weeks, but when it's over, the champagne corks are popping.
Chances are, though, that it won't be the only time we need to perform those tests. If we make any changes to the software, there's a real chance that features that we tested once and found to be working might have been broken. So when we make changes after the software's been tested once, it will need testing again. Now we're breaking a real sweat!
Some inexperienced teams (and, of course, those experienced teams who should know better) try to solve this problem by preventing changes after the software's been tested.
This is sheer folly, though. By preventing change, we prevent learning. And when we prevent learning, we usually end up preventing ourselves from solving the customer's problems, since software development is a learning process.
The other major drawback to relying on repeated manual testing is that it can take much longer to find out if a mistake has been made. The longer a mistake goes undetected, the more it costs to fix (by orders of magnitude).
A better solution to repeated testing is to write computer programs that execute those tests for us. These could be programs that click buttons and input data like a user would, or programs that call functions inside the software to check the internal logic is correct or that the communication between different pieces of the software is working as we'd expect.
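For instance, a unit test written with a framework like JUnit calls a function directly and checks the result, and can be re-run in seconds every time the code changes. A minimal sketch - the VatCalculator class here is invented purely for illustration:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class VatCalculatorTest {
    @Test
    public void addsVatAtTwentyPercent() {
        VatCalculator calculator = new VatCalculator(0.20);
        // A test the computer can repeat thousands of times, for free.
        assertEquals(120.0, calculator.grossPrice(100.0), 0.001);
    }
}

// The (made-up) class under test:
class VatCalculator {
    private final double rate;
    VatCalculator(double rate) { this.rate = rate; }
    double grossPrice(double net) { return net + net * rate; }
}
```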
How much testing you should automate depends on a range of factors.
Writing automated test programs that perform user actions tends to be expensive and time-consuming, so you may decide to automate some key user interface tests, and then rely more on automating internal ("unit") tests - which can be cheaper to write and often run much faster - to really put the program through its paces.
If time's tight, you may choose to write more automated tests for parts of the software that present the greatest risk, or have the greatest value to the customer.
Automating tests can require a big investment, but can pay significant dividends throughout the lifetime of the software. Testing that might take days by hand might only take a few minutes if done by a computer program. You could go from testing once every few weeks to testing several times an hour. This can be immensely valuable in a learning process that aims to catch mistakes as early as possible.
Basic Principle #7 states that software that can't be put to use has no value. Here's another obvious truism for you: while software's being tested, we can't be confident that it's fit for use.
Or, to use more colourful language, anyone who releases software before it's been adequately tested is bats**t crazy.
If it takes a long time to test your software, then there'll be long periods when you don't know if the software can be put to use, and if your customer asked you to release it, you'd either have to tell them to wait or you'd release it under protest. (Or just don't tell them it might not work and brace yourself for the fireworks - yep, it happens.)
If we want to put the customer in the driving seat on decisions about when to release the software - and we should - then we need to be able to test the software quickly and cheaply so we can do it very frequently.
Repeating tests isn't the only kind of donkey work we do. Modern software is pretty complicated. Even a "simple" web application can involve multiple parts, written in multiple programming languages, that must be installed in multiple technology environments that each have their own way of doing things.
Imagine, say, a Java web application. To put it into use, we might have to compile a bunch of Java program source files, package up the executable files created by compilation into an archive (like a ZIP file) for deploying to a Java-enabled web server like the Apache Foundation's Tomcat. Along with the machine-ready (well, Java Virtual Machine-ready) executable files, a bunch of other source files need to be deployed, such as HTML templates for web pages, and files that contain important configuration information that the web application needs. It's quite likely that the application will store data in some kind of structured database, too. Making our application ready for use might involve running scripts to set up this database, and if necessary to migrate old data to a new database structure.
This typical set-up would involve a whole sequence of steps when doing it by hand. We'd need to get the latest tested (i.e. working) version of the source files from the team's source code repository. We'd need to compile the code. Then package up all the executable and supporting files and copy them across to the web server (which we might need to stop and restart afterwards.) Then run the database scripts. And then, just to be sure, run some smoke tests - a handful of simple tests just to "kick the tyres", so to speak - to make sure that what we've just deployed actually works.
And if it doesn't work, we need to be able to put everything back just the way it was (and smoke test again to be certain) as quickly as possible.
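Even the smoke tests can be programs. Here's a bare-bones sketch in Java - the URL is hypothetical - that simply checks whether a freshly deployed web application is answering at all:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class SmokeTest {
    public static void main(String[] args) throws Exception {
        // Hypothetical address of the freshly deployed application.
        URL url = new URL("http://localhost:8080/myapp/");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setConnectTimeout(5000);

        int status = connection.getResponseCode();
        if (status == 200) {
            System.out.println("Smoke test passed: application is up.");
        } else {
            System.out.println("Smoke test FAILED: HTTP " + status);
            System.exit(1); // non-zero exit so a build script can react
        }
    }
}
```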
When we're working in teams, with each developer working on different pieces of the software simultaneously, we would also follow a similar procedure (but without releasing the software to the end users) every time we integrated our work into the shared source code repository, so we could be sure that all the individual pieces work correctly together and that any changes we've made haven't inadvertently impacted on changes someone else has been making.
So we could be repeating this sequence of steps many, many times. This is therefore another great candidate for automation. Experienced teams write what we call "build scripts" and "deployment scripts" to do all this laborious and repetitive work for us.
There are many other examples of boring, repetitive and potentially time-consuming tasks that developers should think about automating - like writing programs that automatically generate the repetitive "plumbing" code that we often have to write in many kinds of applications these days (for example, code that reads and writes data to databases can often end up looking pretty similar, and can usually be inferred automatically from the data structures involved).
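To give a flavour of how that plumbing can be inferred, here's a toy sketch that uses Java reflection to derive an SQL INSERT statement from the shape of a class. (Real code generators are far more sophisticated, and the Customer class is invented for the example.)

```java
import java.lang.reflect.Field;
import java.util.StringJoiner;

public class InsertGenerator {
    // Derives "INSERT INTO Customer (name, email) VALUES (?, ?)"
    // from the shape of the class - no hand-written plumbing.
    static String insertStatementFor(Class<?> type) {
        StringJoiner columns = new StringJoiner(", ");
        StringJoiner placeholders = new StringJoiner(", ");
        for (Field field : type.getDeclaredFields()) {
            columns.add(field.getName());
            placeholders.add("?");
        }
        return "INSERT INTO " + type.getSimpleName()
                + " (" + columns + ") VALUES (" + placeholders + ")";
    }

    public static void main(String[] args) {
        System.out.println(insertStatementFor(Customer.class));
    }
}

class Customer {
    String name;
    String email;
}
```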
We need to be vigilant for repetition and duplication in our work as software developers, and shrewdly weigh up the pros and cons of automating the work to save us time and money in the future.
August 5, 2012
Back To Basics #8 - Interfaces Are For Communicating
This is the eighth of ten posts setting out basic principles for software development without all the usual hype and buzz that tends to leave younger developers under the mistaken impression that we've only very recently figured this stuff out.
Basic Principle #5 states that the principal activity in software development is communicating.
The interfaces we design to allow people - and other software - to use our programs fall under that banner, but I feel they're important enough to warrant their own principle.
An interface provides a means for users to communicate with our software, and through our software, with the computer.
There are different kinds of interface.
Most computer users are familiar with Graphical User Interfaces. These present users with friendly and easy-to-understand visual representations of concepts embodied by the software (like "file", "document", "friend" and so on) and ways to perform actions on these objects that have a well-defined meaning (like "file... open", "document... save" and "friend... send message").
Other kinds of interface include command line interfaces, which allow us to invoke actions by typing in commands, web services which make it possible for one program to issue commands to another over the World Wide Web, and application-specific input/output devices like cash registers used by shops and ATMs used by bank customers.
When we view interfaces as "things users communicate with the software through", it can help us to understand what might distinguish a good interface design from a not-so-good one, if we contemplate some basic rules for effective communication.
Interface design is a wide topic, but let's just cover a few key examples to help illustrate the point.
Firstly, effective communication requires that the parties talking to each other both speak the same language. A Graphical User Interface, for example, defines a visual language made of icons/symbols and gestures that need to mean the same thing to the user and the software. What does that picture of a piece of paper with writing on it mean, and what does it mean when I double-click on it?
An important question when designing interfaces is "whose language should we be speaking?" Should the user be required to learn a language in order to use the software? Or should the software speak the user's language?
Ideally, it's the latter, since the whole point of our software is to enable the user to communicate with the computer. So an interface needs to make sense to the user. We need to strive to understand the user's way of looking at the problem and, wherever possible, reflect that understanding back in the design of our interface.
Interfaces that users find easy to understand and use are said to be intuitive.
In reality, some compromise is needed, because it's not really possible yet to construct computer interfaces that behave exactly like the real world. But we can get close enough, usually, and seek to minimise the amount of learning the end users have to do.
Another basic rule is that interfaces need to make it clear what effect a user's actions have had. Expressed in terms of effective communication, interfaces should give the user meaningful feedback on their actions.
It really bugs me, as someone who runs a small business, when I have to deal with people who give misleading feedback or who give no feedback at all when we communicate. I might send someone an important document, and it would be very useful to know that the document's been received and that they're acting on it. Silence is not helpful to me in planning what I should do next. Even less helpful is misleading feedback, like being told "I'll get right on it" when they are, in fact, about to go on holiday for two weeks.
If I delete a file, I like to see that it's been deleted and is no longer in that folder. If I add a friend on a social network, I like to see that they're now in my friends list and that we can see each other's posts and images and wotnot and send private messages. When I don't get this feedback, I worry. I worry my action may not have worked. I worry that the effect it had might be something I didn't intend. Most annoyingly, because I can't see what effect my actions have had, I struggle to learn how to use an interface which is perhaps not entirely intuitive to me.
An interface that gives good immediate feedback is said to be responsive. Value responsive interfaces as much as you value responsive people.
Which leads me on to a third basic rule for interface design. Because it's not always possible to make interfaces completely intuitive, and because the effect of an action is not always clear up front, users are likely to make the occasional boo-boo and do something to their data that they didn't mean to do.
I remember years ago, a team I joined had designed a toolbar for a Windows application where the "Delete" button had a picture of a rabbit on it. Quite naturally, I clicked on the rabbit, thinking "I wonder what this does..."
Oops. Important file gone. In the days before the Recycle Bin, too. The one button they didn't have was the one I really, really needed at that point - Undo!
Interfaces that allow users to undo mistakes are said to be forgiving, and making them so can be of enormous benefit to users.
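Under the hood, "forgiving" often means something like the Command pattern: each user action knows how to reverse itself, and the interface keeps a stack of performed actions. A bare-bones sketch:

```java
import java.util.ArrayDeque;
import java.util.Deque;

interface Command {
    void execute();
    void undo();
}

class CommandHistory {
    private final Deque<Command> done = new ArrayDeque<>();

    void perform(Command command) {
        command.execute();
        done.push(command); // remember it so it can be reversed
    }

    void undoLast() {
        if (!done.isEmpty()) {
            done.pop().undo(); // the user's boo-boo, taken back
        }
    }
}
```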
There will be times, of course, when an action can't be undone. Once an email is sent, it's sent. Once a bank payment is made, it's made. Once you've threatened to blow up an airport on a public forum, and so on and etc.
When actions can't be undone, the kindest thing we can do is warn users before they commit to them.
Another way we can protect users is by presenting them only with valid choices. How annoying is it when an ATM offers to let you withdraw £10, £30 or £50, and when you select one of those options you get a message saying "Only multiples of £20 available"? Like it's your fault, somehow!
Interface design should clearly communicate what users can do, and whenever possible should not give them the opportunity to try to do things that they shouldn't. For example, a file that's in use can't be deleted. So disable that option in the File menu if a file that's in use is selected.
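In a desktop toolkit like Swing, that protection can be as simple as enabling and disabling the option whenever the selection changes. A small sketch - the FileRecord class and its isInUse() check stand in for whatever your application's rules actually are:

```java
import javax.swing.JMenuItem;

class FileMenuController {
    private final JMenuItem deleteItem = new JMenuItem("Delete");

    // Called whenever the selected file changes: the Delete option
    // simply isn't offered for a file that's in use.
    void onSelectionChanged(FileRecord selected) {
        deleteItem.setEnabled(selected != null && !selected.isInUse());
    }
}

class FileRecord {
    boolean inUse;
    boolean isInUse() { return inUse; }
}
```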
Similarly, when users input data, we should protect them from inputting data that would cause problems in the software. If the candidate's email address in a job application is going to be used throughout the application process, it had better be a valid email address. If you let them enter "wibble" in that text box, the process is going to fall over at some point.
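A minimal sketch of that kind of strictness - bearing in mind that properly validating email addresses is notoriously fiddly, so the pattern here is illustrative only:

```java
import java.util.regex.Pattern;

class EmailValidator {
    // Deliberately simple pattern: something@something.something
    private static final Pattern SIMPLE_EMAIL =
            Pattern.compile("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$");

    // looksValid("wibble") is false, so the interface can reject
    // the input before it causes trouble downstream.
    static boolean looksValid(String input) {
        return input != null && SIMPLE_EMAIL.matcher(input).matches();
    }
}
```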
Interfaces that protect the user from performing invalid actions or inputting invalid data are said to be strict. It may sound like a contradiction in terms to suggest that interfaces need to be strict AND forgiving, but it's all a question of context.
If, according to the rules of our software, there's no way of knowing that the user didn't intend to do that, then we need to be forgiving. If the rules say "that's not allowed in these circumstances", then we should be strict.
One final example, going back to this amazingly well-designed GUI with the rabbit Delete button. On the main toolbar, it was a rabbit. But there was also a Delete button on the individual File dialogue, which sported a picture of an exclamation mark. So having figured out once that "rabbit = delete", I had to figure it out again for "exclamation mark = delete". Boo! Hiss! Bad interface doggy - in your basket!
We're bad at this in our industry generally. We tend to have various different terms that all mean the same thing. And it makes learning much harder, and communicating with each other potentially hazardous.
In physics, they resolved to be very, very careful about their use of language when it mattered. To a physicist, terms like "dimension" and "energy" have very precisely defined meanings, so when physicists explain their theories, we're less likely to misinterpret.
They're not quite so strict about their use of language in, say, alternative medicine and New Age philosophy, where terms like "dimension" and "energy" can mean pretty much anything we want them to mean.
I'm sad to report that, in our use of language, software development is as bad as New Age philosophy, with commonly used terms like "Agile" and "test-driven" taking on many different meanings.
Two teams could both be claiming to be "Agile", but working in remarkably different ways. So when a developer learns about "Agile Software Development" working for one company, they may come to realise that what they've been doing is considered not "Agile Software Development" to teams at another company.
Imagine if you learned physics at Cambridge, but when you applied for a job at CERN, they told you "no, that's not physics as we understand it"...
Anyway, moan moan grumble grr etc.
My point is this: in order for us to communicate effectively, we must be not just clear, but also consistent in our use of language. When we're inconsistent (e.g., "rabbit = exclamation mark = delete"), we significantly increase the amount of learning the user has to do.
When designing interfaces, we should also remember Basic Principle #3 - Software Development Is A Learning Process. It's vanishingly rare to find teams who get it right first time. We should iterate our interface designs frequently, seeking meaningful feedback from end users and the customer and allowing the design to evolve to become as intuitive, responsive, forgiving, strict and consistent as it needs to be to allow users to get the best from our software.
There is, as I said, a whole lot more to interface design than this, but hopefully this gives you some flavour. In particular, we need to remember that good interface design is about effective communication.