August 12, 2014

TDD is TDD (And Far From Dead)

Now, it would take enormous hubris for me to even suggest that the blog post that follows is going to settle the "What is TDD?", "Is TDD dead?" and "Did weasels rip my TDD?" debates that have inexplicably sprung up around and about the countryside of late.

But it will. (In my head, at any road.)

First of all, what is TDD? I'm a bit dismayed that this debate is still going on, all these years later. TDD is what it always was, right from the time the phrase appeared:

Test-driven Development = Test-driven Design + Refactoring

Test-driven Design is the practice of designing our software to pass tests. They can be any kind of tests that software can pass: unit tests, integration tests, customer acceptance tests, performance tests, usability tests, code quality tests, donkey jazz hands tests... Any kind of tests at all.

The tests provide us with examples of how the software must be - at runtime, at design time, at tea time, at any time we say - which we generalise with each new test case to evolve a design for software that does a whole bunch of stuff, encompassed by the set of examples (the suite of tests) for that software.

We make no distinction in the name of the practice as to what kind of tests we're aiming to pass. We do not call it something else just because the tests we're driving our design with happen not to be unit tests.

Refactoring is the practice of improving the internal design of our software to make it easier to change. This may mean making the code easier for programmers to understand, or generalising duplicate code into some kind of reusable abstraction like a parameterised method or a new module, or unpicking a mess of dependencies to help localise the impact of making changes.

As we're test-driving our designs, it's vitally important to keep our code clean and maintainable enough to allow us to evolve it going forward to pass new tests. Without refactoring, Test-driven Design quickly becomes hard going, and we lose the ability to adapt to changes and therefore to be agile for our customer.
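To make that concrete, here's a minimal sketch in Java with JUnit. The VAT example, the Pricing class and its method names are all invented for illustration; the point is just that the second test forces the duplicated logic to be generalised into a parameterised method, as described above.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // A minimal, invented example: tests drive the design, refactoring generalises it.
    public class PricingTest {

        @Test
        public void addsStandardRateVatToNetPrice() {
            // The first example drives out a Pricing class with a grossPrice() method.
            assertEquals(120.0, new Pricing().grossPrice(100.0), 0.001);
        }

        @Test
        public void addsReducedRateVatToDomesticFuel() {
            // The second example forces us to generalise: the duplicated
            // "net times rate" logic becomes one parameterised method.
            assertEquals(105.0, new Pricing().grossPrice(100.0, 0.05), 0.001);
        }
    }

    class Pricing {

        private static final double STANDARD_RATE = 0.20;

        public double grossPrice(double netPrice) {
            return grossPrice(netPrice, STANDARD_RATE);
        }

        // The refactored, parameterised method that both tests now exercise.
        public double grossPrice(double netPrice, double vatRate) {
            return netPrice * (1 + vatRate);
        }
    }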

The benefits of TDD are well understood, and backed up by some hard data. Software that is test-driven tends to be more reliable. It tends to be simpler in its design. Teams that practice TDD tend to find it easier to achieve continuous delivery. From a business perspective, this can be very valuable indeed.

Developers who are experienced in TDD know this to be true. Few would wish to go back to the Bad Old Days before they used it.

That's not to say that TDD is the be-all and end-all of software design, or that the benefits it can bring are always sufficient for any kind of software application.

But it is applicable to a very wide range of applications, and as such has become the default approach - a sort of "starter for ten" - for many teams who use it.

It is by no means dead. There are more teams using it today than ever before. And, as a trainer, I know there are many more that aspire to try it. It's a skill that's highly in demand.

Of course, there are teams who don't succeed at learning TDD. Just like there are people who don't succeed at learning to play the trombone. The fact that not everybody succeeds at learning it does not invalidate the practice.

I've trained and coached thousands of developers in TDD, so I feel I have a good overview of how folk get on with it. Many - most, let's be honest - seriously underestimate the learning curve. Like the trombone, it may take quite a while to get a tune out of it. Some teams give up too easily, and then blame the practice. Many thousands of developers are doing it and succeeding with it. I guess TDD just wasn't for you.

So there you have it, in a nutshell: TDD is what it always was. It goes by many names, but they're all pseudonyms for TDD. It's bigger today than it ever was, and it's still growing - even if some teams are now calling it something else.

There. That's settled, then.





July 31, 2014

My Top 5 Most Under-used Dev Practices

So, due to a last-minute change of plans, I have some time today to fill. I thought I'd spend it writing about those software development practices that come highly recommended by some, but - for whatever reason - almost no teams do.

Let's count down.

5. Mutation Testing - TDD advocates like me always extol the benefits of having a comprehensive suite of tests we can run quickly, so we can discover almost immediately if we've broken our code.

Mutation testing is a technique that enables us to ask the critical question: if our code was broken, would our tests show it?

We deliberately introduce a programming error - a "mutation" - into a line of code (e.g., turn a + into a -, or > into a <) and then run our tests. If a test fails, we say our test suite has "killed the mutant". We can be more assured that if that particular line of code had an error, our tests would show it. If no tests fail, that potentially highlights a gap in our test suite that we need to fill.
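Here's a hand-rolled sketch of the idea in Java, with invented code. (Tools such as PIT automate this kind of thing for Java, but the principle is the same.)

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class DiscountTest {

        // The production logic under test, inlined here for illustration.
        // Original line:       return total > 100 ? total * 0.9 : total;
        // Hand-made "mutant":  return total < 100 ? total * 0.9 : total;
        static double discountedTotal(double total) {
            return total > 100 ? total * 0.9 : total;
        }

        // This test kills the > to < mutant: with the mutant in place,
        // discountedTotal(200.0) returns 200.0 and the assertion fails.
        @Test
        public void ordersOverOneHundredGetTenPercentOff() {
            assertEquals(180.0, discountedTotal(200.0), 0.001);
        }

        // If our only test case were discountedTotal(100.0), both the original and
        // the mutant would pass it - a surviving mutant, i.e. a gap in the suite.
    }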

Mutation testing, done well, can bring us to test suites that offer very high assurance - considerably higher than I've seen most teams achieve. And that extra assurance tends to bring us economic benefits in terms of catching more bugs sooner, saving us valuable time later.

So why do so few teams do it? Well, tool support is one issue. The mutation testing tools available today tend to have a significant learning curve. They can be fiddly, and they can throw up false positives, so teams can spend a lot of time chasing ghosts in their test coverage. It takes some getting used to.

In my own experience, though, it's worth working past the pain. The pay-off is often big enough to warrant a learning curve.

So, in summary, reason why nobody does it: LEARNING CURVE.

4. Visualisation - pictures were big in the 90s. Maybe a bit too big. After the excesses of the UML days, when architects roamed the Earth feeding on smaller prey and taking massive steaming dumps on our code, visual modeling has - quite understandably - fallen out of favour. So much so that many teams do almost none at all. "Baby" and "bathwater" spring to mind.

You don't have to use UML, but we find that in collaborative design, which is what we do when we work with customers and work in teams, a picture really does speak a thousand words. I still hold out hope that one day it will be commonplace to see visualisations of software designs, problem domains, user interfaces and all that jazz prominently displayed in the places where development teams work. Today, I mainly just see boards crammed with teeny-weeny itty-bitty index cards and post-it notes, and the occasional wireframe from the UX guy, who more often than not came up with that design without any input at all from the team.

The effect of lack of visualisation on teams can be profound, and is usually manifested in the chaos and confusion of a code base that comprises several architectures and a domain model that duplicates concepts and makes little to no sense. If you say you're doing Domain-driven Design - and many teams do - then where are your shared models?

There's still a lot of mileage in Scott Ambler's "Agile Modeling" book. Building a shared understanding of a complex problem or solution design by sitting around a table and talking, or by staring at a page of code, has proven to be ineffective. Pictures help.

In summary, reason why so few do it: MISPLACED AGILE HUBRIS

3. Model Office - I will often tell people about this mystical practice of creating simulated testing environments for our software that enable us to see how it would perform in real-world scenarios.

NASA's Apollo team definitely understood the benefits of a Model Office. Their lunar module simulator enabled engineers to try out solutions to system failures on the ground before recommending them to the imperilled astronauts on Apollo 13. Tom Hanks was especially grateful, but Bill Paxton went on to star in the Thunderbirds movie, so it wasn't all good.

I first came across the idea while doing a summer stint in the book department of my local W H Smith. Upstairs, they had a couple of fake checkouts and baskets of fake goods with barcodes.

Not only did we train on those simulated checkouts, but the company also used them to analyse system issues and to plan IT changes, as well as to test those changes in a range of "this could actually happen" scenarios.

A Model Office is a potentially very powerful tool for understanding problems, for planning solutions and for testing them - way more meaningful than acceptance tests that were agreed among a bunch of people sitting in a room, many of whom have never even seen the working environment in which the software's going to be used, let alone experienced it for themselves.

There really is no substitute for the real thing; but the real thing comes at a cost, and often the real thing is quite busy, actually, thank you very much. I mean, dontcha just hate it when you're at the supermarket and the checkout person is just learning how it all works while you stand in line? And mistakes that get made get made with real customers and real money.

We can buy ourselves time, control and flexibility by recreating the real thing as faithfully as possible, so we can explore it at our leisure.

Time, because we're under no pressure to return the environment to business use, like we would be if it was a real supermarket checkout, or a real lunar module.

Control, because we can deliberately recreate scenarios - even quite rare and outlandish ones - as often as we like, and make it exactly the same, or vary it, as we wish. One of the key reasons I believe many business systems are not very robust is because they haven't been tested in a wide-enough range of possible circumstances. In real life, we might have to wait weeks for a particular scenario to arise.

Flexibility, because in a simulated environment, we can do stuff that might be difficult or dangerous in the real world. We can try out the most extraordinary situations, we can experiment with solutions when the cost of failure is low, and we can explore the problem and possible solutions in ways we just couldn't or wouldn't dare to if real money, or real lives, or real ponies were at stake.

For this reason, from me, Model Offices come very highly recommended. Which is very probably why nobody uses them.

Reason why nobody does it - NEVER OCCURRED TO THEM

2. Testing by Inspection - This is another of those blind spots teams seem to have about testing. Years of good research have identified reading the code to look for errors as one of the most - if not the most - effective and efficient ways of finding bugs.

Now, a lot of teams do code reviews. It's a ritual humiliation many of us have to go through. But commonly these reviews are about things like coding style, naming conventions, design rules and so forth. It's vanishingly rare to meet a team who gather round a computer, check out some code and ask "okay, will this work?"

Testing by inspection is actually quite a straightforward skill, if we want it to be. A practice like guided inspection, for example, simply requires us to pick some interesting test cases, and step through the code, effectively executing it in our heads, asking questions like "what should be true at this point?" and "when might this line of code not work?"

If we want to, we can formalise that process to a very high degree of rigour. But the general pattern is the same: we make assertions about what should be true at key points during the execution of our code; we read the code and dream up interesting test cases that will cause those parts of the code to be executed; and we ask those questions at the appropriate times. When an inspection throws up interesting test cases that our code doesn't handle, we can codify that knowledge as, say, automated unit tests to ensure that the door is closed to that particular bug permanently.
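To illustrate, here's an invented Java example. The comments are the questions we'd ask during a guided inspection, and the tests underneath are the codified result of one of them.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    class Statistics {

        static double average(int[] values) {
            // Inspection: when might this not work? When values is empty, we'd
            // divide by zero. That question became the unit test below, and this
            // guard clause is the fix that keeps the door closed on that bug.
            if (values.length == 0) {
                throw new IllegalArgumentException("average of no values is undefined");
            }
            int sum = 0;
            for (int v : values) {
                sum += v; // Inspection: could this overflow for very large inputs?
            }
            return (double) sum / values.length;
        }
    }

    public class StatisticsInspectionTest {

        @Test
        public void averagesSeveralValues() {
            assertEquals(2.0, Statistics.average(new int[] {1, 2, 3}), 0.001);
        }

        @Test(expected = IllegalArgumentException.class)
        public void averageOfNoValuesIsRejected() {
            Statistics.average(new int[] {});
        }
    }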

Do not underestimate the power of testing by inspection. It's very rare to find teams producing high-integrity software who don't do it. (And, yes, I'm saying it's very rare to find teams producing high-integrity software.)

But, possibly because of associations with the likes of NASA, and safety-critical software engineering in general, it has a reputation for being "rocket science". It can be, if we choose to go that far. But in most cases, it can be straightforward, utilising things we already know about computer programming. Inspections can be very economical, and can reap considerable rewards. And pretty much anyone who can program can do them. Which is why, of course, almost nobody does.

Reason why nobody does it - NASA-PHOBIA

1. Business Goals - Okay, take a deep breath now. Imminent Rant Alert.

Why do we build software?

There seems to be a disconnect between the motivations of developers and their customers. Customers give us money to build software that hopefully solves their problems. But, let's be honest now, a lot of developers simply could not give two hoots about solving the customer's problems.

Which is why, on the vast majority of software teams, when I ask them what the ultimate business goals of what they're doing are, they just don't know.

Software for the sake of software is where our heads are mostly at. We build software to build software.

Given free rein, what kind of software do developers like to build? Look on Github. What are most personal software projects about?

We don't build software to improve care co-ordination for cancer sufferers. We don't build software to reduce delivery times for bakeries. We don't build software to make it easier to find a hotel room with fast Wi-Fi at 1am in a strange city.

With our own time and resources, when we work on stuff that interests us, we won't solve a problem in the real world. We'll write another Content Management System. Or an MVC framework. Or another testing tool. Or another refactoring plug-in. Or another VCS.

The problems of patients and bakers and weary travelers are of little interest to us, even though - in real life - we can be all of these things ourselves.

So, while we rail at how crappy and poorly thought-out the software we have to use on a daily basis tends to be ("I mean, have they never stayed in a hotel?!"), our lack of interest in understanding and then solving these problems is very much at the root of that.

We can be so busy dreaming up solutions that we fail to see the real problems. The whole way we do development is often a testament to that, when understanding the business problem is an early phase in a project that, really, shouldn't exist until someone's identified the problem and knows at least enough to know it's worth writing some software to address it.

Software projects and products that don't have clearly articulated, testable and realistic goals - beyond the creation of software for its own sake - are almost guaranteed to fail; for the exact same reason that blindly firing arrows in random directions with your eyes closed is almost certainly not going to hit a valuable target. But this is what, in reality, most teams are doing.

We're a solution looking for a problem. Which ultimately makes us a problem. Pretty much anyone worth listening to very, very strongly recommends that software development should have clear and testable business goals. So it goes without saying that almost no teams bother.

Reason why so few teams do it - APATHY





July 16, 2014

What Level Should We Automate Most Of Our Tests At?

So this blog post has been a long time in the making. Well, a long time in the procrastinating, at any rate.

I have several clients who have hit what I call the "front-end automated test wall". This is when teams place greatest emphasis on automating acceptance tests, preferring to verify the logic of their applications at the system level - often exercised through the user interface using tools like Selenium - and rely less (or not at all, in some cases) on unit tests that exercise the code at a more fine-grained level.

What tends to happen when we do this is that we end up with large test suites that require much set-up - authentication, database stuff, stopping and starting servers to reset user sessions and application state, and all the fun stuff that comes with system testing - and run very slowly.

So cumbersome can these test suites become that they slow development down, sometimes to a crawl. If it takes half an hour to regression test your software, that's going to make the going tough for Clean Coders.

The other problem with these high-level tests is that, when they fail, it can take a while to pin down what went wrong and where it went wrong. As a general rule of thumb, it's better to have tests that only have one reason to fail, so when something breaks it's already pretty well pinpointed. Teams who've hit the wall tend to spend a lot of time debugging.

And then there's the modularity/reuse issue: when the tests for a component are captured at a much higher level, it can be tricky to take that chunk and turn it into something reusable. Maybe the risk calculation component of your web application could also be a risk calculation component of a desktop app, or a smartwatch app. Who knows? But when its contracts are defined through layers of other stuff like web pages and wotnot, it can be difficult to spin it out into a product in its own right.

For all these reasons, I follow the rule of thumb: Test closest to the responsibility.

One: it's faster. Every layer of unnecessary wotsisname the tests have to go through to get an answer adds execution time and other overheads.

Two: it's easier to debug. Searching for lost car keys gets mighty complicated when your car is parked three blocks away. If it's right outside the front door, and you keep the keys in a bowl in the hallway, you should find them more easily.

Three: it's better for componentising your software. You may call them "microservices" these days, but the general principle is the same. We build our applications by wiring together discrete components that each have a distinct responsibility. The tests that check if a component fulfils its responsibility need to travel with that component, if at all possible. If only because it can get horrendously difficult to figure out what's being tested where when we scatter rules willy-nilly. The risk calculation test wants to talk to the Risk Calculator component. Don't make it play Chinese Whispers through several layers of enterprise architecture.

Sometimes, when I suggest this, developers will argue that unit tests are not acceptance tests, because unit tests are not written from the user's perspective. I believe - and find from experience - that this is founded on an artificial distinction.

In practice, an automated acceptance test is just another program written by a programmer, just like a unit test. The programmer interprets the user's requirements in both cases. One gives us the illusion of it being the customer's test, if we want it to be. But it's all smoke and mirrors and given-when-then flim-flam in reality.

The pattern, known of old, of sucking test data provided by the users into parameterised automated tests is essentially what our acceptance test automation tools do. Take Fitnesse, for example. The customer enters their Risk Calculation inputs and expected outputs into a table on a Wiki. We write a test fixture that inserts data from the table into program code that we write to test our risk calculation logic.

We could ask the users to jot those numbers down onto a napkin, and hardcode them into our test fixture. Is it still the same test? Is it still an automated acceptance test? I believe it is, to all intents and purposes.

And it's not the job of the user interface or our MVC implementation or our backend database to do the risk calculation. There's a distinct component - maybe even one class - that has that responsibility. The rest of the architecture's job is to get the inputs to that component, and marshal the results back to the user. If the Risk Calculator gets the calculation wrong, the UI will just display the wrong answer. Which is correct behaviour for the UI. It should display whatever output the Risk Calculator gives it, and display it correctly. But whether or not it's the correct output is not the UI's problem.

So I would test the risk calculation where the risk is calculated, and use the customer's data from the acceptance test to do it. And I would test that the UI displays whatever result it's given correctly, as a separate test for the UI. That's what we mean by "separation of concerns"; works for testing, too. And let's not also forget that UI-level tests are not the same thing as system or end-to-end tests. I can quite merrily unit test that a web template is rendered correctly using test data injected into it, or that an HTML button is disabled running inside a fake web browser. UI logic is UI logic.
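Here's a sketch of what that separation can look like in Java. RiskCalculator, RiskView and their methods are invented names, and the example data stands in for whatever the customer actually gave us.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class SeparationOfConcernsTest {

        // The customer's example from the acceptance test, exercised directly
        // against the component that owns the calculation - no browser, no database.
        @Test
        public void fortyYearOldNonSmokerIsLowRisk() {
            assertEquals("LOW", new RiskCalculator().riskBandFor(40, false));
        }

        // A separate test for the UI's own responsibility: display whatever result
        // it is given, correctly. Whether that result is right is not its problem.
        @Test
        public void viewDisplaysWhateverBandItIsGiven() {
            assertEquals("Your risk band: LOW", new RiskView().render("LOW"));
        }
    }

    class RiskCalculator {
        String riskBandFor(int age, boolean smoker) {
            return (smoker || age >= 65) ? "HIGH" : "LOW";
        }
    }

    class RiskView {
        String render(String band) {
            return "Your risk band: " + band;
        }
    }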

And I know some people cry "foul" and say "but that's not acceptance testing", and "automated acceptance tests written at the UI level tend to be nearer to the user and therefore more likely to accurately reflect their requirements."

I say "not so fast".

First of all, you cannot automate user acceptance testing. The clue is in the name. The purpose of user acceptance testing is to give the user confidence that we delivered what they asked for. Since our automated tests are interpretations of those requirements - every bit as much as the implementations they're testing - then, if it were my money, I wouldn't settle for "well, the acceptance tests passed". I'd want to see those tests being executed with my own eyes. Indeed, I'd want to execute them myself, with my own hands.

So we don't automate acceptance tests to get user acceptance. We automate acceptance tests so we can cheaply and effectively re-test the software in case a change we've made has broken something that was previously working. They're automated regression tests.

The worry that the sum total of our unit tests might deviate from what the users really expected is mitigated by having them manually execute the acceptance tests themselves. If the software passes all of their acceptance tests AND passes all of the unit tests, and that's backed up by high unit test assurance - i.e., it is very unlikely that the software could be broken from the user's perspective without any unit tests failing - then I'm okay with that.

So I still have user acceptance test scripts - "executable specifications" - but I rely much more on unit tests for ongoing regression testing, because they're faster, cheaper and more useful in pinpointing failures.

I still happily rely on tools like Fitnesse to capture users' test data and specific examples, but the fixtures I write underneath very rarely operate at a system level.

And I still write end-to-end tests to check that the whole thing is wired together correctly and to flush out configuration and other issues. But they don't check logic. They just check that the engine runs when you turn the key in the ignition.

But typically I end up with a peppering of these heavyweight end-to-end tests, a feathering of tests that are specifically about display and user interaction logic, and the rest of the automated testing iceberg is under the water in the form of fast-running unit tests, many of which use example data and ask questions gleaned from the acceptance tests. Because that is how I do design. I design objects directly to do the work to pass the acceptance tests. It's not by sheer happenstance that they pass.

And if you simply cannot let go of the notion that you must start by writing an automated acceptance test and drive downwards from there, might I suggest that as new objects emerge in your design, you refactor the test assertions downwards also and push them into new tests that sit close to those new objects, so that eventually you end up with tests that only have one reason to fail?

Refactorings are supposed to be behaviour-preserving, so - if you're a disciplined refactorer - you should end up with a cluster of unit tests that are logically directly equivalent to the original high-level acceptance test.
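As an invented sketch of that refactoring: the commented-out assertion below is from the original high-level acceptance test; once a DeliveryCharge object emerged in the design, the assertion about the charge moved down to live next to it, where it has only one reason to fail.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PushAssertionsDownTest {

        // BEFORE: part of a high-level acceptance test asserting through the whole stack.
        // shop.addToBasket("book", 20.00);
        // shop.checkout();
        // assertEquals(23.95, shop.lastInvoiceTotal(), 0.001);

        // AFTER: the assertion about the delivery charge itself now sits right
        // next to the object that emerged to own that responsibility.
        @Test
        public void standardDeliveryCostsThreeNinetyFive() {
            assertEquals(3.95, new DeliveryCharge().standard(), 0.001);
        }
    }

    class DeliveryCharge {
        double standard() {
            return 3.95;
        }
    }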

There. I've said it.






June 8, 2014

Reliability & Sustaining Value Are Entirely Compatible Goals

This is a short blog post about having your cake and eating it.

The Agile Software Development movement has quite rightly shifted the focus in what we do from delivering to meet deadlines to delivering sustainable value.

A key component in sustaining the delivery of value through software is how much it costs to change our code.

The Software Craftsmanship schtick identifies primary factors in the cost of changing software; namely:

1. How easy is it to understand the code?

2. How complicated is the code?

3. How much duplication is there in the code?

4. How interdependent are all the things in the code?

5. How soon can we find out if the change we made broke the code?

By taking more care over these factors, we find that it's possible to write software in a way that not only delivers value today, but doesn't impede us from delivering more value tomorrow. In the learning process that is software development, this can be critical to our success.

And it's a double win. Because, as it turns out, when we take more care over readability, simplicity, removing duplication, managing dependencies and automating tests, we also make our software more reliable in the first instance.

Let us count the ways:

1. Code that's easier to understand is less likely to suffer from bugs caused by misunderstandings.

2. Code that is simpler tends to have fewer ways to go wrong - fewer points of failure - to achieve the same goals.

3. Duplicated code can include duplicated bugs. Anyone who's ever "reused" code from sites like The Code Project will know what I mean.

4. Just as changes can propagate through dependencies, so can failures. If a critical function is wrong, and that function is called in many places and in many scenarios, then we have a potential problem. It's possible for a single bug in a single line of code to bring down the entire system. We call them "show-stoppers". It's for this reason I dreamed up the Dependable Dependencies Principle for software design.

5. High levels of automated test assurance - notice I didn't say "coverage" - tend to catch more programming errors, and sooner. This makes it harder for bugs to slip unnoticed into the software, which can also have economic benefits.


So there's your cake. Now eat it.






May 11, 2014

When Really To Use Mocks? First Ask: "What Are Mocks?"

I should probably stay out of this, but just couldn't resist sharing my own thoughts about Uncle Bob's latest blog post on the subject of when to use mock objects.

He makes some fair points about isolation and using test doubles at architectural boundaries, as well as testing exceptional paths and making "random" things repeatable using the built-in features of many mocking frameworks that allow developers to deliberately throw exceptions and return hardcoded dates and times and all that sort of thing.

But, and we need to be clear about this, this is not the intended purpose of mock objects, as far as I understand it.

Mocks exist to address a specific design need - not a testing need. The inventors of mock objects were answering the question: how do we test-drive the interactions between the key roles in our design?

How can I write a test that fails - a specification in test form - because a collaboration between two objects didn't take place in the way I wanted it to?

Enter stage right mock objects, and their - now legion - associated enabling frameworks, like JMock, Mockito, MockStockAndTwoSmokingBarrels, MockTheWeek, and, of course, Mockney Radio 1 DJ.

The idea's very simple; in thinking about object oriented design, we can organise our thoughts - expressed perhaps using simple modeling tools like index cards or UML diagrams - into three categories:

1. Roles - what roles do objects play in achieving a goal?

2. Responsibilities - what does each role do in achieving the goal?

3. (And here's the crux of good OO design) Collaborations - how do these roles collaborate - by sending messages to each other - to co-ordinate getting the job done?

Now, however you're driving the implementation of your design, in OO paradigms - indeed, in any paradigm where code is organised into modules that do chunks of work and invoke functions on other modules to do the rest - those 3 things form the basis of how code is organised.

But we can come at an implementation from different angles, all of which can overlap. I might choose to test-drive responsibilities using traditional assertion-based tests, and isolate that code from external systems or frameworks using stubs. I might choose to drive out key collaborations in my code using mock objects. I might sketch out an OO design and build the thing bottom-up, or I might start by test-driving the outermost objects and work my way in, using test doubles to defer the implementations further down the call stack. Tomato. Tomato. (That sounds funnier than it reads.)

You might just start by passing the tests in the simplest way possible and refactoring to an OO design, allowing roles, responsibilities and collaborations to emerge completely organically, with design decisions coming down purely to "what's the cleanest code that will pass these tests?"

Or you might plan the whole OO design up-front and just implement it.

In practice, these are two extremes: no up-front planning of our OO design requires very strong refactoring muscles. Not recommended for mere mortals like me and you.

Similarly, trying to think of everything up-front tends to take a very long time, and inevitably we discover as we get into the implementation details that the map is not the terrain. Hence, also not recommended for us mere mortals.

And so, we strike a balance. And that balance differs from person to person and from one team culture to the next.

The same also goes for the balance between "classic TDD", which is mostly feedback-driven when it comes to OO design, but not completely, and the "London School", which relies on more up-front thought and planning about key roles (interfaces) and the collaborations between them than classic TDD.

The lines are necessarily blurred between these different approaches, just as the lines are blurred between mocking and stubbing in many popular mock object frameworks.

At the risk of giving away pithy and meaningless advice, you should use what's appropriate. But it helps to be clear in your mind what question your test is asking. Are you asking whether a calculation was done correctly, perhaps using hardcoded data returned by a stub pretending to be the database, or are you asking if the database request was built and sent correctly, regardless of the response?
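Here's an invented sketch of those two questions side by side, using Mockito. The QuoteService, MarketData and AuditLog names are made up: the first test uses a double as a stub and asserts on the answer that comes back; the second uses one as a mock and fails if the collaboration didn't happen.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.*;

    public class QuoteServiceTest {

        interface MarketData {                 // a role at an architectural boundary
            double latestPrice(String symbol);
        }

        interface AuditLog {                   // a collaborator we care about talking to
            void record(String entry);
        }

        static class QuoteService {
            private final MarketData marketData;
            private final AuditLog auditLog;

            QuoteService(MarketData marketData, AuditLog auditLog) {
                this.marketData = marketData;
                this.auditLog = auditLog;
            }

            double quoteFor(String symbol, int quantity) {
                double total = marketData.latestPrice(symbol) * quantity;
                auditLog.record("quoted " + symbol);
                return total;
            }
        }

        // Question 1: was the calculation done correctly? The test double is a stub
        // returning hardcoded data, and we assert on the state of the answer.
        @Test
        public void quoteIsPriceTimesQuantity() {
            MarketData stubPrices = mock(MarketData.class);
            when(stubPrices.latestPrice("ACME")).thenReturn(10.0);

            QuoteService service = new QuoteService(stubPrices, mock(AuditLog.class));

            assertEquals(50.0, service.quoteFor("ACME", 5), 0.001);
        }

        // Question 2: did the collaboration take place as intended? The test double
        // is a mock, and the test fails if the interaction we specified didn't happen.
        @Test
        public void everyQuoteIsAudited() {
            MarketData stubPrices = mock(MarketData.class);
            when(stubPrices.latestPrice("ACME")).thenReturn(10.0);
            AuditLog auditLog = mock(AuditLog.class);

            new QuoteService(stubPrices, auditLog).quoteFor("ACME", 5);

            verify(auditLog).record("quoted ACME");
        }
    }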

You should also be very careful not to fall back on mock objects as a crutch for making code that's untestable because of dependencies more testable. For that way lies the madness of baking in a bad design. Yes, you can get more tests in there, but those tests will likely as not expose the internal interactions of your legacy code, making them doubly difficult to refactor.

Ultimately, remember this: mock objects - test doubles used to test interactions - are supposed to be an OO design tool that enables us to drive out the collaborations in our architecture by writing failing tests. Their purpose is not specifically about isolating code to make it more testable. Indeed, testability is not the point of mock objects - it's a nice side-effect of doing code-level design with them.

Your own comfort-level - your tolerance for mocks, if you like - will probably be different to mine. Mine is quite low. I like to mostly discover designs, but undoubtedly will test-drive collaborations at system boundaries, as Uncle Bob recommends.

But it's not my way or the highway on this matter. If your OO designs are effectively modular, with cohesive and loosely-coupled components that perform a specific job and have few collaborations, mocks won't hurt you and can be an effective TDD aid. That's their original purpose, after all. TDD-ing code in the Tell, Don't Ask style is undoubtedly easier with mocks.

We have no reason to doubt that well-known exponents of the London School of TDD are successful in their approach. Even though we see many teams fall foul of "mock abuse", relying on mocking frameworks to make bad designs testable.

Likewise, we have no reason to doubt that key exponents of classic TDD are not equally successful in the designs their more feedback-driven approach reaps. Even though we see many teams fall foul of both too little up-front planning, and too little after-the-fact refactoring to keep the code clean.

So, when to use mocks? When it makes sense.





April 11, 2014

Reliable Everyday Software London, May 15th

Announcing the first of what I hope will be a regular meet-up in London for folk interested in bringing more reliable software to the mainstream.

Reliable Everyday Software London (#RESL) aims to bring together communities of practitioners, researchers, teachers and other interested folk to think about, talk about, and maybe even do something about making some of the techniques and tools we typically associate with more critical software into everyday software development.

Many of us share a belief that some of these techniques have been unwisely overlooked by the vast majority of teams, and know from experience that there are times on every project, and parts of every code base, that would benefit from a more rigorous eye.

We also know that, on a daily basis, we suffer at the hands of software that, while you might not think of it as "critical", can cause a constant low level of pain and inconvenience which every now and again explodes into something serious. Whether it's a utility bill payment that somehow got "lost in the post", or losing that vital piece of data, or a sensitive piece of information accidentally exposed, or getting stuck in an infinite logical loop that means we can't see our email because we forgot our password (and they'll only send a password reminder to that email address we can't see - grrr!), software defects have the potential to ruin our lives. At best, they are a constant low-level annoyance that eats up time and raises blood pressures (thus shortening our lives.)

It doesn't have to be this way. If the communities of practitioners and researchers can get past the "them & us" mentality that currently pervades, with both sides believing the other side are "doing it wrong", we may well see that we have much to learn from each other. The potential benefits of injecting a healthy dose of greater rigour into everyday software development, and a healthy dose of everyday realism into research, cannot be overstated.

So, to foster a spirit of enquiry and co-operation, we'll be meeting at Unruly Media at 6:30-8pm on Thursday May 15th to kick things off. Hope you can join us. But if you can't, please sign up for our meetup.com group anyway and join in the discussion.






March 13, 2014

Waterfall, Reality Avoidance & People Who Say "No"

I have this theory.

One of the problems some managers have with iterative software development is that, when it's done well - seeking early and frequent feedback and acting on it, as opposed to just incrementally executing a waterfall plan - it reduces the scope for avoiding reality.

On a waterfall project, reality can be avoided for months or even years. The illusion of progress can be maintained, through form filling and the generation of reams of reports that nobody ever reads, right up until the point that software needs to be seen to be working.

If it were my money, this would scare the shit out of me - not knowing what my money's been spent on until the last moment.

But I can see the attraction for managers. It's not their money. And typically they get rewarded for this illusion of progress, which can go as far as pretending the software is ready the night before it's deployed into a live environment.

One of my early experiences as a freelancer in London was leading a team on the development of a web site that the company had been kicking around as an idea for 2-3 years. Naturally, after 2 years of business analysis, and 6 months of database design - based on what use cases, one can only imagine - the team were given six whole weeks to implement the site.

Our project manager was pretty canny, and understood how much of a squeeze this was going to be. So, back in 1999, I first tried my hand at a new thing called "Extreme Programming", because we felt what we were doing was extremely ambitious, and the right thing to do with such short timescales was to iterate extremely.

But the customer wouldn't play ball. We wanted to show him working software, but he literally refused to come downstairs and take a look at what the company's money was being spent on. He insisted, instead, that the project manager wrote reports. Lots of reports. Daily reports. Detailed reports.

And when the reports said "we are not as far as the plan says we should be", the reports were rejected. And new reports had to be written, saying that, actually, we were on plan.

For doing this, the project manager got promoted to Chief Technology Officer. The developers, who unanimously refused to play along and kicked up quite a fuss, got let go almost immediately afterwards.

A new project manager was appointed, who was more than happy to live with the illusion. I recall distinctly listening to a conversation with the business in which he told them that a piece of work that could take months and had not even been started was actually done, tested and ready to deploy. There's delusion, and then there's DELUSION.

Of course, it didn't deploy. It couldn't. It didn't exist. But, again, months of him telling the business what they wanted to hear, regardless of the reality that unfolded, got him promoted to senior programme manager. I was long gone by this point, thankfully. (On to the next delusional dotcom boom shitstorm.)

This was an important lesson for me - most failure is. I learned that, in many organisations, people aren't rewarded for what they achieve. They're rewarded for what they're prepared to claim they've achieved. And even when it turns out to be an out-and-out lie, and the business is left with egg all over their faces, they may still get rewarded.

To their own detriment, too many businesses reward people for saying "Yes", even when the real answer - the answer they need to hear - is "No".

Software developers - people who actually make software happen - do not have that luxury, though many try. Software either is, or isn't. It either does, or it doesn't. When the chips are down, we can't fake it. We may say "it's ready", but when it ships reality will pull its pants down for all the world to see.

Among other reasons, I believe this one of the key reasons why managers don't like us. They need us to make things happen, but we have this annoying habit of saying "No" and telling them things they didn't want to hear.

Iterative development throws light into dark corners they'd rather we didn't look in, and that's why I believe waterfall is a reality avoidance mechanism for many managers. Specifically, they want to avoid having to go back to their bosses or their customers and say "No", because in the game they're playing - which can be a distinctly different game with distinctly different aims to the game developers play - they lose points for doing that.

The only meaningful way to bring reality back into play is to realign the goals of managers and developers and get everyone playing the Let's Actually Deliver Something Of Value game. Managers need to be rewarded for testable achievements, and steered away from peddling illusions. The reason this doesn't happen more often, I suspect, is because the value of illusion increases the further up the ranks you go. If a PM gets a pat on the back for saying "we're on track", the CTO gets a trip to Disneyland, and the CEO gets a new Mercedes. Hence, the delusion gets stronger as we go higher. People running governments tend to be the most delusional of all, such is their power and influence. This effect is what produces the sometimes gargantuan IT failures only governments seem capable of creating.

The Emperor has no clothes. Bah humbug.






February 18, 2014

TDD, Architecture & Non-Functional Goals - All Of These Things Belong Together

One of the most enduring myths about Test-driven Development is that it is antithetical to good software or system architecture. Teams who do TDD, they say, don't plan design up-front. Nor do they look at the bigger picture, or consider key non-functional design quality attributes enough.

Like many of these truisms, TDD gets this reputation from teams doing it poorly. And, yes, I know "you must have not been doing it right" is the go-to excuse for many a snakeoil salesman, so let me qualify my hifalutin claims.

First of all, there's this perception that when we do TDD, we pick up a user story, agree a few acceptance tests and then dive straight into the code without thinking about, visualising or planning design at a higher level.

This is not true.

The first thing I recommend teams do when they pick up user stories - especially in the earlier stages of development on a new product or system - is to go to the whiteboards. Collaborative design is the key to software development. It's all about communicating - this is one of my core principles of writing software.

Scott Ambler's book, Agile Modeling (now roughly 300 years old, or something like that), painted a very clear picture of where collaborative analysis and design could fit into, say, Extreme Programming.

The fact is, TDD is a design discipline that can be effectively applied at every level of abstraction.

There are teams using test cases to plan their high-level architecture - which I highly recommend, as designs are best when they answer real needs, which can be clearly expressed using examples/test cases. Just because they're not writing code yet doesn't mean they're not being test-driven.

And, as most of us will probably have seen by now, test-driving the implementation of our high-level designs is a great way to drive out the details and "build it right".

At the other end of the detail spectrum, there are teams out there test-driving the designs of FPGAs, ASICs and other systems on silicon.

If it can be tested, it can be test-driven. And that applies to a heck of a lot of things, including package architectures and systems of systems.

As to the non-functional considerations of design and architecture, I've learned from 15 years of TDD-ing a wide variety of software that these are best expressed as tests.

I'm no great believer in the Dark Arts of software architecture. Like a sort of object oriented Gandalf, I've worn that cloak and carried that magical staff more times than I care to remember, and I know from acres of experience that the best design authorities are empiricists (i.e., testers).

How many times have you heard an architect say "this design is wrong" when what they really mean is "that's not how I would have designed it"? I try to steer clear of that kind of highly subjective voodoo.

Better, I find, to express design goals using unambiguous tests that the software needs to pass. It could be a very simple design rule that says "methods shouldn't make more than one decision", or something more demanding like "the software should occupy a memory footprint of no more than 100KB per user session".
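For instance, here's a rough, hand-rolled sketch of that second rule as a JUnit test. UserSession is hypothetical, and heap measurement like this is only approximate - a real harness would need to be steadier - but it makes the goal unambiguous and frequently checked.

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class MemoryFootprintTest {

        private static final int SESSIONS = 1000;
        private static final long BUDGET_PER_SESSION = 100 * 1024; // 100KB

        @Test
        public void aUserSessionStaysWithinItsMemoryBudget() {
            Runtime runtime = Runtime.getRuntime();
            System.gc();
            long before = runtime.totalMemory() - runtime.freeMemory();

            UserSession[] sessions = new UserSession[SESSIONS];
            for (int i = 0; i < SESSIONS; i++) {
                sessions[i] = new UserSession("user-" + i);
            }

            System.gc();
            long after = runtime.totalMemory() - runtime.freeMemory();
            long perSession = (after - before) / sessions.length;

            assertTrue("Average session footprint was " + perSession + " bytes",
                    perSession <= BUDGET_PER_SESSION);
        }
    }

    // Hypothetical stand-in for whatever per-user state the real system holds.
    class UserSession {
        private final String userId;

        UserSession(String userId) {
            this.userId = userId;
        }
    }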

Be it about complexity, dependencies, performance, scalability, or even how much entropy an algorithm generates, we can be more scientific about these things if we're of a mind to.

My own experiences have taught me that, when design goals are ambiguous and/or not tested frequently, the software doesn't meet those goals. Because you get what you measure.

Both of these things are not just do-able in a test-driven approach - I'd highly recommend teams do them. Not only do they make TDD more effective when considering the bigger picture, but they also benefit from being made more effective by being test-driven. That's a win-win in my book.







February 4, 2014

Five Tips For Software Customers

Congratulations! You are now the owner of a software development project.

Bespoke software, tailored to your requirements, can bring many benefits to you and your business. But before you dive in, here are 5 tips for getting the most out of your software development team.

1. Set Clear Goals

Many customers who have reported faults with their software development team can trace the problem back to one common factor: the developers didn't have a clear idea of what it was you were hoping to achieve with the software. Work closely with them to build a shared, testable understanding of your business goals so they know what to aim for and can more objectively measure their progress towards achieving those goals.

2. Be Available

Another very common fault reported with software development teams can be traced back to the fact that the customer - that's you - wasn't there when they needed input. This can lead to delays while teams wait for feedback, or to costly misunderstandings if the team starts to fill in the blanks for themselves. The easier it is to get to speak to you, the sooner things can move along.

3. Make Small Bets

Your software development team costs money to run. A day spent working on a feature you asked for can represent an investment of thousands of pounds. Software is expensive to write. And there are no guarantees in software development. Even if the developers deliver exactly what you ask for, there's a good chance that what you wanted might not be what you really needed, and the only way to find that out is to "suck it and see". In that sense, everything they create for you is an experiment, and experiments are risky. Sometimes the experiments will work, sometimes they won't. The key to succeeding with software development is to invest wisely in those experiments. Rather than taking the entire budget you have available and betting it all on one giant experiment to solve all the problems, consider breaking it up into lots of smaller experiments that can be completed faster. If your total budget is £1,000,000, see what the team can achieve with £20,000. The more throws of the dice you can give yourself, the more likely you are to come out a winner.

4. Don't Ask, See

You may have read horror stories in the news about £multi-million (or even £multi-billion) software project failures. A typical feature in these stories is how the executive management (i.e., the customer) didn't know that the project or programme wasn't on track. That is because they made the most basic mistake any customer on a software project of any size can make - they relied on hearsay to measure progress. Large IT projects often have complex reporting systems, where chains of progress reports are filtered upwards through layers of management. They're relying on middle managers to report honestly and accurately, but on these large projects, when things start to deviate from the plan, managers can face severe consequences for being the bearers of bad news. So they lie. Small wonder, then, that when the truth finally emerges (usually on the day the software was supposed to go live) executive management are caught completely unawares.

By far and away the best mechanism for gauging progress in software development is to see the software working. See it early, see it often. If you've spent 10% of your budget, ask to see 10% of the software working. If you've spent half your budget and the team can't show you anything, then it's time to call in the mechanic, because your development team is broken. Good development teams will deliver working software iteratively and incrementally, and will ensure that at every stage the software is working and fit to be deployed, even if it doesn't do enough to be useful yet.

5. Let The Programmers Program

You know that guy who designs his own house and tells the builders "this is what I want", and the builders say "it won't work", but the guy won't budge and insists that they build it anyway, and, inevitably, the builders were right and the design didn't work?

Don't be that guy.

Let the technicians make the technical decisions, and expect them to leave the business decisions to you. Each to their own.

And if you don't trust the developers to make good technical decisions, then why did you hire them?

6. Things Change

It's worth restating: everything in software development is an experiment. Nobody gets it right first time. At the beginning, we understand surprisingly little about the problems we're trying to solve. At its heart, software development is the process of learning what software is needed to solve those problems. If we cling doggedly to the original requirements and the original plan, we cannot apply what we learn. And that leads to software that is not as useful. (Indeed, often useless.) Your goal is to solve the problem, not to stick to a plan that was conceived when we knew the least.

The plan will change, and that's a good thing. Get used to it. Embrace change.






January 8, 2014

Why Government Should Treat Software Development As A Planning Activity

Just a quick thought as more wailing and gnashing of teeth emanates from the UK government over Yet Another IT Fiasco (TM).

Alas for the software development community, the A-word ("Agile") has been dragged through the mud in this latest debacle. More fools the UK Agile community for getting involved, no doubt. But it's unfair, of course. Nobody in their right mind sets out to create a £2.4 billion IT system following any kind of approach, let alone one claiming to be "Agile".

If the government's team of "cutting-edge digital experts" (awww, bless!) really had applied Agile values to the Universal Credit programme, then the DWP wouldn't be facing this mess now.

Agile teaches us to fail early and fail cheaply. You don't go committing hundreds of millions of pounds to an idea that might not work. Nor do you wait years to discover that it doesn't work.

This is particularly pertinent to the formulation of government policy. Ministers and their departments devote years to developing bills that, if they get that far, are voted into law by parliament. They do all sorts of studies, hire armies of experts to advise, run public consultations and wotnot to beat the legislation into some kind of shape.

Given how high the stakes are, and how often in the last decade IT failures have severely compromised government policy (costing the taxpayer £billions in the process), surely it would make sense - as part of the process of developing legislation - to at the very least develop a basic working prototype of the software that will enable it?

Along with all the lawyers, economists, scientists and other experts, hire a couple of good software developers, hook them up with some tame users, and - in a low-impact, low-risk environment - get them to write some software that tests some of the key assumptions in your plan. Without those insights, so much of government policy - which increasingly relies on software - is little more than Magic Beans.


There's a heck of a lot - including people's lives - riding on some of these projects, and the time to ask "is the software actually do-able?" is before you commit the nation to The Big Plan.