September 13, 2016

Learn TDD with Codemanship

4 Things You SHOULDN'T Do When The Schedule's Slipping

It takes real nerve to do the right thing when your delivery date's looming and you're behind on your plan.

Here are four things you really should avoid when the schedule's slipping:

1. Hire more developers

It's been over 40 years since the publication of Fred Brooks' 'The Mythical Man-Month'. This means that our industry has known for almost my entire life that adding developers to a late project makes it later.

Not only is this borne out by data on team size vs. productivity, but we also have a pretty good idea what the causal mechanism is.

Like climate change, people who reject this advice should not be called "skeptics" any more. In the face of the overwhelming evidence, they're Small Team Deniers.

Hiring more devs when the schedule's slipping is like prescribing cigarettes, boxed sets and bacon for a patient with high blood pressure.

2. Cut corners

Counterintuitive as it still is to most software managers, the relationship between software quality and the time and cost of delivery is not what most of us assume.

Common sense might lead us to believe that more reliable software takes longer, but the mountain of industry data on this clearly shows the opposite in the vast majority of cases.

To a point - and it's a point 99% of teams are in no danger of crossing - it actually takes less effort to deliver more reliable software.

Again, the causal mechanism for this is well understood. And, again, anyone who rejects the evidence is not a "skeptic"; they're a Defect Prevention Denier.

The way to go faster on 99% of projects is to slow down, and take more care.

3. Work longer hours

Another management myth that's been roundly debunked by the evidence is that, when a software delivery schedule's slipping significantly, teams can get back on track by working longer hours.

The data very clearly shows that - for most kinds of work - longer hours is a false economy. But it's especially true for writing software, which requires a level of concentration and focus that most jobs don't.

Short spurts of extra effort - maybe the odd weekend or late night - can make a small difference in the short term, but day-after-day, week-after-week overtime will burn your developers out faster than you can say "get a life". They'll make stupid, easily avoidable mistakes. And, as we've seen, mistakes cost exponentially more to fix than to avoid. This is why teams who routinely work overtime tend to have lower overall productivity: they're too busy fighting their own self-inflicted fires.

You can't "cram" software development. Like your physics final exams, if you're nowhere near ready a week before, then you're not going to be ready, and no amount of midnight oil and caffeine is going to fix that.

You'll get more done with teams who are rested, energised, feeling positive, and focused.

4. Bribe the team to hit the deadline

Given the first three points we've covered here, promising to shower the team with money and other rewards to hit a deadline is just going to encourage them to make those mistakes for you.

Rewarding teams for hitting deadlines fosters a very 1-dimensional view of software development success. It places extra pressure on developers to do the wrong things: to grow the size of their teams, to cut corners, and to work silly hours. It therefore has a tendency to make things worse.

The standard wheeze, of course, is for teams to pretend that they hit the deadline by delivering something that looks like finished software. The rot under the bonnet quickly becomes apparent when the business then expects a second release. Now the team are bogged down in all the technical debt they took on for the first release, often to the extent that new features and change requests become out of the question.

Yes, we hit the deadline. No, we can't make it any better. You want changes? Then you'll have to pay us to do it all over again.

Granted, it takes real nerve, when the schedule's slipping and the customer is baying for blood, to keep the team small, to slow down and take more care, and to leave the office at 5pm.

Ultimately, the fate of teams rests with the company cultures that encourage and reward doing the wrong thing. Managers get rewarded for managing bigger teams. Developers get rewarded for being at their desk after everyone else has gone home, and appearing to hit deadlines. Perversely, as an industry, it's easier to rise to the top by doing the wrong thing in these situations. Until we stop rewarding that behaviour, little will change.

April 20, 2016

Learn TDD with Codemanship

A* - A Truly Iterative Development Process

Much to my chagrin, having promoted the idea for so many years, software development still hasn't caught on to the idea that what we ought to be doing is iterating towards goals.

NOT working through a queue of tasks. NOT working through a queue of features.

Working towards a goal. A testable goal.

We, as an industry, have many names for working through queues: Agile, Scrum, Kanban, Feature-driven Development, the Unified Process, DSDM... All names for "working through a prioritised list of stuff that needs to be done or delivered". Of course, the list is allowed to change depending on feedback. But the goal is usually missing. Without the goal, what are we iterating towards?

Ironically, working through a queue of items to be delivered isn't iterating - something I always understood to be the whole point of Agile. But, really, iterating means repeating a process, feeding back the results of each cycle, until we reach some goal. Reaching the goal is when we're done.

What name do we give to "iterating towards a testable goal"? So far, we have none. Buzzword Bingo hasn't graced the door of true iterative development yet.

Uncatchy names like goal-driven development and competitive engineering do exist, but haven't caught on. Most teams still don't have even a vague idea of the goals of their project or product. They're just working through a list that somebody - a customer, a product owner, a business analyst - dreamed up. Everyone's assuming that somebody else knows what the goal is. NEWSFLASH: They don't.

The Codemanship way compels us to ditch the list. There is no release plan. Only business/user goals and progress. Features and change requests only come into focus for the very near future. The question that starts every rapid iteration is "where are we today, and what's the least we could do today to get closer to where we need to be?" Think of development as a graph algorithm: we're looking for the shortest path from where we are to some destination. There are many roads we could go down, but we're particularly interested in exploring those that bring us closer to our destination.

Now imagine a shortest-path algorithm that has no concept of destination. It's just a route map, a plan - an arbitrary sequence of directions that some product owner came up with that we hope will take us somewhere good, wherever that might be. Yup. It just wouldn't work, would it? We'd have to be incredibly lucky to end up somewhere good - somewhere of value.
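The analogy holds up in code. Here's a minimal sketch of the kind of shortest-path search the name alludes to - the graph, costs and zero heuristic are all invented for illustration, and this is the textbook algorithm, not a prescription for running projects:

```python
import heapq

def a_star(start, goal, neighbours, heuristic):
    """Find the cheapest path from start to goal.

    neighbours(n) yields (next_node, step_cost) pairs;
    heuristic(n) estimates the cost remaining to the goal,
    steering the search down the most promising roads first."""
    frontier = [(heuristic(start), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost          # destination reached: we're done
        if node in visited:
            continue
        visited.add(node)
        for nxt, step in neighbours(node):
            if nxt not in visited:
                heapq.heappush(frontier, (cost + step + heuristic(nxt),
                                          cost + step, nxt, path + [nxt]))
    return None, float("inf")          # no route to the destination

# A toy route map: 'A' is where we are today, 'D' is the goal.
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 1)], 'D': []}
path, cost = a_star('A', 'D', lambda n: graph[n], lambda n: 0)
```

Take the `goal` parameter away and the algorithm degenerates into exactly the aimless walk described above: it can only follow whatever sequence of edges it happens to be given.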

And so it is - in my quest for a one-word name to describe "iteratively seeking the shortest (cheapest) path to a testable goal", I propose simply A*.

As in:

"What method are we following on this project?"


Of course, there are prioritised lists in my A* method: but they are short and only concern themselves with what we're doing next to TRY to bring us closer to our goal. Teams meet every few days (or every day, if you're really keen), assess progress made since last meeting, and come up with a very short plan, the results of which will be assessed at the next meeting. And rinse and repeat.

In A*, the product owner has no vision of the solution, only a vision of the problem, and a clear idea of how we'll know when that problem's been solved. Their primary role is to tell us if we're getting warmer or colder with each short cycle, and to help us identify where to aim next.

They don't describe a software product, they describe the world around that product, and how it will be changed by what we deliver. We ain't done until we see that change.

This puts a whole different spin on software development. We don't set out with a product vision and work our way through a list of features, even if that list is allowed to change. We work towards a destination - accepting that some avenues will turn out to be dead-ends - and all our focus is on finding the cheapest way to get there.

And, on top of all that, we embrace the notion that the destination itself may be a moving target. And that's why we don't waste time and effort mapping out the whole route beyond the near future. Any plan that tries to look beyond a few days ends up being an expensive fiction that we become all too easily wedded to.

January 21, 2015

Learn TDD with Codemanship

My Solution To The Dev Skills Crisis: Much Smaller Teams

Putting my Iconoclast hat on temporarily, I just wanted to share a thought that I've harboured almost my entire career: why aren't very small teams (1-2 developers) the default model in our industry?

I think back to products I've used that were written and maintained by a single person, like the guy who writes the guitar amp and cabinet simulator Recabinet, or my brother, who wrote a 100,000 line XBox game by himself in a year, as well as doing all the sound, music and graphic design for it.

I've seen teams of 4-6 developers achieve less with more time, and teams of 10-20 and more achieve a lot less in the same timeframe.

We can even measure it somewhat objectively: my Team Dojo, for example, when run as a one-day exercise, seems to be doable for an individual but almost impossible for a team. I can do it in about 4 hours alone, but I've watched teams of very technically strong developers fail to get even half-way in 6 hours.

People may well counter: "Ah, but what about very large software products, with millions of lines of code?" But when we look closer, large software products tend to be interconnected networks of smaller software products presenting a unified user interface.

The trick to a team completing the Team Dojo, for example, is to break the problem down at the start and do a high-level design where interfaces and contracts between key functional components are agreed and then people go off and get their bit to fulfil its contracts.

Hence, we don't need to know how the spellcheck in our word processor works; we just need to know what the inputs and expected outputs will be. We could sketch it out on paper (e.g., with CRC cards), or we could sketch it out in code with high-level interfaces, using mock objects to defer the implementation design.
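To make that concrete, here's a hypothetical sketch in Python (the class and method names are invented for the example) of agreeing a spellchecker's contract up front and using a mock object to defer its implementation:

```python
from unittest.mock import Mock

class SpellChecker:
    """The agreed contract: callers know the inputs and expected
    outputs, and nothing about how checking is implemented."""
    def misspelled_words(self, text: str) -> list:
        raise NotImplementedError

def proofread(document: str, checker: SpellChecker) -> list:
    # The word processor treats the checker as a black box.
    return checker.misspelled_words(document)

# Until the real spellchecker exists, a mock stands in for it,
# so work on either side of the interface can proceed in parallel.
checker = Mock(spec=SpellChecker)
checker.misspelled_words.return_value = ["teh"]

found = proofread("teh cat sat on the mat", checker)
```

The team writing the real spellchecker and the team writing the word processor can now get on with their bits independently, each held to the same contract.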

There'll still be much need for collaboration, though. It's especially important to integrate your code frequently in these situations, because there's many a slip 'twixt cup and microservice.

As with multithreading (see previous blog post), we can aim to limit the "touch points" in component-based/service-oriented/microservice architectures so that - as much as possible - each component is self-contained, presents a simple interface and can be treated as a black box by everyone who isn't working on its implementation.

Here's the thing, though: what we tend to find with teams who are trying to be all hifalutin and service-oriented and enterprisey-wisey is that, in reality, what they're working on is a small application that would probably be finished quicker and better by 1-2 developers (1 on her own, or 2 pair programming).

You only get an economy of scale with hiding details behind clean interfaces when the detail is sufficiently complex that it makes sense to have people working on it in parallel.

Do you remember from school biology class (or physics, if you covered this under thermodynamics) the lesson about why small mammals lose heat faster than large mammals?

It's all about the surface area-to-volume ratio: a teeny tiny mouse presents a large surface area proportional to the volume of its little body, so more of its insides are close to the surface, and it therefore loses heat through its skin faster than, say, an elephant, which has a massive internal volume proportional to its surface area, so most of its insides are away from the surface.
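The arithmetic behind the biology is easy to verify. Treating the animals as spheres (a gross simplification, and the radii here are plucked from thin air), surface area grows with the square of the radius while volume grows with the cube, so the ratio falls as things get bigger:

```python
import math

def surface_to_volume_ratio(radius):
    surface = 4 * math.pi * radius ** 2        # sphere surface area
    volume = (4 / 3) * math.pi * radius ** 3   # sphere volume
    return surface / volume                    # simplifies to 3 / radius

mouse = surface_to_volume_ratio(2)       # a "mouse" of radius 2cm
elephant = surface_to_volume_ratio(150)  # an "elephant" of radius 150cm
# The mouse's ratio (1.5) is 75 times the elephant's (0.02).
```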

It may be stretching the metaphor to breaking point, but think of interfaces as the surface of a component, and the code behind the interfaces as the internal volume. When a component is teeny-tiny, like a wee mouse, the overhead in management, communication, testing and all that jazz in splitting off developers to try to work on it in parallel makes it counterproductive to do that. Not enough of the internals are hidden to justify it. And so much development effort is lost through that interface as "heat" (wasted energy).

Conversely, if designed right, a much larger component can still hide all the detail behind relatively simple interfaces. The "black box-iness" of such components is much higher, in so much as the overhead for the team in terms of communication and management isn't much larger than for the teeny-tiny component, but you get a lot more bang for your buck hidden behind the interfaces (e.g., a clever spelling and grammar checker vs. a component that formats dates).

And this, I think, is why trying to parallelise development on the majority of projects (average size of business code base is ~100,000 lines of code) is on a hiding to nowhere. Sure, if you're creating an OS, with a kernel, and a graphics subsystem, and a networking subsystem, etc., it makes sense to a point. But when we look at OS architectures, like Linux for example, we see networks of "black-boxy", weakly-interacting components hidden behind simple interfaces, each of which does rather a lot.

For probably 9 out of 10 projects I've come into contact with, it would in practice have been quicker and cheaper to put 1 or 2 strong developers on it.

And this is my solution to the software development skills crisis.

September 8, 2014

Learn TDD with Codemanship

Iterating Is Fundamental

Just like it boggles my mind that, in this day and age of electric telephones and Teh Internets, we still debate whether an invisible man in the sky created the entire universe in 6 days, so too is my mind boggled that - in 2014 - we still seem to be having this debate about whether or not we should iterate our software designs.

To me, it seems pretty fundamental. I struggle to recall a piece of software I've worked on - of any appreciable complexity or sophistication - where getting it right first time was realistic. On my training courses, I see the need to take multiple passes on "trivial" problems that take maybe an hour to solve. Usually this is because, while the design of a solution may be a no-brainer, it's often the case that the first solution solves the wrong problem.

Try as I might to spell out the requirements for a problem in clear, plain English, there's still a need for me to hover over developers' shoulders and occasionally prod them to let them know that was not what I meant.

That's an example of early feedback. I would estimate that at least half the pairs in the average course would fail to solve the problem if I didn't clear up these little misunderstandings.

It's in no way an indictment of those developers. Put me in the exact same situation, and I'm just as likely to get it wrong. It's just the lossy, buggy nature of human communication.

That's why we agree tests: to narrow down interpretations until there's no room for misunderstandings.

In a true "waterfall" development process - bearing in mind that, as I've said many times, in reality there's no such thing - all that narrowing down would happen at the start, for the entire release. This is a lot of work, and requires formalisms and rigour that most teams are unfamiliar with and unwilling to attempt.

Part of the issue is that, when we bite off the whole thing, it becomes much harder to chew and much harder to digest. Small, frequent releases allow us to focus on manageable, bite-sized chunks.

But the main issue with Big Design Up-Front is that, even if we pin down the requirements precisely and deliver a bug-free implementation of exactly what was required, those requirements themselves are open to question. Is that what the customer really needs? Does it, in reality, solve their problem?

With the best will in the world, validating a system's requirements to remove all doubt about whether or not it will work in the real world, when the system is still on the drawing board, is extremely difficult. At some point, users need something that's at the very least a realistic approximation of the real system to try out in what is, at the very least, a realistic approximation of the real world.

And here's the thing: it's in the nature of software that a realistic approximation of a program is, in effect, the program. Software's all virtual, all simulation. The code is the blueprint.

So, in practice, what this means is that we must eventually validate our software's design - which is the software itself - by trying out a working version in the kinds of environments it's intended to be used in to try and solve the kinds of problems the software's designed to solve.

And the sooner we do that, the sooner we learn what needs to be changed to make the software more fit for purpose.

Put "agility" and "business change" to the back of your mind. Even if the underlying problem we want to solve stays completely static throughout, our understanding of it will not.

I've seen it time and again; teams agonise over features and whether or not that's what the customer really needs, and then the software's released and all that debate becomes academic, as we bump heads with the reality of what actually works in the real world and what they actually really need.

Much - maybe most - of the value in a software product comes as a result of user feedback. Twitter is a classic example. Look how many features were actually invented by the users themselves. We invented the Retweet (RT). We invented addressing tweets to users (using @). We invented hashtags (#) to follow conversations and topics. All of the things that make tweets go viral, we invented. Remember that the founders of Twitter envisioned a micro-blogging service in the beginning. Not a global, open messaging service.

Twitter saw what users were doing with their 140 characters, and assimilated it into the design, making it part of the software.

How much up-front design do you think it would have taken them to get it right in the first release? Was there any way of knowing what users would do with their software without giving them a working version and watching what they actually did? I suspect not.

That's why I believe iterating is fundamental to good software design, even for what many of us might consider trivial problems like posting 140-character updates on a website.

There are, of course, degrees of iterativeness (if that's a word). At one extreme, we might plan to do only one release, to get all the feedback once we think the software is "done". But, of course, it's never done. Which is why I say that "waterfall" is a myth. What typically happens is that teams do one very looooong iteration, which they might genuinely believe is the only pass they're going to take at solving the problem, but inevitably, when the rubber meets the road and working software is put in front of end users, changes become necessary. LOTS OF CHANGES.

Many teams disguise these changes by re-classifying them as bugs. Antony Marcano has written about the secret backlogs lurking in many a bug tracking system.

Ambiguity in the original spec helps with this disguise: is it what we asked for? Who can tell?

Test-driven design processes re-focus testers on figuring out the requirements. So too does the secret backlog, turning testers into requirements analysts in all but name, who devote much of their time to figuring out in what ways the design needs to change to make it more useful.

But the fact remains that producing useful working software requires us to iterate, even if we save those iterations for last.

It's for these reasons that, regardless of the nature of the problem, I include iterating as one of my basics of software development. People may accuse me of being dogmatic in always recommending that teams iterate their designs, but I really do struggle to think of a single instance in my 30+ years of programming when that wouldn't have been a better idea than trying to get it absolutely right in one pass. And, since we always end up iterating anyway, we might as well start as we will inevitably go on, and get some of that feedback sooner.

There may be those in the Formal Methods community, or working on safety-critical systems, who argue that - perhaps for compliance purposes - they are required to follow a waterfall process. But I've worked on projects using Formal Methods, and consulted with teams doing safety-critical systems development, and what I see the good ones doing is faking it to tick all the right boxes. The chassis may look like a waterfall, but under the hood, it's highly iterative, with small internal releases and frequent testing of all kinds. Because that's how we deliver valuable working software.

March 21, 2014

Learn TDD with Codemanship

Software Correctness - How For & Why They?

In recent weeks, this has been coming up regularly. So I think it's probably time we had a little chat about software correctness.

I get a sense that a lot of younger developers have skipped the theory on this, and I felt it would be good to cover that here so I can at least point people at the blog. It's sort of my FAQ.

First of all, what do we mean by "software correctness" (or "program correctness", as us old timers might call it)?

To risk being glib, a program - any executable body of code - is only correct if it does what we expect it should, and only when we expect that it should.

So, a program to calculate the square root of a number is only correct if the result multiplied by itself is equal to the input. And we might expect that such a program will only work correctly if the input isn't zero or less.

Tony Hoare's major contribution to the field of software engineering is a precise definition of program correctness:

{P} C {Q}

C is the program. Q is what must be true after C has successfully executed. (The outcome.) And P is precisely when we can expect C to achieve Q. In other words, P describes when it is valid to invoke the program C.

e.g., {input > 0} Square Root {result * result = input }

These three elements - pre-condition, program/function and post-condition - taken together are called a Hoare Triple. As strikingly simple as this definition of program correctness is, it's turned out to be a very powerful logic, forming the basis of a great deal of what we think of as "software engineering".

To see how this translates into something practical, let's take a look at an example:

This is a simple algorithm for calculating the average price of a collection of houses. What might the Hoare Triple look like for this simple program (using pseudo-code)?
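The original code sample isn't reproduced here, but a minimal Python sketch of such an average() function (the prices used later are invented) might look like this:

```python
def average(houses):
    """Calculate the average price of a collection of house prices."""
    total = 0
    count = 0
    for price in houses:
        total += price
        count += 1
    return total / count   # fails with a division by zero if houses is empty

avg = average([180000, 220000])   # 200000.0
```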

{ count of houses > 0 } average() { result = sum of house prices / count of houses }

You'll notice probably straight away that the outcome could form the basis of unit test assertions. Many of us are already testing post-conditions in this way. So, hooray for our side.

But what about the pre-condition? What does it mean when we say that the house array must not be empty for the program to work?

We have three possible choices:

1. Change the post-condition to handle this scenario.

In other words, the method average() will always handle any input.

For example, here I've changed the initialised count of houses to 1 if there are zero houses. That way, even if the array is empty, the total will always be divided by at least one. (And the average for an empty array will be zero, which kind of makes sense.)
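The original snippet isn't shown, but the idea might be sketched in Python like this:

```python
def average(houses):
    total = 0
    # If there are no houses, start the count at 1, so the total
    # (zero) is divided by at least one and the result is 0 - the
    # empty case is now part of the program's correct behaviour.
    count = 1 if len(houses) == 0 else 0
    for price in houses:
        total += price
        count += 1
    return total / count
```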

2. Guard the body of average() from inputs that will break it.

This approach is called Defensive Programming.
Bear in mind now that calling code will need to know how to handle this exception meaningfully.
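Again, the original code isn't reproduced, but a hypothetical Python version of the guard might look like this:

```python
def average(houses):
    # Defensive Programming: guard the body against inputs
    # that would break it, and fail fast with a clear error.
    if len(houses) == 0:
        raise ValueError("cannot average an empty collection of houses")
    return sum(houses) / len(houses)
```

Every caller now has to be prepared to catch that error (or knowingly let it propagate) - which is exactly the obligation described above.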

3. Only ever call average() when there's at least one house.

This approach is called Design By Contract.

It puts the responsibility on the calling code to ensure it doesn't break the pre-condition when invoking average().

Typically, developers practising DbC will use assertions embedded in their code, the checking of which can be switched on and off at runtime, so we can have assertion checking during testing and then switch it off once we're happy to release the software. The distinction between failing assertions and having our code throw exceptions when rules are broken is very clear: in Design By Contract, when an assertion fails, it's because our code is wrong!
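In Python, a hypothetical DbC version of average() might express the pre-condition with a bare assert:

```python
def average(houses):
    # Design By Contract: satisfying the pre-condition is the
    # caller's responsibility. If this assertion ever fires, the
    # calling code - not average() - is broken and needs fixing.
    assert len(houses) > 0, "pre-condition: at least one house required"
    # With the pre-condition assumed, the body stays clean and simple.
    return sum(houses) / len(houses)
```

Running the program with `python -O` strips the assertions out, which mirrors switching contract checking on during testing and off for release.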

The advantage of DbC is that it tends to allow us to write cleaner, simpler implementations, since we assume that pre-conditions are satisfied and don't have to write extra code to handle a bunch of extra edge cases.

Remember that in strategies 1. and 2., handling the edge case is part of the program's correct behaviour. In DbC, if that edge case ever comes up, the program is broken and needs to be fixed.

The important thing to remember is that, whether it's handled in the post-condition (e.g., average price of zero houses = 0), whether it's guarded against before the body of the program/function, or whether it's forbidden to invoke methods when pre-conditions are broken, the interaction between client code and supplier code (the caller and the callee) must be correct overall. It has to be handled meaningfully somewhere.

When it comes to assuring ourselves that our code is correct, there's a cornucopia of techniques we can employ, ranging from testing to mathematical proofs of correctness (often by proving it correct for one case, then for all N+1 cases by induction).

But, regardless of the technique, a testable definition of correctness gives us the formal foundation we need to at the very least ask "What do we mean by 'correct'?", and is the basis for almost all of them.

I'll finish off with one more example to illustrate. Let's think about how our definition of program correctness might be exploited in rigorous code inspections.

Revisit my earlier implied definition of "program"; what am I really saying? When it comes to testing and verification - especially inspections - I consider a program to be any chunk of executable code, from a software executable, down to individual functions, and even individual program statements and expressions. To me, these are all "programs" that have rules for their correctness - pre- and post-conditions.

If you're using a modern IDE like Eclipse (okay, maybe not that modern...), your refactoring tools can guide you as to where these units of executable code are. Essentially, if you can extract it into its own method (perhaps with a bit of jiggery pokery turning multiple return values into fields etc), then it has pre- and post-conditions.

Just for illustration, mainly, I've refactored the example program into composed methods, each containing the smallest unit of executable code from which the complete program is composed.

Theoretically, every one of these little methods could be tested individually. I'm not suggesting we should write unit tests for them all, of course. But stop and think about the granularity of your testing practices. In a guided inspection, where we walk through the code, guided by test inputs, we could be asking of all these teeny-tiny blocks of code: "What must this do, and when will it work? When won't it work?" You'd be surprised how easy it is to miss pre-conditions.

So there you have it: the theoretical basis for the lion's share of software testing and verification - software correctness.

February 19, 2014

Learn TDD with Codemanship

Programming Laws & Reality (& Why Most Teams Remain Flat Earthers)

An article on Dr Dobbs by Capers Jones has been doing the rounds on That Twitter, all about whether famous "programming laws" that we hold dear stand up to scrutiny with real-world data.

The answer from Capers is: yes, we do know what we think we know. For the most part, these programming laws are backed up by the available data.

For example, Fred Brooks' law that adding programmers to a late project makes it later is mostly true, for teams above a certain size. Adding one good programmer to a team of two probably won't cause delays. Adding another programmer to a team of 50 probably will. Also, adding an inexperienced or less capable programmer to any team will probably slow that team down.

This should come as no surprise. We've known this for decades. It's Software Development 101.

And yet, when a project is overrunning, what do 99% of managers do? They hire more programmers. Still. To this day.

Not only do they hire more programmers, but many insist on hiring cheap junior programmers, so they can hire more of them. This tends to compound the problem. The schedule slips further, and they prescribe more of the same medicine.

It's a classic management mistake: the perceived solution is actually the cause of the problem, and the worse the problem gets, the more fuel gets thrown on the fire to try and put it out. It creates a vicious feedback loop that can spiral out of control. Hence you will find enormous teams of hundreds of developers barely achieving what a team of four could.

If Brooks' Law is so well known, why do so many managers continue to do the exact opposite?

Similarly with Jones' own law about software defect removal, which states that teams who are better at removing defects before testing tend to be more productive than teams who are worse at it; and with Peter Senge's law, which simply states: faster is slower.

The jury's really not out about the relationship between quality and time and cost. As Jones reminds us:

"Empirical data from about 20,000 projects supports this law."

To wit, the way to go faster is to take more care over quality. Teams moan about "not having time" for defect prevention practices like developer testing or inspections, and yet there's a mountain of evidence that suggests that if they did more of these things, they'd actually get done quicker. Again, it's a vicious feedback loop. We don't have time, so we skimp on quality, which creates costly delays downstream, which eat up more of our time. Rinse and repeat.

It's another Software Development 101. Being a software developer and not believing it is like being a doctor who doesn't believe in germs, or an astronomer who believes the Earth is flat.

And, yet again, the vast majority of teams are encouraged to do the exact opposite - sometimes even rewarded for doing it.

The evidence tells us that, when the schedule's slipping and we're up against it, the right thing to do is keep the team small and highly skilled - indeed, maybe even move some developers off the team - and focus more on quality.

That's the tragedy of our industry: higher quality, more reliable software is not just compatible with commercial realities, it can actually improve them.

One can't help but wonder why. What motivates teams and their managers to wilfully pursue what we might call "Flat Earth" strategies - strategies that are known to be likely to fail because the Earth is, in fact, round?

In considering how we might bring techniques for more reliable software into the mainstream, perhaps we need to devote much time to thinking about why the industry has ignored the facts for so long.

February 4, 2014

Learn TDD with Codemanship

Five Tips For Software Customers

Congratulations! You are now the owner of a software development project.

Bespoke software, tailored to your requirements, can bring many benefits to you and your business. But before you dive in, here are 5 tips for getting the most out of your software development team.

1. Set Clear Goals

Many customers who have reported faults with their software development team can trace the problem back to one common factor: the developers didn't have a clear idea of what it was you were hoping to achieve with the software. Work closely with them to build a shared, testable understanding of your business goals so they know what to aim for and can more objectively measure their progress towards achieving those goals.

2. Be Available

Another very common fault reported with software development teams can be traced back to the fact that the customer - that's you - wasn't there when they needed input. This can lead to delays while teams wait for feedback, or to costly misunderstandings if the team starts to fill in the blanks for themselves. The easier it is to get to speak to you, the sooner things can move along.

3. Make Small Bets

Your software development team costs money to run. A day spent working on a feature you asked for can represent an investment of thousands of pounds. Software is expensive to write. And there are no guarantees in software development. Even if the developers deliver exactly what you ask for, there's a good chance that what you wanted might not be what you really needed, and the only way to find that out is to "suck it and see". In that sense, everything they create for you is an experiment, and experiments are risky. Sometimes the experiments will work, sometimes they won't.

The key to succeeding with software development is to invest wisely in those experiments. Rather than take the entire budget you have available and bet it all on one giant experiment to solve all the problems, consider breaking it up into lots of smaller experiments that can be completed faster. If your total budget is £1,000,000, see what the team can achieve with £20,000. The more throws of the dice you can give yourself, the more likely you are to come out a winner.

4. Don't Ask, See

You may have read horror stories in the news about £multi-million (or even £multi-billion) software project failures. A typical feature in these stories is how the executive management (i.e., the customer) didn't know that the project or programme wasn't on track. That's because they made the most basic mistake any customer on a software project of any size can make: they relied on hearsay to measure progress. Large IT projects often have complex reporting systems, where chains of progress reports are filtered upwards through layers of management. They're relying on middle managers to report honestly and accurately, but on these large projects, when things start to deviate from the plan, managers can face severe consequences for being the bearers of bad news. So they lie. Small wonder, then, that when the truth finally emerges (usually on the day the software was supposed to go live), executive management are caught completely unawares.

By far the best mechanism for gauging progress in software development is to see the software working. See it early, see it often. If you've spent 10% of your budget, ask to see 10% of the software working. If you've spent half your budget and the team can't show you anything, then it's time to call in the mechanic, because your development team is broken. Good development teams will deliver working software iteratively and incrementally, and will ensure that at every stage the software is working and fit to be deployed, even if it doesn't do enough to be useful yet.

5. Let The Programmers Program

You know that guy who designs his own house and tells the builders "this is what I want" and the builders say "it won't work" but the guy won't budge and insists that they build it anyway, and, inevitably, the builder was right and the design didn't work?

Don't be that guy.

Let the technicians make the technical decisions, and expect them to leave the business decisions to you. Each to their own.

And if you don't trust the developers to make good technical decisions, then why did you hire them?

6. Bonus Tip: Things Change

It's worth restating: everything in software development is an experiment. Nobody gets it right first time. At the beginning, we understand surprisingly little about the problems we're trying to solve. At its heart, software development is the process of learning what software is needed to solve those problems. If we cling doggedly to the original requirements and the original plan, we cannot apply what we learn. And that leads to software that is not as useful. (Indeed, often useless.) Your goal is to solve the problem, not to stick to a plan that was conceived when we knew the least.

The plan will change, and that's a good thing. Get used to it. Embrace change.

November 20, 2013

Learn TDD with Codemanship

Retro Programming

After being reminded of some key information sources that pre-date the year I was born (1971, if you please, and not 10,000 BC as some have suggested), I've set myself a little challenge. Well, it's good to have a hobby, right?

I've long known that many of the "new" practices I use as a software developer are actually not as new as people think. In fact, some date back - in one form or another - to the 1950's.

Take Test-driven Development, for example. Kent Beck spoke of how he "rediscovered" TDD, referencing a book - whose name he appears to have forgotten - that spelled out how programmers could define a program by examples of inputs and outputs (in this case recorded on magnetic tape) and then write code to map the input onto the output. That's TDD.

Other TDD-like references date as far back as 1957 (and let's assume that the actual application of those practices goes back even further). In his blog post "You won't believe how old TDD is", Arialdo Martini quotes from the book "Digital Computer Programming" by Daniel D. McCracken:

"The first attack on the checkout problem may be made before coding is begun. In order to fully ascertain the accuracy of the answers, it is necessary to have a hand-calculated check case with which to compare the answers which will later be calculated by the machine. This means that stored program machines are never used for a true one-shot problem. There must always be an element of iteration to make it pay. The hand calculations can be done at any point during programming. Frequently, however, computers are operated by computing experts to prepare the problems as a service for engineers or scientists. In these cases it is highly desirable that the "customer" prepare the check case, largely because logical errors and misunderstandings between the programmer and customer may be pointed out by such procedure. If the customer is to prepare the test solution it is best for him to start well in advance of actual checkout, since for any sizable problem it will take several days or weeks to hand calculate the test."

What jumps out at me from this paragraph is the allusion to an iterative process of programming, driven by tests that are written by the customer (a domain expert).
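McCracken's "hand-calculated check case" maps directly onto a modern unit test: the customer works out the expected answer by hand first, and the machine's output is then compared against it. A minimal sketch in Python, where the function, the physics problem and the values are hypothetical illustrations rather than anything from the 1957 text:

```python
# McCracken's "hand-calculated check case", in modern unit-test form.
# The projectile_range function and its check case are invented for
# illustration; the 1957 text describes the practice, not this problem.

import math

def projectile_range(speed, angle_degrees, g=9.81):
    """Horizontal range of a projectile launched from ground level."""
    angle = math.radians(angle_degrees)
    return (speed ** 2) * math.sin(2 * angle) / g

# The "customer" calculates the check case by hand, before the machine runs:
# at 20 m/s and 45 degrees, range = 20^2 * sin(90 deg) / 9.81 = 400 / 9.81 m.
expected = 400 / 9.81

assert math.isclose(projectile_range(20, 45), expected, rel_tol=1e-9)
print("check case passed")
```

The point is the ordering: the expected answer exists before the program does, exactly as in test-first development.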

Similar references to TDD-like practices can be found in interviews with Jerry Weinberg where he talks about his experiences on NASA's Project Mercury in the early 1960s. It was also stated by Craig Larman that development on Project Mercury was done "top-down" (we now call it "outside-in") using test doubles.

So we can conclude that some form of iterative, test-driven software development was being practiced on real projects before 1971.

Years of my own digging have revealed that many other practices go back to the 1960's and earlier. It's believed, for example, that teams working on IBM's OS/360 in the mid-1960's were doing integration builds.

The report from the 1968 NATO Conference on Software Engineering describes problems faced by programmers then that are strikingly familiar to programmers today, and includes pieces of advice that modern teams would do well to heed.

And then there's the technology itself. Advancements in programming languages had reached a plateau by the time I was born. By 1971 we had progressed from programming in binary with punchcards, via Assembler and 3GLs like FORTRAN, to typing early object oriented programs in languages like Simula and Smalltalk into text editors (maybe even using a Graphical User Interface, if you happened to work at Xerox PARC). LISP was invented in the 1950's, and folk who think functional programming is all trendy and new might like to reflect on the fact that FP was old news by the year of my birth. The programming tools of today, while far more powerful in their scope, are essentially little changed from the programming tools of 1971. Programmers then typed programs into a text editor, and used a compiler to read the source code and generate machine-executable code.

And last night it struck me: would it be possible to synthesize a recognisably "modern" approach to software development using only the principles and practices and types of tools that were available in 1971?

If I shopped around the various texts written before 1971, could I create the requirements, design, programming, testing, configuration management and other disciplines I might need to produce valuable software in 2013?

Would it be simply, as some suggest, that all I would need to do is find out what we were calling it then and map that onto things we do today? Would there be big gaps? Would there be no gaps? Would it cover all 11 essential disciplines in my Back To Basics paper?

Just for jolly, I intend to try and create a methodology synthesized out of these vintage disciplines that might be fit for 21st century purposes.

Wish me well!

July 12, 2013

Learn TDD with Codemanship

PRO TIP: Iterative Design Is A Goal-Seeking Process

One aspect of software development that gets glossed over even by those presumably in the know is this question of how we judge success.

This is kind of important, particularly when we're supposed to be iterating. The vital question ought to be: iterating towards what?

Iterative software development is, in its essence, evolutionary design. Each new version of the software could be "better" or "worse" than the last. If it's better, we move forward with that version, mutating it to the next. If it's worse, we reject it and go back to the drawing board.
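Mechanically, that keep-or-reject loop is a simple hill-climbing search. A toy sketch in Python, assuming we can score each candidate version against the goal (the fitness function here is invented purely for illustration):

```python
import random

def fitness(x):
    # Stand-in for "how well this version solves the user's problem".
    # Purely illustrative: the best possible score is at x = 3.
    return -(x - 3) ** 2

def evolve(start, steps=200, seed=42):
    """Keep a mutation if it scores better; otherwise discard it and retry."""
    rng = random.Random(seed)
    current = start
    for _ in range(steps):
        candidate = current + rng.uniform(-0.5, 0.5)  # mutate the current version
        if fitness(candidate) > fitness(current):     # "better"? move forward with it
            current = candidate                       # ("worse"? back to the drawing board)
    return current

print(round(evolve(0.0), 2))  # drifts toward the optimum at 3
```

Note that the loop only works because `fitness` exists: without a way to score "better" and "worse", the mutations are just a random walk - which is the point of the paragraphs that follow.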

Doing evolutionary design effectively requires us to have at least a rough idea of what is "better" and what is "worse", and the only meaningful context for that is a piece of knowledge that eluded even the original signatories of the Agile Manifesto.

Contrary to what we've been told, we should not judge our progress by how much "working software" has been delivered. The purpose of building bridges is not to build bridges, it is to allow us to get to the other side.

Software is a tool, and should be helping us to solve a problem. It's solving the problem that's the goal, and the only meaningful way to judge our progress against that is to test the software in that context.

If the goal of the software is to make it easier to find a hotel room when you're in a strange town, we should use each version of the software to try and find a hotel room in a strange town, and measure that against how we were doing it before. Is it easier? How much easier?

When we lose sight of the real goals of the software we're writing, we all too easily end up with software that's an end in itself. I've lost count of the times I've seen businesses waste millions of pounds failing to solve a problem with software.

Now, this shouldn't be news. I've been telling teams this for nearly two decades, and I learned it from books that were written years before then. Parts of the Agile community may have recently rediscovered the idea - and rebranded it, of course, as they seem to do with every old idea - but the notion that iterative processes should be goal-seeking if we want to converge on a workable solution is pretty fundamental.

Without clear goals, we're doomed to wander endlessly in a sandstorm without a map or a compass.

September 16, 2012

Learn TDD with Codemanship

Are Woolly Definitions Of "Success" At The Heart Of Software Development's Thrall To Untested Ideas?

In the ongoing debate about what works and what doesn't in software development, we need to be especially careful to define what we mean by "it worked".

In my Back To Basics paper, I made the point that teams need to have a clear, shared and testable understanding of what is to be achieved.

Without this, we're a ship on a course to who-knows-where, and I've observed all manner of ills stemming from this.

Firstly, when we don't know where we're supposed to be headed, steering becomes a fruitless exercise.

It also becomes nigh-on impossible to gauge progress in any meaningful way. It's like trying to score an archery contest with an invisible target.

To add to our worries, teams that lack clear goals have a tendency to eat themselves from the inside. We programmers will happily invent our own goals and pursue our own agendas in the absence of a clear vision of what we're all meant to be aiming for.

This can lead to excess internal conflict as team members vie to stamp their own vision on a product or project. Hence an HR system can turn into a project to implement an "Enterprise Service Bus" or to "adopt Agile".

Since nobody can articulate what the real goals are, any goal becomes more justifiable, and success becomes much easier to claim. I've met a lot of teams who rated their product or project as a "big success", much to the bemusement of the end users, project sponsors and other stakeholders, who can take a very different view.

There are times when we can display all the misplaced confidence and self-delusion of an X Factor contestant who genuinely seems to have no idea that they're singing out of tune and dancing like their Dad at a wedding.

Much of the wisdom we find on software development comes from people, and teams, who are basing their insights on a self-endowed sense of success. "We did X and we succeeded, therefore it is good to X" sort of thing.

Here's my beef with that: first off, it's bad science.

It's bad science for three reasons: one, a single data point doesn't make a trend; two, perhaps you have incorrectly attributed your success to X rather than to one of the myriad other factors in software development; and three, can we really be sure that you genuinely succeeded?

If I claim that rubbing frogspawn into your eyes cures blindness, we can test that by rubbing frogspawn into the eyes of blind people and then measuring the acuity of their eyesight afterwards.

If, on the other hand, I claim that rubbing frogspawn into your eyes is "a good thing to do", and that after I rubbed frogspawn into my eyes, I got "better" - well, how can we test that? What is "better"? Maybe I rubbed frogspawn into my eyes and my vocabulary improved.

My sense is that a worrying proportion of what we read and hear about "things that are good to do" in software development is based on little more than "how good (or how right) it felt" to do them. Who knows; maybe rubbing fresh frogspawn in your eyes feels great. But that has little bearing on its efficacy as a treatment.

Without clear goals, it's not easy to objectively determine if what we're doing is working, and this - I suspect - is the underlying reason why so much of what we know, or we think we know, about software development is so darned subjective.

Teams who've claimed to me that they're "winning" (perhaps because of all the tiger blood) have turned out to be so wide of the mark that, in reality, the exact opposite was true. These days, when I hear proclamations of great success, it's usually a precursor to the whole project getting canned.

The irony is that those few teams who knew exactly what they were aiming for often measure themselves more brutally against their goals, and are more pessimistic, despite in real terms being more "winning" than teams who were prematurely doing their victory lap.

This, I suspect, has also contributed to the dominance of subjective ideas in software development. Ideas backed up by objective successes seem to be expressed more tentatively and with more caveats than ideas backed up by little more than feelgood and tiger blood, which are expressed more confidently and in more absolute terms.

The naked ape in all of us seems to respond more favourably to people who present their ideas with confidence and a greater sense of authority. In reality, many of these ideas have never really been put to the test.

Once an idea's gained traction, there can be benefits within the software development community to being its originator or a perceived expert in it. Quickly, vested interests build up and the prospect of having their ideas thoroughly tested and potentially debunked becomes very unattractive. The more popular the idea, and the deeper the vested interests, the more resistance to testing it. We do not question whether a burning bush really could talk when we're in the middle of a fundraising drive for the church roof...

It's saddening to see, then, that in the typical lifecycle of an idea, publicising it often precedes testing it. More fool us, though. We probably need to be much more skeptical and demanding of hard evidence to back these ideas up.

Will that happen? I'd like to think it could, but the pessimist in me wonders if we'll always opt for the shiny-and-new and leave our skeptical hats at home when sexy new ideas - with sexy new acronyms - come along.

But a good start would be to make the edges of our definition of "success" crisper and less forgiving.