February 19, 2014

...Learn TDD with Codemanship

Programming Laws & Reality (& Why Most Teams Remain Flat Earthers)

An article on Dr. Dobb's by Capers Jones has been doing the rounds on That Twitter, all about whether the famous "programming laws" we hold dear stand up to scrutiny against real-world data.

The answer from Jones is: yes, we do know what we think we know. For the most part, these programming laws are backed up by the available data.

For example, Fred Brooks' law that adding programmers to a late project makes it later is mostly true, for teams above a certain size. Adding one good programmer to a team of two probably won't cause delays. Adding another programmer to a team of 50 probably will. Also, adding an inexperienced or less capable programmer to any team will probably slow that team down.
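Brooks attributed much of this to communication overhead: every pair of people on a team is a potential communication channel, so the number of channels grows quadratically, as n(n-1)/2. A quick illustrative sketch (the function name is mine, not Brooks'):

```python
def communication_paths(team_size: int) -> int:
    # Each pair of team members is a potential communication channel,
    # so the count is n * (n - 1) / 2 -- quadratic in team size.
    return team_size * (team_size - 1) // 2

for n in (2, 3, 50, 51):
    print(n, communication_paths(n))
```

Adding a third programmer to a pair opens up just 2 new channels (1 becomes 3), but adding a 51st programmer to a team of 50 opens up 50 new channels (1,225 becomes 1,275) - which is why the law mostly bites above a certain team size.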

This should come as no surprise. We've known this for decades. It's Software Development 101.

And yet, when a project is overrunning, what do the vast majority of managers do? They hire more programmers. Still. To this day.

Not only do they hire more programmers, but many insist on hiring cheap junior programmers so they can hire more of them. This tends to compound the problem: the schedule slips further, and they prescribe more of the same medicine.

It's a classic management mistake: the perceived solution is actually the cause of the problem, and the worse the problem gets, the more fuel gets thrown on the fire to try to put it out. It creates a reinforcing feedback loop that can spiral out of control. Hence you will find enormous teams of hundreds of developers barely achieving what a team of four could.

If Brooks' Law is so well known, why do so many managers continue to do the exact opposite?

The same goes for Jones' own law about software defect removal, which states that teams who are better at removing defects before testing tend to be more productive than teams who are worse at it, and for Peter Senge's law, which simply states that "faster is slower".

The jury's really not out on the relationship between quality, time and cost. As Jones reminds us:

"Empirical data from about 20,000 projects supports this law."

To wit, the way to go faster is to take more care over quality. Teams moan about "not having time" for defect prevention practices like developer testing or inspections, and yet there's a mountain of evidence suggesting that if they did more of these things, they'd actually get done quicker. Again, it's a vicious cycle: we don't have time, so we skimp on quality, which creates costly delays downstream, which eat up more of our time. Rinse and repeat.
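Jones quantifies this kind of quality work with his Defect Removal Efficiency (DRE) metric: the percentage of total defects found and removed before release, rather than escaping to production. A minimal sketch (function and parameter names are my own, not Jones'):

```python
def defect_removal_efficiency(found_before_release: int,
                              found_after_release: int) -> float:
    """Defect Removal Efficiency: the percentage of all known defects
    that were caught before release (reviews, inspections, testing)."""
    total = found_before_release + found_after_release
    if total == 0:
        return 100.0  # no known defects at all
    return 100.0 * found_before_release / total

# e.g. 90 defects caught in reviews and testing, 10 escaped to production:
print(defect_removal_efficiency(90, 10))  # 90.0
```

Jones' law, in these terms, is that teams with higher DRE tend to deliver faster overall, not slower - the time spent on inspections and developer testing is repaid downstream.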

It's another piece of Software Development 101. Being a software developer and not believing it is like being a doctor who doesn't believe in germs, or an astronomer who believes the Earth is flat.

And, yet again, the vast majority of teams are encouraged to do the exact opposite - sometimes even rewarded for doing it.

The evidence tells us that, when the schedule's slipping and we're up against it, the right thing to do is keep the team small and highly skilled - indeed, maybe even move some developers off the team - and focus more on quality.

That's the tragedy of our industry: higher quality, more reliable software is not just compatible with commercial realities, it can actually improve them.

One can't help but wonder why. What motivates teams and their managers to wilfully pursue what we might call "Flat Earth" strategies - strategies that are known to be likely to fail because the Earth is, in fact, round?

In considering how we might bring techniques for more reliable software into the mainstream, perhaps we need to devote more time to thinking about why the industry has ignored the facts for so long.

Posted on February 19, 2014