January 13, 2006


Holism vs. Reductionism

In previous posts, I've more than hinted at the idea that in complex systems, simple cause-and-effect mechanisms are impossible to find.

We're a funny bunch, people. We always seek to break things down and rationalise them, and this can lead to some bizarre interpretations and theories about why certain things happen. In days gone by, we might have concluded that our crops failed because we didn't make the appropriate sacrifices to the appropriate gods. Or that a village member is behaving strangely because they're possessed by a demon. So we kill a goat, or drive nails into the villager's brain, to fix the problem. Sometimes it works, and oftentimes it doesn't.

We seem to be quite easily fooled by coincidences. Mediums and astrologers rely on this to make their living, for example. It's the handful of times they get it right that somehow sticks in the mind, and we conveniently overlook the majority of times when they got it wrong.

By the by: we seem to be programmed to seek simple causes for complex problems. And so it is that when a development team gets good results, we look for simple reasons why this is so. Much of process improvement relies on our ability to break complexity down and somehow separate the good bits from the bad bits. We throw away the bad bits, and keep the good bits. This works if you're fixing a watch. You take it apart, find the faulty component(s), and replace them with good components.

It doesn't work so well with complex organisms, like businesses, economies, or software development teams. In complex systems, knowing which bits are "good" and which bits are "bad" is very difficult. We can know that the system is "good" as a whole, but assigning responsibility to any one component of that system is tricky and error-prone.

Evolutionary engineering methods have to work on the system as a whole, in the context where it exists. We cannot go into a meeting room, draw diagrams of how our complex system works and then say "hey, let's take that bit out and replace it with this" and expect to get the results we predicted. Capability in Agile SPI, for example, is a property of the whole thing - people, practices, tools, environment, culture, working hours, proximity of good pubs, and so on.

Geneticists who tinker with DNA soon discover that seemingly redundant or even harmful DNA actually plays a more complex role in the development of the whole organism than they first thought. They can't just go barging in saying "we'll take this bit out and stick that bit in" without getting some pretty unpredictable results. A successful genetic make-up is a holistic entity. It succeeds as a whole, and second-guessing which bits are not needed or cause harm is something of a lottery. What scares some people about genetic engineering is this inherent uncertainty in the outcomes.

Process engineers should be equally cautious about tinkering with development teams. Teams must evolve as a whole, since they succeed or fail as a whole. The mistake we tend to make is in thinking that we can safely remove redundant or seemingly harmful memetic material and this will automatically lead to improvements in team fitness. In reality, the results are unpredictable. Sometimes performance improves, sometimes it doesn't. And nobody can know for sure which change made the difference.

For capability to evolve, things have to change. There must be some process of mutation to move the team forward. But to all intents and purposes, the effects of these changes are random and can only be observed by measuring the capability as a whole. In Agile SPI, we use metrics to help us select the capabilities that improve performance, raising the bar a little in each SPI cycle.
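That mutate-measure-select cycle can be sketched in code. This is purely an illustration, not any real Agile SPI tooling: the names `mutate_process`, `measure_capability` and `spi_cycle` are hypothetical, and the "fitness function" is a toy stand-in for whatever whole-team metrics you actually collect. The key point it demonstrates is that selection happens on the measured capability of the whole system, never on a judgement about an individual part:

```python
import random

def measure_capability(process):
    """Fitness of the WHOLE system. A toy score here; in practice this
    would be real whole-team metrics gathered over an SPI cycle."""
    return sum(process.values())

def mutate_process(process):
    """Randomly tweak one aspect of the team's process (a 'memetic'
    mutation). We don't pretend to know which tweak will help."""
    mutated = dict(process)
    aspect = random.choice(list(mutated))
    mutated[aspect] += random.uniform(-0.5, 0.5)
    return mutated

def spi_cycle(process, iterations=50):
    """Keep a mutation only if whole-system capability improves,
    raising the bar a little each time it does."""
    bar = measure_capability(process)
    for _ in range(iterations):
        candidate = mutate_process(process)
        score = measure_capability(candidate)
        if score > bar:  # select on the whole, not on individual parts
            process, bar = candidate, score
    return process, bar

# Hypothetical example: weights for a few practices in the team's process.
team = {"tdd": 1.0, "pairing": 1.0, "ci": 1.0}
improved, bar = spi_cycle(team)
```

Note that the loop never asks *why* a mutation helped; it only observes that the bar moved. That's the holistic stance in miniature.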