May 29, 2007

Governance - Build On Existing Code or Start Afresh?

The Agile Governance game is designed to highlight the probabilistic nature of projects and portfolios of projects. It strips away the complexity of a project and boils it down to a simple throw of the dice - representing that which is beyond the manager's control (e.g., the actual writing of code). The key aim of the game is to force us to find strategies that tend towards success in the face of this uncertainty.

If we can't control the outcome of a dice throw, how can we "win" the Agile Governance game? Once we have seen through the illusion of control, what strategies are left for us as managers?



Those who've played the game will know that prioritising our plan is critically important. To have a better chance of a higher score, we should not just start with the highest-value moves, as many planning texts suggest. We should actually start with the moves that have the best balance of risk and reward. With two six-sided dice, requiring our team to throw a 12 to land on a square containing 9 points has a worse balance of risk and reward than requiring a 7 to land on a square containing 6 points, since a 7 is exactly six times as likely as a 12 (6 of the 36 possible rolls total 7; only one totals 12). In an XP project, a user story worth 9 value points at a cost of 6 complexity points should be scheduled after a story worth 6 value points that costs only 3 complexity points, for exactly the same reasons.
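To make the arithmetic concrete, here's a quick sketch in Python (treating each move as a single throw is a simplification of the game, but the numbers are the ones from the example above):

```python
from itertools import product

def p_total(total, sides=6, dice=2):
    """Probability of rolling a given total with the given dice."""
    rolls = list(product(range(1, sides + 1), repeat=dice))
    return sum(1 for r in rolls if sum(r) == total) / len(rolls)

# The two moves from the example: (points on the square, total required).
moves = {"throw 12 for 9 points": (9, 12), "throw 7 for 6 points": (6, 7)}

for name, (points, total) in moves.items():
    p = p_total(total)
    print(f"{name}: P = {p:.3f}, expected score = {points * p:.2f}")

# throw 12 for 9 points: P = 0.028, expected score = 0.25
# throw 7 for 6 points: P = 0.167, expected score = 1.00
```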

So, as project managers, we can choose to schedule the low-hanging fruit first. This will dramatically improve our chances of earning more value with the same time and resources.



Another choice we can make is which dice to use. Two six-sided dice might give us the best odds of throwing a 7, but two four-sided dice give us better odds of throwing a 5. In project terms, this might mean choosing the right team. Some software developers can reportedly multiply your odds of successful delivery many times over - a factor of ten or more has been suggested - and, anecdotally, there seems to be plenty of evidence to support this. In the Agile Governance game, teams soon learn strategies for getting their hands on the best dice for each part of their plan.
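Again, the arithmetic is easy to check with a brute-force count of the possible rolls:

```python
from itertools import product

def p_total(total, sides, dice=2):
    rolls = list(product(range(1, sides + 1), repeat=dice))
    return sum(1 for r in rolls if sum(r) == total) / len(rolls)

print(f"P(5) with two d6: {p_total(5, 6):.3f}")  # 0.111 (4 of 36 rolls)
print(f"P(5) with two d4: {p_total(5, 4):.3f}")  # 0.250 (4 of 16 rolls)
```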

In both of these cases - prioritising by balance of risk vs. reward and choosing the best dice - there are clear foundations in probability that explain why these strategies work. Less clear, in terms of the game itself, is why iterating also seems to lead to higher scores than "waterfall" planning. The game board has not changed, so why would a plan created and executed 20 dice throws at a time tend to turn out better than a plan created upfront and executed using all 100 throws?

Certainly, teams agree that after they've played one round of the game in the waterfall style, they wish they'd had an opportunity to change their plan at points throughout the game. This fits in with my theory that the real benefit of iterating is that our first answer is almost always the wrong answer, and that - even with zero change in the underlying problem itself - we require multiple passes to come up with better answers. The issue here is one of complexity. There are effectively so many possible routes around the game board that no team could possibly hope to find the optimum plan with limited time for analysis. Iterating gives them:

a. Feedback that tells them their route might not be the best, and
b. Opportunities to select better routes if they spot them

I also wonder if a plan that would be optimal for 100 dice throws might not be so optimal for 80, and that - as the game unfolds - it makes sense to adapt the plan anyway. That theory still requires some work, though.

So, at a project level, we have three strategies that should improve our chances of delivering more value, even though we have no control over individual outcomes. I can't put precise numbers on it - maybe a simulation would help - but I suspect that these strategies, used in combination, could have quite a profound impact on our overall chances of a successful project. It ought to be criminal not to apply them!
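As a first stab at such a simulation, here's a rough Monte Carlo sketch in Python. The board, the scores and the required totals are all invented for illustration. It compares a hurried plan followed blindly, the same plan re-prioritised at 20-throw checkpoints, and a plan prioritised by risk vs. reward from the outset (dice choice could be layered in the same way):

```python
import random
from itertools import product

def expected_value(points, target, sides=6):
    """Points on offer times the probability of throwing the target total."""
    rolls = list(product(range(1, sides + 1), repeat=2))
    return points * sum(1 for r in rolls if sum(r) == target) / len(rolls)

# A made-up board: (points on the square, dice total needed to claim them).
BOARD = [(9, 12), (6, 7), (8, 11), (5, 6), (7, 10), (4, 8)]

def play(prioritise_upfront=False, replan_every=None, throws=60, trials=20_000):
    """Average score over many games; each throw attempts the plan's top move."""
    score = 0.0
    for _ in range(trials):
        plan = random.sample(BOARD, len(BOARD))  # a hurried "first answer"
        if prioritise_upfront:
            plan.sort(key=lambda m: expected_value(*m), reverse=True)
        for t in range(throws):
            if not plan:
                break
            # Iterating: at each checkpoint, re-prioritise what's left.
            if replan_every and t and t % replan_every == 0:
                plan.sort(key=lambda m: expected_value(*m), reverse=True)
            points, target = plan[0]
            if random.randint(1, 6) + random.randint(1, 6) == target:
                score += points
                plan.pop(0)
    return score / trials

print(f"random plan, followed blindly:   {play():.1f}")
print(f"random plan, replanned every 20: {play(replan_every=20):.1f}")
print(f"risk/reward-prioritised plan:    {play(prioritise_upfront=True):.1f}")
```

On a typical run the re-prioritised plan beats the blindly-followed one, and the upfront-prioritised plan does best of all - which at least puts rough numbers behind the intuition.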

Some managers are not solely concerned with the outcome of one project. Some managers are responsible for the governance of a whole portfolio (of projects, or programmes, or systems, or whatever). For them, the lack of control is even more pronounced. In reality, most managers in this situation have only one kind of decision they can make - which projects to back. Their visibility of those projects is very limited, because they're not involved on a day-to-day basis, so on what information should they base their decisions?

Firstly, it makes sense to back a winner. Projects that have consistently delivered value in the past are more likely to deliver in the future. With fixed resources to distribute among a portfolio of projects, much of the skill of the informed programme manager lies in splitting those resources according to the likelihood of a return, based on previous performance.
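In its crudest form, that might mean splitting the budget in proportion to each project's past rate of return. A minimal sketch, with invented numbers:

```python
# Hypothetical portfolio: value points delivered per £1k invested to date.
track_record = {"Project A": 4.0, "Project B": 1.5, "Project C": 0.25}

budget = 600  # £k to distribute this quarter

total = sum(track_record.values())
allocation = {name: budget * rate / total for name, rate in track_record.items()}

for name, amount in allocation.items():
    print(f"{name}: £{amount:.0f}k")
# Project A: £417k, Project B: £157k, Project C: £26k
```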

Projects that are failing need to be starved of resources, which runs counter to what many might tell you. In government IT projects, for example, the further behind schedule and the more over budget a project gets, the more money gets thrown at it to "fix the problems". Almost invariably, though, the problems get worse, the schedule slips further, and the costs spiral out of control. Projects that are succeeding should be given even more support. If you could bet on a horse part-way through a race, which one would you back? If a horse has been in the lead for 80% of the course, would you spread your bets evenly among all the runners? No. Neither would I. Would you bet a penny of your money on the horse bringing up the rear? Certainly not! That would be crazy.

But we do it in IT all the time. Indeed, sometimes we starve the front-runner of resources to feed a failing project. How many critical NHS IT projects are indefinitely on hold because all the money's gone to the BIG WHITE ELEPHANT that is Connecting for Health? And those of us in the private sector may smugly point and snigger, but I've seen far worse funding decisions in the City, believe me.

The next question is a lot more subtle, and this is the aspect of governance that I'm working on now.

Imagine this scenario:

A software system has been live for 5 years and is supported by a team of 6 developers who fix bugs and make minor changes. There are 500,000 lines of code, and the cost of a new line of code after 5 years is roughly 5 times higher than it was in the first year. This is largely due to the complexity of the code, the dependencies between modules (causing small changes to "ripple" out to other modules), and the time it takes to rebuild, re-test and redeploy components.
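To put a rough number on that trend: a five-fold increase over five years implies the cost of change compounding at about 38% a year, assuming the growth has been steady (which is itself an assumption):

```python
# A line of code costing 5x more after 5 years implies this
# compound annual growth in the cost of change:
growth = 5 ** (1 / 5)  # ~1.38, i.e. roughly 38% a year
print(f"{growth:.2f}x per year")

# Projected forward on the same trend (a big assumption):
for year in range(6, 9):
    print(f"year {year}: {growth ** year:.1f}x the original cost")
# year 6: 6.9x, year 7: 9.5x, year 8: 13.1x
```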

Then along come marketing, asking for a whole raft of new features and changes to existing features. The scope of these requests is roughly as big as the existing system - a very big deal.

The project manager plans on the basis that it took the team a year to build a system of similar complexity, and that the new project should therefore take about as long.

"Ah, but hang on a moment", says the architect (yes, they have an architect - god help them), "That was our productivity when we didn't have all this legacy code to contend with. Our productivitiy now is about 20% of what it used to be. It might be quicker if we started with a clean slate and rebuilt the whole system - with all the new features - from scratch."

But would it be quicker? Again, with a clean slate, initial productivity would be high, but surely, as the project progressed - subject to the same law of entropy - productivity would drop and we'd be back where we started?

It's not at all a simple decision. How much like the old system would the new system need to be? You wouldn't be replacing 100% of it if you started from scratch, because marketing have asked for lots of changes to the existing features. How much would be the same? How much would be different or new? And how much of that would you get in a fixed window of time? And is there anything the team could do to make the cost of change curve less steep?
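One way to start reasoning about the trade-off is with a crude model. Suppose - and every number here is invented for illustration - that first-year productivity was 1 unit a month, so the existing system represents about 12 units of work; the legacy team now works at 20% of that, while a greenfield rebuild would start at full speed but decay along the same curve the legacy code followed, down to 20% over five years:

```python
def months_to_deliver(scope, p0, monthly_decay, cap=600):
    """Months needed when productivity starts at p0 units/month and
    decays by a fixed factor each month; None if never delivered."""
    done, months, p = 0.0, 0, p0
    while done < scope - 1e-9:
        if months >= cap:
            return None
        done += p
        p *= monthly_decay
        months += 1
    return months

# From 100% down to 20% over 5 years => a monthly decay factor of ~0.974.
DECAY = 0.2 ** (1 / 60)

new_scope = 12  # the new features: roughly as big as the old system
for reuse in (0.25, 0.5, 0.75):  # fraction of the old system rebuilt as-is
    rebuild_scope = new_scope + reuse * 12
    # Assume the legacy team's productivity holds steady at 20%.
    extend = months_to_deliver(new_scope, p0=0.2, monthly_decay=1.0)
    rebuild = months_to_deliver(rebuild_scope, p0=1.0, monthly_decay=DECAY)
    print(f"rebuilding {reuse:.0%} of the old system: extend={extend}m, rebuild={rebuild}m")

# rebuilding 25% of the old system: extend=60m, rebuild=19m
# rebuilding 50% of the old system: extend=60m, rebuild=25m
# rebuilding 75% of the old system: extend=60m, rebuild=31m
```

Under these made-up numbers the rebuild wins even when three-quarters of the old behaviour has to be reproduced - but the answer can flip if the new code rots faster, if more of the old system must be preserved, or if the legacy team can make its own cost of change curve less steep. Which is precisely why it's not a simple decision.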

More on this soon.
Posted on May 29, 2007