February 20, 2019

Learn TDD with Codemanship

A Tale of Two Agiles

There are few of us left who rode the first wave of what we now call "Agile Software Development" who don't think something has gone very wrong with "Agile".

At the tail-end of the 1990s, it was all about small self-organising teams working closely with their customers to rapidly evolve working solutions and deliver value sooner and more sustainably with technical discipline. Extreme Programming, for example, represented a balance of forces - people, process and technology - all of which had to be addressed to produce valuable working software that met customers' needs today, tomorrow, and on the N+1th day.

Since the signing of the Agile Manifesto at Snowbird, Utah in 2001, that balance has been slowly drifting further and further off-kilter, towards Process. Today, Agile is almost unrecognisable from the small, adaptive set of values and principles celebrated in the manifesto.

Big Process once again dominates. Values and principles be damned. The micromanagers who made teams' lives hell in the 90s are back with new job titles like "Head of Agile", "Head of Delivery" and other heads of things nobody envisaged you could possibly need a head of in 2001.

"Product Managers" act as the new chasm between developers and customers, who are as far removed from each other as ever they were.

Small teams are now organised into major programmes of change and "transformations" under the cosh of "scaled Agile" processes that attempt to achieve conformity where none is either desirable or, frankly, possible. Among the most experienced dev practitioners, these processes have little credibility. By what magic have the scaled Agile wizards solved the problem of large-scale software development? The answer's simple: they haven't. The problem they have solved is how to make Big IT money from something that's supposed to be small.

And the technical side of it is all but forgotten. I recently reviewed more than 100 CVs for an "Agile leadership" role a client was advertising. More than 70% of candidates had never written code. Most of the rest hadn't written code in more than a decade. Pioneers of early lightweight methods were always clear about this: decisions should be made by people with the necessary expertise and experience to make those decisions.

Meanwhile, armies of "Agile Coaches" - who are also mostly without technical experience - guide development teams in the prescribed process. Agile "maturity models" abound. It's big business now.

And for every 10 Agile coaches, you might find one developer guiding teams in the technical practices. Teams get very little help in things like unit testing, TDD, refactoring, CI/CD, architecture & design, etc. I think, from my own experiences working in this space, this is because most managers are from non-technical backgrounds and find things like Scrum, Kanban and SAFe accessible. They see the value in them. They don't understand code craft, and don't understand the value in it. So they don't invest anywhere near as much in it. Our people-process-technology tripod tries to walk with one leg much more developed than the other two, and tips over.

The upshot of all this is that we're back where we started. Oversized teams (and teams of teams), micromanaged in deep command-and-control hierarchies, measured by arcane heavyweight processes. All the emphasis is back on "Did we do it the right way?", and "Did we do the right thing?" is as immaterial as it was when the Unified Process ruled the waves.

None of this would be a problem if organisations also invested seriously in the other two legs of the Agile tripod, especially their People.

None of this would be a problem if every non-technical Agile manager or Agile coach was balanced with an equivalent technical manager or coach.

None of this would be a problem if we could get back to the original vision of small, self-organising teams working directly with customers to solve problems together.

I'm not saying everyone involved has to be a software developer. I'm saying organisations need to restore the balance between those forces to succeed at "delivering sustainable value" through software (to borrow the vernacular).

Whatever it is successful dev teams are doing today, I'm loath to hang the label of "Agile" around its neck. In 2019, that might be considered an insult.








February 5, 2019

Learn TDD with Codemanship

Evolutionary Design - What Most Dev Teams Get Wrong

One of the concepts a lot of software development teams struggle with is evolutionary design. It's the foundation of Agile Software Development, but also something many teams attempting to be more agile get wrong.

Evolution is an iterative problem solving algorithm. Each iteration creates a product that users can test and give feedback on. This feedback drives changes to improve the design in the next iteration. It may require additional features. It may require refinements to existing features.

To illustrate, consider the evolution of the guitar.



The simplest design for a guitar could be a suitably straight stick of wood with a piece of string fastened taut at both ends, with some kind of container - like a tin can - to amplify the sound it makes when we pluck the string.

That might be our first iteration of a guitar. Wouldn't take long to knock up, and we could probably get a tune out of it.

Anyone who's tried playing that kind of design will probably have struggled with fretting the correct notes, so maybe in the next iteration we add dots to the stick to indicate where key notes should be fretted.

Perhaps in the next iteration we take strips of metal and embed them in our stick to make fretting even easier and more accurate.

In the next iteration, we might replace the stick with a plank and add more strings, tuned at different musical intervals so we can play chords.

We might find that, with extensive use, the strings lose their tautness and our guitar goes out of tune, so we add a way to adjust the tension with "tuners" at the far end of the plank. Also, occasionally, strings break and we need to be able to replace them easily, so we make it so that replacement strings can be fastened to a "bridge" near the can.

Up close, our guitar sounds okay. But in a larger venue, it's very difficult to hear the sound amplified by the tin can. So we replace that with a larger resonating chamber: a cigar box, perhaps.

Travelling extensively with our cigar-box guitar, we realise that it's not a very robust design. So maybe we can recreate the basic design concepts in a better-crafted wooden neck and body, with properly engineered hardware for the bridge and the tuners. And perhaps it's time to move from using strings to something that will last longer and stay in tune better, like thin metal wires.

News of our guitar has spread, and we find ourselves playing much larger venues where - even with the larger resonating chamber - it's hard to be heard over the rest of the band. For a while we use a well-placed microphone to amplify the sound, but we find that restricts our movement and prevents us from doing all the cool rock poses we've been inventing. So we create "pickups" that generate an electrical signal when the metal strings move within their magnetic field at the frequency of the fretted note. That signal is then sent to an amplifier that can go as loud as we need.

What we find, though, is that the resonance of our guitar generates a lot of electronic feedback. We realise that we don't actually need a resonating chamber any more, since the means by which we're now generating musical tone is no longer acoustic. We could use a solid body instead.

The pickups are still a bit noisy, though. And the strings still go out of tune over an hour or more of playing. So we develop noiseless pickups, and invent a bridge that detects the tuning and autocorrects the tension in the strings continuously, so the guitar's always in tune.

Then we add some cool LED lights, because rock and roll.

And so on.

The evolution of the guitar neatly illustrates the concept of iterative design. We start with the simplest solution possible, play it, and see how it can be improved in the next iteration of the design. Each iteration may add a feature (e.g., add more strings), or refine an existing feature (e.g., make the neck wider) to solve a problem that the previous iteration raised.

Very importantly, though, every iteration is a working solution to the headline problem. Every iteration of the guitar was a working guitar. You could get a tune out of it.

The mistake many teams make is, instead of starting with the simplest solution possible and then iteratively improving on it to solve problems, they start with a concept for a complex and complete solution and incrementally work their way through its long feature list.

Instead of starting with a stick, a string and a tin can, they set out to build a Framus Stormbender high-end custom guitar with all the bells and whistles like locking tuners, an Evertune bridge, noiseless Fishman Fluence pickups and a fretboard that lights up (because rock and roll).

This is not iterative, evolutionary design. It's incremental construction of a completed design. The question then is: do we really need the locking tuners? Do we really need the Evertune bridge? Do we really need the Fishman Fluence pickups? Because the Stormbender is a very high-spec guitar, and that makes it very expensive compared to, say, a perfectly usable standard Fender Stratocaster.

The emphasis in evolutionary design must be on solving problems. We're iterating towards the right solution, improving with each pass until the design is good enough for our needs. Each iteration is therefore defined by a goal (ideally one per iteration), not by a list of features. Make it so you can play a tune. Make it so it's easy to fret the right notes. Make it so you can adjust the tuning. Make it so you can play chords. Make it so you can hear it in a large room. Make it so it doesn't fall to pieces in transit. Make it so it can be heard above the drums. Make it so there's less feedback. Make it so it's always in tune. And so on and so on.

Of course, when Framus construct a Stormbender, they don't start with a stick and a piece of string. They incrementally construct it, because they already know what the finished design is.

And when they designed the Stormbender, they didn't start with a stick and a piece of string, either. They started with the benefit of hundreds of years of guitar design progress and many problems pre-solved. Likewise, I don't start every software product with "First, I'm going to need an AND gate" and work my way up from there. Many of the problems have already been solved. When Google set out to create their own operating system, they didn't start by creating a simple BASIC interpreter. Many of the problems had already been solved. They started where others left off and solved new problems for the mobile age.

My point is that the process of solving those problems was evolutionary. Computing didn't start with Windows 10. It started with basic logical operations on 1s and 0s. Likewise, when we're faced with problems for which there are no pre-made solutions, we start with the simplest solution we can think of and iteratively improve on that until it's good enough for our needs.





December 8, 2018

Learn TDD with Codemanship

True Agile Requirements: Get It Wrong Quickly, Then Iterate

I'm going to be arguing in this post that our emphasis in the software design process tends to be wrong. To explain this, I'm going to show you some code. Bear with me.
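Something along these lines - a minimal sketch in Python, using Heron's method, with illustrative names:

    def square_root(n):
        # Start with a very rough guess: half the input. (Assumes n > 0.)
        guess = n / 2
        # Feedback loop: each pass makes the guess less wrong,
        # until it converges on an answer that's good enough.
        while abs(guess * guess - n) > 0.0000001 * n:
            guess = (guess + n / guess) / 2
        return guess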



This is a simple algorithm for calculating square roots. It's iterative. It starts with a very rough guess for the square root - half the input - and then refines that guess over multiple feedback cycles, getting it progressively less wrong with each pass, until it converges on a solution.

I use this algorithm when I demonstrate mutation testing, deliberately introducing errors to check if our test suite catches them. When I introduce an error into the line that makes the initial guess:
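    guess = n / 2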



e.g., changing it to:
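    guess = n / 20  # a hypothetical mutation - ten times too small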



The tests still pass. In fact, I can change the initial guess wildly:
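    guess = n * 1000000  # two million times the original guess of n / 2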



And the tests still pass. They take a little longer to run is all. This is because, even with an initial guess 2 million times bigger, it just requires an extra few iterations to converge on the right answer.

What I take from this is that, in an iterative problem solving process, the feedback loops can matter far more than the initial input. It's the iterations that solve the problem.

When I see teams, including the majority of agile teams, focusing on the initial inputs and not on the feedback cycles, I can't help feeling they're focusing on the wrong thing. I believe we could actually start out with a set of requirements that are way off the mark, but with rapid iterating of the design, arrive at a workable solution anyway. It would maybe take an extra couple of iterations.

For me, the more effective requirements discipline is testable goals + rapid iterations. You could start with a design for a word processor, but if your goal is to save on heating bills, and you rapidly iterate the design based on customer feedback from real-world testing (i.e., "Nice spellchecker, but our gas bill isn't going down!"), you'll end up with a workable smart meter.

This is why I so firmly believe that the key to giving customers what they need is to focus on the factors that affect the speed of iterating and how long we can sustain the pace of evolution. The cost of changing software is a big factor in that. To me, iterating is the key requirements discipline, and therefore the cost of changing software is a requirements issue.

Time spent trying to get the spec right, for me, is time wasted. I'd rather get it wrong quickly and start iterating.







December 2, 2018

Learn TDD with Codemanship

Architecture: The Belated Return of Big Picture Thinking

A question that's been stalking me is "When does architecture happen in TDD?"

I see a lot of code (a LOT of code) and if there's a trend I've noticed in recent years it's an increasing lack of - what's the word I'm looking for? - rationality in software designs as they grow.

When I watch dev teams produce working software (well, the ones who do produce software that works, at least), I find myself focusing more and more on when the design decisions get made.

In TDD, we can make design decisions during four distinct phases of the red-green-refactor cycle:

1. Planning - decisions we make before we write any code (e.g., a rough sequence diagram that realises a customer test scenario)

2. Specifying - decisions we make while we're writing a failing test (e.g., calling a function to do what you need done for the test, and then declaring it in the solution code)

3. Implementing - decisions we make when we're writing the code to pass the test (e.g., using a loop to search through a list)

4. Refactoring - decisions we make after we've passed the test according to our set of organising principles (e.g., consolidating duplicate code into a reusable method)

If you're a fan of Continuous Delivery like me, then a central goal of the way you write software is that it should be (almost) always shippable. Since 2 and 3 imply not-working code, that suggests we'd spend as little time as possible thinking about design while we're specifying and implementing. While the tests are green (1 and 4), we can consider design at our leisure.

I can break down refactoring even further, into:

4a. Thinking about refactoring

4b. Performing refactorings

Again, if your goal is always-shippable code, you'd spend as little time as possible executing each refactoring.

Put more bluntly, we should be putting the least thought into design while we're editing code.

(In my training workshops, I talk about Little Red Riding Hood and the advice her mother gave her to stay on the path and not wander off into the deep dark forest, where dangers like Big Bad Wolves lurk. Think of working code as the path, and not-working code as the deep dark forest. I encourage developers to always keep at least one foot on the path. When they step off to edit code, they need to step straight back on as quickly as possible.)

Personally - and I've roughly measured this - I make about two-thirds of design decisions during refactoring. That is, roughly 60-70% of the "things" in my code - classes, methods, fields, variables, interfaces etc - appear during refactoring:

* Extracting methods, constants and such to more clearly document what code does

* Extracting methods and classes to consolidate duplicate code

* Extracting classes to eliminate Primitive Obsession (e.g., IF statements that hinge on what is obviously an object identity represented by a literal value - see the sketch after this list)

* Extracting and moving methods to eliminate Feature Envy in blocks of code and expressions

* Extracting methods and classes to split up units of code that have > 1 reason to change

* Extracting methods to decompose complex conditionals

* Extracting client-specific interfaces

* Introducing parameters to make dependencies swappable

And so on and so on.
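To make one of these concrete, here's a minimal before-and-after sketch of the Primitive Obsession refactoring - Python, with hypothetical names:

    # Before: an IF statement hinging on an object identity
    # that's hiding inside a string literal.
    def delivery_cost(order):
        if order.customer_type == "PRIME":
            return 0.0
        return 4.99

    # After: extracting a class turns the hidden concept into
    # an explicit object that can answer for itself.
    class Customer:
        def delivery_cost(self):
            return 4.99

    class PrimeCustomer(Customer):
        def delivery_cost(self):
            return 0.0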

By this process, my code tends to grow and divide like cells with each new test. A complex order emerges from simple organising principles about readability, complexity, duplication and dependencies being applied iteratively over and over again. (This is perfectly illustrated in Joshua Kerievsky's Refactoring to Patterns.)

I think of red-green-refactor as the inner loop of software architecture. And lots of developers do this. (Although, let's be honest, too many devs skimp on the refactoring.)

But there's architecture at higher levels of code organisation, too: components, services, systems, systems of systems. And they, too, have their organising principles and patterns, and need their outer feedback loops.

This is where I see a lot of teams falling short. Too little attention is paid to the emerging bigger picture. Few teams, for example, routinely visualise their components and the dependencies between them. Few teams regularly collaborate with other teams on managing the overall architecture. Few devs have a clear perspective on where their work fits in the grand scheme of things.

Buildings need carpentry and plumbing. Roads need tarmacking. Sewers need digging. Power lines need routing.

But towns need planning. Someone needs to keep an eye on how the buildings and the roads and the sewers and the power lines fit together into a coherent whole that serves the people who live and work there.

Now, I come from a Big Architecture™ background. And, for all the badness that we wrought in the pre-XP days, one upside is that I'm a bit more Big Picture-aware than a lot of younger developers seem to be these days.

After focusing almost exclusively on the inner loop of software architecture for the last decade, starting in 2019 I'm going to be trying to help teams build a bit of Big Picture awareness and bring more emphasis on the outer feedback loops and associated principles, patterns and techniques.

The goal here is not to bring back the bad old days, or to resurrect the role of the Big Architect. And it's definitely not to try to reanimate the corpse of Big Design Up-Front.

This is simply about nurturing some Big Picture awareness among developers and hopefully reincorporating the outer feedback loops into today's methodologies, which we misguidedly threw out with the bathwater during the Agile Purges.

And, yes, there may even be a bit of UML. But just enough, mind you.





August 6, 2018

Learn TDD with Codemanship

Agile Baggage

In the late 1940s, a genuine mystery gripped the world as it rebuilt after WWII. Thousands of eyewitnesses - including pilots, police officers, astronomers, and other credible observers - reported seeing flying objects that had performance characteristics far beyond any known natural or artificial phenomenon.

These "flying saucers" - as they became popularly known - were the subject of intense study by military agencies in the US, the UK and many other countries. Very quickly, the extraterrestrial hypothesis - that these objects were spacecraft from another world - caught the public's imagination, and "flying saucer" became synonymous with Little Green Men.

In an attempt to outrun that pop culture baggage, serious studies of these objects adopted the less sensational term "Unidentified Flying Object". But that, too, soon became shorthand for "alien spacecraft". These days, you can't be taken seriously if you study UFOs, because it lumps you in with some very fanciful notions, and some - how shall we say? - rather colourful characters. Scientists don't study UFOs any more. It's not good for the career.

These days, scientific studies of strange lights in the sky - like the Ministry of Defence's Project Condign - use the term Unidentified Aerial Phenomena (UAP) in an attempt to outrun the cultural baggage of "UFOs".

The fact remains, incontrovertibly, that every year thousands of witnesses see things in the sky that conform to no known physical phenomena, and we're no closer to understanding what it is they're seeing after 70 years of study. The most recent scientific studies, in the last 3 decades, all conclude that a portion of reported "UAPs" are genuine unknowns, that they are of real defence significance, and that they are worthy of further scientific study. But well-funded studies never seem to materialise, because of the connotation that UFOs = Little Green Men.

The well has been poisoned by people who claim to know the truth about what these objects are, and they'll happily reveal all in their latest book or DVD - just £19.95 from all good stores (buy today and get a free Alien Grey lunch box!). If these people would just 'fess up that, in reality, they don't know what they are, either - or, certainly, they can't prove their theories - the scientific community could get back to trying to find out, like they attempted to in the late 1940s and early 1950s.

Agile Software Development ("agile" for short) is also now dragging a great weight of cultural baggage behind it, much of it generated by a legion of people also out to make a fast buck by claiming to know the "truth" about what makes businesses successful with technology.

Say "agile" today, and most people think you're talking about Scrum (and its scaled variations). The landscape is very different to 2001, when the term was coined at a ski resort in Utah. Today, there are about 20,000 agile coaches in the UK alone. Two thirds of them come from non-technical backgrounds. Like the laypeople who became "UFO researchers", many agile coaches apply a veneer of pseudoscience to what is - in essence - a technical persuit.

The result is an appearance of agility that often lacks the underlying technical discipline to make it work. Things like unit tests, continuous integration, design principles, refactoring: they're every bit as important as user stories and stand-up meetings and burndown charts.

Many of us saw it coming years ago. Call it "frAgile", "Cargo Cult agile", or "WAgile" (Waterfall-Agile) - it was on the cards as soon as we realised Agile Software Development was being hijacked by management consultants.

Post-agilism was an early response: an attempt to get back to "doing what works". Software Craftsmanship was a more defined reaction, reaffirming the need for technical discipline if we're to be genuinely responsive to change. But these, too, accrued their baggage. Software craft today is more of a cult of personality, dominated by a handful of the most vocal proponents of what has become quite a narrow interpretation of the technical disciplines of writing software. Post-agilism devolved into a pseudo-philosophical talking shop, never quite getting down to the practical detail. Their wells, too, have been poisoned.

But teams are still delivering software, and some teams are more successfully delivering software than others. Just as with UFOs, beneath the hype, there's a real phenomenon to be understood. It ain't Scrum and it ain't Lean and it certainly ain't SAFe. But there's undeniably something that's worthy of further study. Agile has real underlying insights to offer - not necessarily the ones written on the Manifesto website, though.

But, to outrun the cultural baggage, what shall we call it now?




July 2, 2018

Learn TDD with Codemanship

Level 4 Agile Maturity

I recently bought new carpets for my home, and the process of getting a quote was very interesting. First, I booked an appointment online for someone to come round and measure up. This appointment took about an hour, and much of that time was spent entering measurements into a software application that created a 2D model of the rooms.

Then I visited a local-ish store - this was a big national chain - and discussed choices and options and prices. This took about an hour and a half, most of which was spent with the sales advisor reading the measurements off a print-out of the original data set and typing them into a sales application to generate a quote.

There were only 3 sales people on the shop floor, and it struck me that all this time spent re-entering data that someone had already entered into a software application was time not spent serving customers. How many sales, I wondered, might be lost because there were no sales people free to serve? We discussed this, and the sales advisor agreed that this system very probably cost sales - and lots of them. (Only the previous week I had visited the local, local shop for this chain, and walked out because nobody was free to serve me.)

With more time and research, we might have been able to put a rough figure on potential sales lost during this data re-entering activity for the entire chain (400 stores).

As a software developer, this problem struck me immediately. It had never really occurred to the sales advisor before, he told me. We probably all have stories like this. I can think of many times during my 25-year career where I've noticed a problem that a piece of software might be able to solve. We tend to have that problem-solving mindset. We just can't help ourselves.

And this all reminded me of a revelation I had maybe 16 years ago, working on a dev team who had temporarily lost its project manager and requirements analyst, and had nobody telling us what to build. So we went to the business and asked "How can we help?"

It turned out there was a major, major problem that was IT-related, and we learned that the IT department had steadfastly ignored their pleas to solve it for years. So we said "Okay, we'll have a crack at it."

We had many meetings with key business stakeholders, which led to us identifying roughly what the problem was and creating a Balanced Scorecard of business goals that we'd work directly towards.

We shadowed end users who worked in the processes that we needed to improve to see what they did and think about how IT could make it easier. Then we iteratively and incrementally reworked existing IT systems specifically to achieve those improvements.

For several months, it worked like a dream. Our business customers were very happy with the progress we were making. They'd never had a relationship with an IT team like this before. It was a revelation to them and to us.

But IT management did not like it. Not one bit. We weren't following a plan. They wanted to bring us back to heel, to get project management in place to tell us what to do, and to get back to the original plan of REPLACING ALL THE THINGS.

But for 4 shiny happy months I experienced a different kind of software development. Like Malcolm McDowell in Star Trek: Generations, I experienced the bliss of the Nexus and would now do pretty much anything to get back there.

So, ever since, I've encouraged dev teams to take charge of their destinies in this way. To me, it's a higher level of requirements maturity. We progress from:

1. Executing a plan, to
2. Building a product, to
3. Solving real problems people bring to us, to
4. Going out there and pro-actively seeking problems we could solve

We evolve from being told "do this" to being told "build this" to being told "solve this" to eventually not being told at all. We go from being passive executors of plans and builders of features to being active, engaged stakeholders in the business, instigating the work we do in response to business needs and opportunities that we find or create.

For me, this is the partnership that so many dev teams aspire to, but can never reach because management won't let them. Just like, ultimately, they wouldn't let us in that particular situation.

But I remain convinced it's the next step in the evolution of software development: one up from Agile. It is inevitable*.




*...that we will pretend to do it for certifications while the project office continues to be the monkey on our backs

January 23, 2018

Learn TDD with Codemanship

Without Improving Code Craft, Your Agile Transformation Will Fail

"You must be really busy!" is what people tend to say when I tell them what I do.

It stands to reason. If software is "eating the world", then code craft skills must be highly in demand, and therefore training and coaching for developers in those skills must be selling like hotcakes.

Well, you'd think so, wouldn't you?

The reality, though, is that code craft is critically undervalued. The skills needed to deliver reliable, maintainable software at a sustainable pace - allowing businesses to maintain the pace of innovation - are not in high demand.

We can see this both in the quality of code being produced by the majority of teams, and in where organisations focus their attentions and where they choose to invest in developing skills and capabilities.

"Agile transformations" are common. Some huge organisations are attempting them on a grand scale, sending their people on high-priced training courses and drafting in hundreds of Agile coaches - mostly Scrum-certified - to assist, at great expense.

Only a small minority invest in code craft at the same time, and typically they invest a fraction of the time, effort and money they budget for Agile training and coaching.

The end result is software that's difficult to change, and an inability to respond to new and changing requirements. Which is kind of the whole point of Agile.

Let me spell it out in bold capital letters:

IF CODE CRAFT ISN'T A SIGNIFICANT PART OF YOUR AGILE TRANSFORMATION, YOU WILL NEVER ACHIEVE AGILITY.

You can't be responsive to change if your code is expensive to change. It's that simple.

While you build your capability in product management, agile planning and all that scrummy agile goodness, you also need to be addressing the factors that increase the cost of changing code. Skills like unit testing, TDD, refactoring, SOLID, CI/CD are a vital part of agility. They are hard skills to crack. A 3-day Certified Code Crafter course ain't gonna cut the mustard. Developers need ongoing learning and practice, with the guidance of experienced code crafters. I was lucky enough to get that early in my career. Many other developers are not so lucky.

That's why I built Codemanship: to help developers get to grips with the code-facing skills that few other training and coaching companies focus on.

But, I'll level with you: even though I love what I'm doing, commercially it's a struggle. The reason so few others offer this kind of training and coaching is because there's little money in it. Decision makers don't have code craft on their radars. There have been many occasions when I've thought "May as well just get Scrum-certified". I'm not going to go down without a fight, but what I really need (apart from Brexit being cancelled) is a shift in the priorities of businesses that are currently investing millions in Agile transformations while all but ignoring this crucial area.

Of course, those are my problems, and I made my choices. I'm very happy doing what I'm doing. But it's indicative of a wider problem that affects us all. Getting from A to B is about more than just map reading and route planning. You need a well-oiled engine to get you there, and to get you wherever you want to go next. Too many Agile transformations end up broken down by the side of the road, unable to go anywhere.


November 7, 2017

Learn TDD with Codemanship

Why Agile's Not For Me

There's a growing consensus among people who've been involved with Agile Software Development since the early (pre-Snowbird) days that something is rotten in the state of Agile.

Having slowly backed out of the Agile movement over the last decade or more (see my semi-jocular posts on Post-Agilism from 2007), I approach the movement as a fairly skeptical observer.

Talking with folk both inside and outside the Agile movement - and many with one foot in and one foot out - has highlighted for me where the wheels came off, so to speak. And it's a story that's by no means unique to Agile Software Development. Like all good ideas in software, it's never long before the money starts taking an interest and the pure ideas that it was founded on get corrupted.

1. Too Much Emphasis On Working Software

But, arguably, Agile Software Development was fundamentally flawed straight out of the gate (or straight out of the ski resort, more accurately). If I look for a foundation for Agile, it clearly has its roots in the concept of evolutionary software development. Evolution is a goal-seeking algorithm that searches for an optimum solution by iterating designs rapidly - the more rapidly the better - and feeding back in what we learn with each iteration to improve our solution.

There are two key words in that description: iterating and goal-seeking. There is no mention of goals in the original Agile Manifesto. The manifesto stipulates that the measure of progress is "working software". It does not address the question of why we should build that software in the first place.

And so, many Agile teams - back in the days when Extreme Programming was still a thing - focused on iterating software designs to solve poorly-defined - or not defined at all, let's face it - business problems. This is pretty much guaranteed to fail. But, bless our little cotton socks, because we set ourselves the goal of delivering "working software", we tended to walk away thinking we'd succeeded. Our customers... not so much.

This was the crack in Agile through which the project office snuck back in. (More about them later.)

2. Not Enough Emphasis On Working Software

As Agile evolved as a brand, more and more of us tried to paint ourselves in the colours of management consultants. Because, let's be frank, that's where the big bucks are. People who would once have been helping you to fix your build script were now suddenly self-professed McKinsey-style business gurus telling you how to "maximise the flow of value" in your enterprise, often to comic effect because nobody outside of the IT department took us seriously.

And then, one day - to everyone's horror - somebody outside the IT department did start taking us seriously, and suddenly it wasn't funny any more. Agile "crossed the chasm", and now people were talking about "going Agile" in the boardroom. Management and business magazines now routinely run articles about Agile, typically seeking input from people I've certainly never heard of who are now apparently world-leading experts. None of these people has heard of Kent Beck or Ward Cunningham or Brian Marick or any other signatory of the original Agile Manifesto. Agile today is very much in the hands of the McKinseys of this world. A classic "be careful what you wish for" moment for those from the IT department who aspired to be dining at the top table of consulting.

Agile's now Big Business. And the business of Agile is going BIG. Like every good and pure thing that falls into the hands of management consultants, Agile has mutated from a small, beautiful bird singing a twinkly tune to a bloated enterprise albatross with a foghorn.

3. We Didn't Nuke The Project Office From Orbit To Be Sure

I'm often found hanging around on street corners muttering to myself incoherently about the leadership class. Well, it's good to have a hobby.

Across the world - and especially in the UK - we have a class of people who have no actual practical skills or specific expertise to speak of, but a compelling sense of entitlement that they should be in charge, often of things they barely understand.

In the pre-Agile Manifesto world, IT was ruled by the leadership class. There was huge emphasis on processes, driven by the creation of documents, for the benefit of people who were neither using the software nor writing it. This was a non-programmer's idea of what programming should be. In the late 1990s, the project office was the Alpha and the Omega of software and systems development. People who'd never written a line of code in their lives were telling people who do it day-in and day-out how it should be done.

Because, if they let programmers make the decisions, they'll do it wrong!!! And, to be fair, we often did do it wrong. We built the wrong thing, and we built it wrong. It was our fault. We let the project office in by frequently disappointing our customers. But their solution just meant that we still did it wrong, only now we did it wrong on a much grander scale.

And just as we developers kidded ourselves that, because we delivered working software, that meant we had succeeded, managers deluded themselves that - because the team followed the prescribed processes - the customer's needs had been met.

Well, nope. We ticked the boxes while the customer got ticked off.

It turns out that the working relationship between software developers and their customers is, and always has been, the crux of the problem. Teams that work closely and communicate effectively with customers tend to build the right thing, at least. There's no process, standard or boxes-and-arrows diagram that can fix a dysfunctional developer-customer relationship. CMMI all you like. It doesn't help in the end. And, as someone who specialised in software process engineering and wore the robes and pointy hat of a Chief Architect, I would know.

The Agile Manifesto was a reaction to the Big Process top-heavy approach that had failed us so badly in the previous decades. Self-organising teams should work directly with customers and do the simplest things to deliver value. Why write a big requirements specification when we can have a face-to-face conversation with the customer? Why create a 200-page architecture document when developers can just gather round a whiteboard when they need to talk about design?

XP in particular seemed to be a welcome death knell for value-sucking Plan-Driven, Big Architecture, Big Process roles. It was the end for those projects like the one where I was the only developer but for some reason reported to three project managers, spending a full day every week travelling the country helping them to revise their constantly out-of-date Gantt charts.

And, for a while, it was working. The early noughties was a Golden Age for me of working on small teams, communicating directly with customers, making the technical decisions that needed to be made, and doing it our way.

But the project office wasn't going to just slink away and die in a corner. People with power rarely relinquish it voluntarily. And they have the power to make sure they don't need to.

Just as before, we let them back in by disappointing our customers. A lack of focus on end business goals - real customer needs - and too much focus initially on the mechanics of delivering working software created the opportunity for people who don't write code to proclaim "Look, the people writing the code are doing Agile wrong!"

And, again, their solution is more processes, more management, more control. And, hey presto, our 6-person XP projects transformed into beautiful multi-team Enterprise Agile butterflies. Money. That's what I want.

Back To Basics

Agile today is completely dominated by management. It's no longer about software development, or about helping customers achieve real goals. It's just as top-heavy, process-oriented and box-ticky as it ever was in the 1990s. And it's therefore not for me.

Working closely with customers to solve real problems by rapidly iterating working software on small self-organising teams very much is, still. But I fear the word for that has had its meaning so deeply corrupted that I need to start calling it something else.

How about "software development"?





September 3, 2017

Learn TDD with Codemanship

Iterating is THE Requirements Discipline

OK. Let's get serious about software requirements, shall we?

The part where we talk to the customer and write specifications and agree acceptance tests and so forth? That's the least important part of figuring out what software we need to build.

You heard me right. Requirements specification is the least important part of requirements analysis.

THE. LEAST. IMPORTANT. PART.

It's 2017, so I'm hoping you've heard of this thing they have nowadays (and since the 1970s) called iterative design. You have? Excellent.

Iterating is the most important part of requirements analysis.

When we iterate our designs faster, testing our theories about what will work in shorter feedback loops, we converge on a working solution sooner.

We learn our way to Building The Right Thing™.

Here's the thing with iterative problem solving processes: the number of iterations matters more than the accuracy of the initial input.

We could agonise over taking our best first guess at the square root of a number, or we could just start with half the input number and let the feedback loop do the rest.
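To put rough numbers on that, here's a minimal Python sketch - an illustration using Heron's method, not anyone's production code - that counts the feedback cycles needed from different starting guesses:

    def iterations_to_converge(n, guess):
        # Count the feedback cycles needed to find the square root of n.
        count = 0
        while abs(guess * guess - n) > 0.0000001 * n:
            guess = (guess + n / guess) / 2
            count += 1
        return count

    print(iterations_to_converge(100, 9.5))       # near-perfect first guess
    print(iterations_to_converge(100, 50.0))      # rough guess: half the input
    print(iterations_to_converge(100, 50000000))  # wildly wrong guess

The wildly wrong guess costs a couple of dozen extra cycles, not orders of magnitude more. The feedback loop does the heavy lifting.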

I don't know if you've been paying attention, but that's the whole bedrock of Agile Software Development. All the meetings and documents and standards in the world - the accoutrements of Big Process - don't mean a hill of beans if you're only allowing yourself feedback from real end users using real working software every, say, 2 years.

So ask your requirements analyst or product owner this question: "What's your plan for testing these theories?"

I'll wager a shiny penny they haven't got one.



December 8, 2016

Learn TDD with Codemanship

What Do I Think of "Scaled Agile"?

People are increasingly asking me for my thoughts on "scaled agile", so I thought I'd take a quiet moment to collect my thoughts in one place.

Ever since that fateful meeting in Snowbird, Utah in 2001, some commercially-minded folk have sought to "scale up" the Agile brand so it can be applied to large organisations.

I'll give you an example of the kind of organisation we're talking about: a couple of years ago I was invited into a business that had about 150 teams of developers all effectively working on the same system (or different versions of the same system). I was asked to put together a report and some recommendations on how TDD could be adopted across the organisation.

The business in question was peppered throughout with Agile consultants, Scrum Masters, Lean experts, Kanban experts, and all manner of Agile flora and fauna.

Teams all used user stories, all had Scrum or Kanban boards, all did daily stand-ups, and all the paraphernalia we associate with Agile Software Development.

But if there was one thing they most definitely were not, it was agile. Change was slow and expensive. There was absolutely no sense of overall direction or control, or of an overall picture of progress. And the layers of "scaled Agile" the managers had piled on top of all that mess were just making things worse.

It's just a fact of life. Software development doesn't scale. Once software projects go above a certain size (~$1 million), chaos is inevitable, and the best you can hope for is an illusion of control.

And that, in my considerable experience of organisations of all sizes attempting to apply agile principles and practices, is all that the "scaled agile" methods can offer.

I've seen with my own eyes some quite well-known case studies of organisations that claim to be doing agile at scale, and they just aren't. 150 teams doing scaled agile, it turns out, is just 150 teams doing their own thing, while on a surface level making it look like they're all following a common process. But they're still dogged by all the same problems that any organisation trying to do software development at scale is dogged by. You can't fix nature.

Instead, you have to acknowledge the true nature of development at scale; that these are highly complex systems, not conducive to overall top-down control, from which outcomes organically emerge, planned or not.

Insect colonies do not follow top-down processes. A beehive isn't command and control, even if it might look to the casual observer that there's a co-ordinated plan they're all following. What's actually happening is that individual bees are responding to broadcast messages ("goals") about where the pollen, or the threat, can be found, and then they respond according to a set of simple internalised rules to that message, co-operating at a local level so as not to bump into each other.

In software development teams, the internalised rules - often unspoken and frequently at odds with the spoken or written rules - and the interactions at a local level determine the outcomes the system will produce. We call these internalised rules "culture". Culture is often simple, but buried so deep that changing the culture can take a long time.

In particular, culture around the way we communicate and collaborate tends to steer the ship in particular directions, regardless of which direction you point the rudder. This is a property of complex adaptive systems called "strange attractors".

Complex systems have a property called "homeostasis" - a tendency, when disturbed, to iteratively revert to their original dynamic state, as determined by their strange attractors. Hence, a heartbeat can rise to more than 150 bpm, but will eventually return to a resting heart rate of about 70-80 bpm.

We can apply external stimuli to a system to try and change the way it performs, but the intrinsic properties of the agents within that system, and particularly their interactions, will ultimately determine the outcome.

Methods like SAFe, LeSS and DAD are attempts to exert top-down control on highly complex adaptive organisations. As such, in my opinion and in the examples I've witnessed, they - at best - create the illusion of control. And illusions of control aren't to be sniffed at. They've been keeping the management consulting industry in clover for decades.

The promise of scaled agile lies in telling managers what they want to hear: you can have greater control. You can have greater predictability. You can achieve economies of scale. Acknowledging the real risks puts you at a disadvantage when you're bidding for business.

But if you really want to make a practical difference in a large software development organisation, the best results I've seen have come from focusing on the culture: what do people really value? What do people really believe? What are people's real habits? What do they really do under pressure?

You build big, complex products out of small, simple parts. The key is not in trying to exert control over the internal workings of each part, but to focus on how the parts - and the small, simple teams who make them - interact. Each part does a job. Each part will depend on some of the other parts. An overall architecture can emerge by instilling a set of good, practical organising principles across the teams - a design culture, like we have in the architecture of buildings, for example. The teams negotiate with each other to resolve potential conflicts, like motorists on our complex road systems trying to get where they need to go without bumping into each other.

Another word for this is "anarchy". I advise you not to use it in client meetings. But that is what it is.

I think it's very telling that so many of the original signatories of the Agile Manifesto have voiced scepticism - indeed, in some cases been very scathing - of "scaled agile". The way I see it, it's the precise opposite of what they were trying to tell us at Snowbird.

This is why, as a professional, I've invested so much time in training and coaching developers and teams, rather than in management consulting. I certainly engage with bosses, but when they ask about "scaled agile" I tell them what I personally think, which is that it's a mirage.