November 7, 2017

Learn TDD with Codemanship

Why Agile's Not For Me

There's a growing consensus among people who've been involved with Agile Software Development since the early (pre-Snowbird) days that something is rotten in the state of Agile.

Having slowly backed out of the Agile movement over the last decade or more (see my semi-jocular posts on Post-Agilism from 2007), I approach it these days as a fairly skeptical observer.

Talking with folk both inside and outside the Agile movement - and many with one foot in and one foot out - has highlighted for me where the wheels came off, so to speak. And it's a story that's by no means unique to Agile Software Development. Like all good ideas in software, it's never long before the money starts taking an interest and the pure ideas that it was founded on get corrupted.

1. Too Much Emphasis On Working Software

But, arguably, Agile Software Development was fundamentally flawed straight out of the gate (or straight out of the ski resort, more accurately). If I look for a foundation for Agile, it clearly has its roots in the concept of evolutionary software development. Evolution is a goal-seeking algorithm that searches for an optimum solution by iterating designs rapidly - the more rapidly the better - and feeding back in what we learn with each iteration to improve our solution.

There are two key words in that description: iterating and goal-seeking. There is no mention of goals in the original Agile Manifesto. The manifesto stipulates that the measure of progress is "working software". It does not address the question of why we should build that software in the first place.

And so, many Agile teams - back in the days when Extreme Programming was still a thing - focused on iterating software designs to solve poorly-defined - or not defined at all, let's face it - business problems. This is pretty much guaranteed to fail. But, bless our little cotton socks, because we set ourselves the goal of delivering "working software", we tended to walk away thinking we'd succeeded. Our customers... not so much.

This was the crack in Agile through which the project office snuck back in. (More about them later.)

2. Not Enough Emphasis On Working Software

As Agile evolved as a brand, more and more of us tried to paint ourselves in the colours of management consultants. Because, let's be frank, that's where the big bucks are. People who would once have been helping you to fix your build script were now suddenly self-professed McKinsey-style business gurus telling you how to "maximise the flow of value" in your enterprise, often to comic effect because nobody outside of the IT department took us seriously.

And then, one day - to everyone's horror - somebody outside the IT department did start taking us seriously, and suddenly it wasn't funny any more. Agile "crossed the chasm", and now people were talking about "going Agile" in the boardroom. Management and business magazines now routinely run articles about Agile, typically seeking input from people I've certainly never heard of who are now apparently world-leading experts. None of these people has heard of Kent Beck or Ward Cunningham or Brian Marick or any other signatory of the original Agile Manifesto. Agile today is very much in the hands of the McKinseys of this world. A classic "be careful what you wish for" moment for those from the IT department who aspired to be dining at the top table of consulting.

Agile's now Big Business. And the business of Agile is going BIG. Like every good and pure thing that falls into the hands of management consultants, Agile has mutated from a small, beautiful bird singing a twinkly tune to a bloated enterprise albatross with a foghorn.

3. We Didn't Nuke The Project Office From Orbit To Be Sure

I'm often found hanging around on street corners muttering to myself incoherently about the leadership class. Well, it's good to have a hobby.

Across the world - and especially in the UK - we have a class of people who have no actual practical skills or specific expertise to speak of, but a compelling sense of entitlement that they should be in charge, often of things they barely understand.

In the pre-Agile Manifesto world, IT was ruled by the leadership class. There was huge emphasis on processes, driven by the creation of documents, for the benefit of people who were neither using the software nor writing it. This was a non-programmer's idea of what programming should be. In the late 1990s, the project office was the Alpha and the Omega of software and systems development. People who'd never written a line of code in their lives were telling people who do it day in, day out how it should be done.

Because, if they let programmers make the decisions, they'll do it wrong!!! And, to be fair, we often did do it wrong. We built the wrong thing, and we built it wrong. It was our fault. We let the project office in by frequently disappointing our customers. But their solution just meant that we still did it wrong, only now we did it wrong on a much grander scale.

And just as we developers kidded ourselves that, because we delivered working software, that meant we had succeeded, managers deluded themselves that - because the team followed the prescribed processes - the customer's needs had been met.

Well, nope. We ticked the boxes while the customer got ticked off.

It turns out that the working relationship between software developers and their customers is, and always has been, the crux of the problem. Teams that work closely and communicate effectively with customers tend to build the right thing, at least. There's no process, standard or boxes-and-arrows diagram that can fix a dysfunctional developer-customer relationship. CMMI all you like. It doesn't help in the end. And, as someone who specialised in software process engineering and wore the robes and pointy hat of a Chief Architect, I would know.

The Agile Manifesto was a reaction to the Big Process top-heavy approach that had failed us so badly in the previous decades. Self-organising teams should work directly with customers and do the simplest things to deliver value. Why write a big requirements specification when we can have a face-to-face conversation with the customer? Why create a 200-page architecture document when developers can just gather round a whiteboard when they need to talk about design?

XP in particular seemed to be a welcome death knell for value-sucking Plan-Driven, Big Architecture, Big Process roles. It was the end of projects like the one where I was the only developer but, for some reason, reported to three project managers, spending a full day every week travelling the country to help them revise their constantly out-of-date Gantt charts.

And, for a while, it was working. The early noughties were a Golden Age for me: working on small teams, communicating directly with customers, making the technical decisions that needed to be made, and doing it our way.

But the project office wasn't going to just slink away and die in a corner. People with power rarely relinquish it voluntarily. And they have the power to make sure they don't need to.

Just as before, we let them back in by disappointing our customers. A lack of focus on end business goals - real customer needs - and too much focus initially on the mechanics of delivering working software created the opportunity for people who don't write code to proclaim "Look, the people writing the code are doing Agile wrong!"

And, again, their solution is more processes, more management, more control. And, hey presto, our 6-person XP projects transformed into beautiful multi-team Enterprise Agile butterflies. Money. That's what I want.

Back To Basics

Agile today is completely dominated by management. It's no longer about software development, or about helping customers achieve real goals. It's just as top-heavy, process-oriented and box-ticky as it ever was in the 1990s. And it's therefore not for me.

Working closely with customers to solve real problems by rapidly iterating working software on small self-organising teams very much is, still. But I fear the word for that has had its meaning so deeply corrupted that I need to start calling it something else.

How about "software development"?





September 3, 2017

Learn TDD with Codemanship

Iterating is THE Requirements Discipline

OK. Let's get serious about software requirements, shall we?

The part where we talk to the customer and write specifications and agree acceptance tests and so forth? That's the least important part of figuring out what software we need to build.

You heard me right. Requirements specification is the least important part of requirements analysis.

THE. LEAST. IMPORTANT. PART.

It's 2017, so I'm hoping you've heard of this thing they have nowadays (and since the 1970s) called iterative design. You have? Excellent.

Iterating is the most important part of requirements analysis.

When we iterate our designs faster, testing our theories about what will work in shorter feedback loops, we converge on a working solution sooner.

We learn our way to Building The Right Thing™.

Here's the thing with iterative problem solving processes: the number of iterations matters more than the accuracy of the initial input.

We could agonise over taking our best first guess at the square root of a number, or we could just start with half the input number and let the feedback loop do the rest.
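
To make that concrete, here's a minimal sketch in Java of exactly that square root example - Heron's method, if you want the name - where the initial guess is deliberately crude and the feedback loop does the rest:

    public class IterativeSqrt {

        // Heron's method: start with a crude guess (half the input) and let the
        // feedback loop refine it by averaging the guess with number / guess.
        // Works for positive inputs; the point is that the quality of the first
        // guess barely matters - the iterations do the work.
        public static double sqrt(double number) {
            double guess = number / 2;
            while (Math.abs(guess * guess - number) > 0.000001) {
                guess = (guess + number / guess) / 2;
            }
            return guess;
        }

        public static void main(String[] args) {
            System.out.println(sqrt(2.0)); // prints roughly 1.4142135...
        }
    }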

I don't know if you've been paying attention, but that's the whole bedrock of Agile Software Development. All the meetings and documents and standards in the world - the accoutrements of Big Process - don't mean a hill of beans if you're only allowing yourself feedback from real end users using real working software every, say, 2 years.

So ask your requirements analyst or product owner this question: "What's your plan for testing these theories?"

I'll wager a shiny penny they haven't got one.



December 8, 2016

Learn TDD with Codemanship

What Do I Think of "Scaled Agile"?

People are increasingly asking me for my thoughts on "scaled agile", so I thought I'd take a quiet moment to collect my thoughts in one place.

Ever since that fateful meeting in Snowbird, Utah in 2001, some commercially-minded folk have sought to "scale up" the Agile brand so it can be applied to large organisations.

I'll give you an example of the kind of organisation we're talking about: a couple of years ago I was invited into a business that had about 150 teams of developers all effectively working on the same system (or different versions of the same system). I was asked to put together a report and some recommendations on how TDD could be adopted across the organisation.

The business in question was peppered throughout with Agile consultants, Scrum Masters, Lean experts, Kanban experts, and all manner of Agile flora and fauna.

Teams all used user stories, all had Scrum or Kanban boards, all did daily stand-ups, and had all the other paraphernalia we associate with Agile Software Development.

But if there was one thing they most definitely were not, it was agile. Change was slow and expensive. There was absolutely no sense of overall direction or control, or of an overall picture of progress. And the layers of "scaled Agile" the managers had piled on top of all that mess were just making things worse.

It's just a fact of life. Software development doesn't scale. Once software projects go above a certain size (~$1 million), chaos is inevitable, and the best you can hope for is an illusion of control.

And that, in my considerably wide experience of organisations of all sizes attempting to apply agile principles and practices, is all that the "scaled agile" methods can offer.

I've seen with my own eyes some quite well-known case studies of organisations that claim to be doing agile at scale, and they just aren't. 150 teams doing scaled agile, it turns out, is just 150 teams doing their own thing, while on a surface level making it look like they're all following a common process. But they're still dogged by all the same problems that any organisation trying to do software development at scale is dogged by. You can't fix nature.

Instead, you have to acknowledge the true nature of development at scale; that these are highly complex systems, not conducive to overall top-down control, from which outcomes organically emerge, planned or not.

Insect colonies do not follow top-down processes. A beehive isn't command and control, even if it might look to the casual observer that there's a co-ordinated plan they're all following. What's actually happening is that individual bees are responding to broadcast messages ("goals") about where the pollen, or the threat, can be found, and then they respond to that message according to a set of simple internalised rules, co-operating at a local level so as not to bump into each other.

In software development teams, the internalised rules - often unspoken and frequently at odds with the spoken or written rules - and the interactions at a local level determine the outcomes the system will produce. We call these internalised rules "culture". Culture is often simple, but buried so deep that changing it can take a long time.

In particular, the culture around the way we communicate and collaborate tends to steer the ship in particular directions, regardless of which direction you point the rudder. In complex adaptive systems, the patterns of behaviour a system keeps getting drawn back to are called "strange attractors".

Complex systems have a property called "homeostasis" - a tendency, when disturbed, to iteratively revert back to their original dynamic state, as determined by their strange attractors. Hence, a heart rate can rise to more than 150 bpm, but will eventually return to a resting rate of about 70-80 bpm.

We can apply external stimuli to a system to try and change the way it performs, but the intrinsic properties of the agents within that system, and particularly their interactions, will ultimately determine the outcome.

Methods like SAFe, LeSS and DAD are attempts to exert top-down control on highly complex adaptive organisations. As such, in my opinion and in the examples I've witnessed, they - at best - create the illusion of control. And illusions of control aren't to be sniffed at. They've been keeping the management consulting industry in clover for decades.

The promise of scaled agile lies in telling managers what they want to hear: you can have greater control. You can have greater predictability. You can achieve economies of scale. Acknowledging the real risks puts you at a disadvantage when you're bidding for business.

But if you really want to make a practical difference in a large software development organisation, the best results I've seen have come from focusing on the culture: what do people really value? What do people really believe? What are people's real habits? What do they really do under pressure?

You build big, complex products out of small, simple parts. The key is not in trying to exert control over the internal workings of each part, but to focus on how the parts - and the small, simple teams who make them - interact. Each part does a job. Each part will depend on some of the other parts. An overall architecture can emerge by instilling a set of good, practical organising principles across the teams - a design culture, like we have in the architecture of buildings, for example. The teams negotiate with each other to resolve potential conflicts, like motorists on our complex road systems trying to get where they need to go without bumping into each other.

Another word for this is "anarchy". I advise you not to use it in client meetings. But that is what it is.

I think it's very telling that so many of the original signatories of the Agile Manifesto have voiced scepticism - indeed, in some cases been very scathing - of "scaled agile". The way I see it, it's the precise opposite of what they were trying to tell us at Snowbird.

This is why, as a professional, I've invested so much time in training and coaching developers and teams, rather than in management consulting. I certainly engage with bosses, but when they ask about "scaled agile" I tell them what I personally think, which is that it's a mirage.





November 8, 2016

Learn TDD with Codemanship

Business Benefits of Continuous Delivery: We Need Hard Data

Something that's been bugging me for a while is our apparent lack of attention to the proclaimed business benefits of Continuous Delivery.

I'm not going to argue for one second that CD doesn't have business benefits; I'm a firm believer in the practice myself. But that's just it... I'm a believer in the business benefits of Continuous Delivery. And it's a belief based on personal and anecdotal experience, not on a good, solid body of hard evidence.

I had naturally assumed that such evidence existed, given that the primary motivation for CD, mentioned over and over again in the literature, is reduced lead times for delivering feature and change requests. It is, after all, the main point of CD.

But where is the data that supports reduced lead times? I've looked, but not found it. I've found surveys about adopting CD. I've found proposed metrics, but no data. I've found largely qualitative studies of one or two organisations. But no smoking gun, as yet.

There's a mountain of data that backs up the benefits of defect prevention, but the case for CD currently rests on little more than smoke.

This, I reckon, we need to fix. It's a pillar on which so much of software craftsmanship and Agile rests: delivering working software sooner (and for longer).

Anything that supports the case for Continuous Delivery indirectly supports the case for Continuous Integration, TDD, refactoring, automation, and a bunch of other stuff we believe is good for business. And as such, I think we need that pillar to be unassailably strong.

We need good data - not from surveys and opinion polls - on lead times that we can chart against CD practices so we can build a picture of what real, customer-visible impact these practices have.
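
For illustration only - the record fields here are hypothetical, not drawn from any real data set - lead time in this sense is just the elapsed time between a change being requested (or committed) and that change reaching end users, which makes it an eminently measurable thing:

    import java.time.Duration;
    import java.time.Instant;
    import java.util.List;

    public class LeadTimes {

        // One delivered change: when it was asked for, and when it reached users.
        record Change(Instant requested, Instant inProduction) {}

        // Median lead time, in hours, across a set of delivered changes.
        static double medianLeadTimeHours(List<Change> changes) {
            double[] hours = changes.stream()
                    .mapToDouble(c -> Duration.between(c.requested(), c.inProduction()).toHours())
                    .sorted()
                    .toArray();
            int n = hours.length;
            return n % 2 == 1 ? hours[n / 2] : (hours[n / 2 - 1] + hours[n / 2]) / 2.0;
        }
    }

Chart that number against where a team sits on the CD spectrum - frequency of integration, level of automation, and so on - and over calendar time as the codebase grows, and you'd have the beginnings of the picture I'm describing.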

To be genuinely useful and compelling, it would need to come from hundreds of places and cover the full spectrum of Continuous Delivery from infrequent manual builds with infrequent testing and no automation, to completely automated Continuous Deployment several times a day with high confidence.

One thing that would be of particular interest to Agile mindsets is how lead times change over time. As the software grows, do lead times get longer? What difference does, say, automated developer testing make to the shape of the curve?

Going beyond that, can we understand what impact shorter lead times can have on a business? Shorter lead times, in and of themselves, have no value. The value is in what they enable a business to do - specifically, to learn faster. But what, in real terms, are the business benefits of learning faster? How would we detect them? Are businesses that do CD outperforming competitors who don't in some way? Are they better at achieving their goals?

Much to ponder on.





October 11, 2016

Learn TDD with Codemanship

Code Quality is a Requirements Issue

The most fundamental aspect of Agile Software Development is responding to change. Whatever software we deliver today, the real value is in what we can learn from that, so we can deliver something even better tomorrow.

It's nature's search algorithm: evolutionary design. So it should come as no surprise that the cost of changing software is pivotal to our ability to succeed. The more change costs, the less change we can accommodate. The less change we can accommodate, the slower we learn. Simples.

In this respect, the cost of changing software can be thought of as a requirements discipline - every bit as much as user stories and Specification By Example. Indeed, if we're being genuinely Agile, iterating is the requirements discipline, and everything else is about tweaking our seed values to make the search a little more efficient.

As so many have commented in recent years, failing on code quality means failing on agility. You can Lean ScrumBan all you like. But there's more to Agile than turning up to planning meetings.



September 30, 2016

Learn TDD with Codemanship

Software Development Doesn't Scale. Dev Culture Does

For a couple of decades now, the Standish Group have published an annual "CHAOS" report, detailing the results of surveys of IT managers about the outcomes of IT projects.

One clear trend that emerged - and remains as true today as in 1995 - is that the bigger they are, the harder they fall. The risk of an IT project failing outright rises rapidly with project size and cost. When they reach a certain size - and it's much smaller than you may think - failure is almost guaranteed.

The reality of software development is that, once we get above a dozen or so people working for a year or two on the same product or system, the prognosis does not look good at all.

This is chiefly because - and how many times do we need to say this, folks? - software development does not scale.

If that's true, though, how do big software products come into existence?

The answer lies in city planning. A city is made up of hundreds of thousands of buildings, on thousands of streets, with miles of sewers and underground railways and electrical cabling and lawns and trees and shops and traffic lights and so on.

How do such massively complex structures happen? Is a city planned and constructed by a single massive team of architects and builders as a single project with a single set of goals?

No, obviously not. Rome was not built in a day. By the same guys. Reporting to one boss. With a single plan.

Cities appear over many, many decades. The suburbs of London were once, not all that long ago, villages outside London. An organic process of development, undertaken by hundreds of thousands of people and organisations all working towards their own unique goals, and co-operating or compromising when goals aligned or conflicted, produced the sprawling metropolis that is now London.

Trillions of pounds have been spent creating the London of today. Most of that investment is nowhere to be seen any more, having been knocked down (or bombed) and built over many times. You could probably create a "London" for a fraction of the cost in a fraction of the time, if it were possible to coordinate such a feat.

And that's my point: it simply isn't possible to coordinate such a feat, not on that scale. An office complex? Sure. A housing estate? Why not? A new rail line with new train stations running across North London? With a few tens of billions and a few decades, it's do-able.

But those big projects exist right at the edge of what is manageable. They invariably go way over budget, and are completed late. If they were much bigger, they'd fail altogether.

Cities are a product of many lifetimes, working towards many goals, with no single clear end goal, and with massive inefficiency.

And yet, somehow, London mostly looks like London. Toronto mostly looks like Toronto. European cities mostly look like European cities. Russian cities mostly look like Russian cities. It all just sort of, kind of, works. A weird conceptual cohesion emerges from the near-chaos.

This is the product of culture. Yes, London has hundreds of thousands of buildings, designed by thousands of people. But those people didn't work in bubbles, completely oblivious to each other's work. They could look at other buildings. Read about their design and their designers. Learn a thousand and one lessons about what worked and what didn't without having to repeat the mistakes that earned that knowledge.

And knowledge is weightless. It travels fast and travels cheaply. Hence, St Petersburg looks like the palaces of Versailles, and that area above Leicester Square looks like 19th century Hong Kong.

Tens of thousands of architects and builders, guided by organising principles plucked from the experience of others who came before.

Likewise, with big software products. Many teams, with many goals, building on top of each other, cooperating when it makes sense, compromising when there are conflicts. But, essentially, each team is doing their own thing for their own reasons. Any attempt to standardise, or impose order from above, fails. Every. Single. Time.

Better to focus on scaling up developer culture, which - those of us who participate in the global dev community can attest - scales beautifully. We have no common goal, no shared boss; but, somehow, I find myself working with the same tools, applying the same practices and principles, as thousands of developers around the world, most of whom I've never met.

Instead of having an overriding architecture for your large system, try to spread shared organising principles, like Simple Design and S.O.L.I.D. It's not a coincidence that hundreds of thousands of developers use dependency injection to make external dependencies swappable. We visit the same websites, watch the same screencasts, read the same books. On a 10,000-person programme, your architect isn't the one who sits in the Big Chair at head office drawing UML diagrams. Your architect is Uncle Bob. Or Michael Feathers. Or Rebecca Wirfs-Brock. Or Barbara Liskov. Or Steve Freeman. Or even me (a shocking thought!)

But it's true. I probably have more influence over the design of some systems than the people getting paid to design it. And all I did was blog, or record a screencast, or speak at a conference. Culture - in this web age - spreads fast, and scales rapidly. You, too, can use these tools to build bridges between teams, share ideas, and exert tacit influence. You just have to let go of having explicit top-down control.
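
To ground the dependency injection point above, here's a minimal sketch - the names are invented for illustration, not taken from any particular codebase. The logic depends on an abstraction, and the concrete dependency is handed in from outside, so it can be swapped for a stub in tests or a different implementation in production without touching the logic:

    // The domain logic depends on an abstraction...
    interface ExchangeRates {
        double rate(String from, String to);
    }

    class Payroll {
        private final ExchangeRates rates;

        // ...and the concrete implementation is injected from outside.
        Payroll(ExchangeRates rates) {
            this.rates = rates;
        }

        double payInLocalCurrency(double amountInUsd, String localCurrency) {
            return amountInUsd * rates.rate("USD", localCurrency);
        }
    }

    // In production we might inject a client for a live rates service;
    // in a unit test, a stub with canned answers will do.
    class FixedRates implements ExchangeRates {
        public double rate(String from, String to) {
            return 0.8;
        }
    }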

And that's how you scale software development.




September 6, 2016

Learn TDD with Codemanship

Empowered Teams Can Make Decisions The Boss Disagrees With

Coming into contact, as I do, with many software development teams across a wide range of industries, you begin to recognise patterns.

One such pattern that - once I noticed it - I realised is very prevalent in Agile Software Development is what I call the Empowered Straightjacket. Teams are "empowered" to make their own decisions, but when the boss doesn't like a decision they've made, he or she overrules it.

Those who remember their set theory will know that if the set of all possible decisions a team is allowed to make can only include decisions the boss agrees with, then the team is effectively working within the boss's set of decisions (or a subset of it).

That is not empowerment. Just in case you were wondering.

To have truly empowered development teams, bosses have to recognise that just being the boss doesn't necessarily make them right, and disagreeing with a decision doesn't necessarily make it a bad decision.

Unfortunately, the notion that decisions are made from above by more enlightened individuals has an iron grip on corporate culture.

Moving away from that means that managers have to reshape their role from someone who makes decisions to someone who facilitates the decision-making process (and then accepts the outcome graciously and with energy and enthusiasm.)

Once we recognise that there are other - more democratic and more objective - ways of making decisions, and that the decisions are just as likely to be right (if not more likely) than our own, then we have a golden opportunity to scale decision-making far beyond what traditional command-and-control hierarchies are capable of.

To scale Agile, you must learn to let go and let teams be the masters of their own destinies. You have no superhuman ability to make better decisions than a team full of highly-educated professionals.

The flipside of this is that developers and teams have to allow themselves to be empowered. With great empowerment comes great responsibility. And developers who've been cushioned from the responsibility of making decisions for a long time can run a mile when it comes a'knocking. Like prisoners who can't cope on the outside after a long stretch of regimented days doing exactly what they're told exactly when they're told, devs who are used to management "taking care of all that" can panic when someone hands them a company credit card and says "if you need anything, use this". It reminds me of how my grandmother's hands used to shake when she had to write a cheque. Granddad took care of all that sort of thing.

This can lead to developers lacking confidence, which leads to them being afraid to take initiative. They may have learned their craft in environments where failure is not tolerated, and learned a survival strategy of not being responsible for anything that matters.

In this situation, developers rise up through the ranks - usually by length of service - and perpetuate the cycle by micromanaging their teams.

Based on my own subjective experiences leading dev teams (and being led): ultimately, developers empower themselves. The maxim "it's easier to ask for forgiveness than permission" applies here.

Now, t'was ever thus. But the rise of Agile Software Development has forced many managers to at least pretend to empower their teams. (And, let's face it, the majority are just pretending. Scrum Masters are not project managers, BTW.)

That's your cue to seize the day. They may not like it. But they'll have to pretend they do.





August 30, 2016

Learn TDD with Codemanship

TDD 2.0 - Training Bookings & Book Preview

Just time to mention some Codemanship news.

I'm now taking advance client bookings for the new & improved TDD 2.0 training workshop.

Incorporating lessons from 7 years delivering the original workshop for over 2,000 developers, TDD 2.0 is more practical, in-depth and hands-on than ever.

There's more on refactoring, more on design principles, and... well, just more!

I've ditched the PowerPoint slides, beefed up the demonstrations and turbo-charged the exercises. The workshop's available in a 1, 2 or 3-day version to suit budgets and time constraints.



Attendees also get an exclusive 200-page book that goes into even greater depth, with a stack more exercises you can use to hone your TDD craft after the workshop. Ongoing practice is all-important.

You can find out more about the workshop, and grab a free preview of the first 7 chapters of the TDD book, by visiting the website.

I'm taking bookings now for delivery from Oct 10th and beyond.






August 9, 2016

Learn TDD with Codemanship

TDD 2.0 - London Saturday Oct 8th

Just a quick plug for the launch of the new and improved Codemanship Test-Driven Development workshop in London on Saturday October 8th.



Incorporating all the lessons learned from years of training and coaching developers in TDD, plus nearly 2 decades of real-world TDD experience, the improved workshop comes with an exclusive new 200-page book that goes into even greater depth and takes you to places that a 1 or 2-day workshop simply doesn't have time for.

The average price of a 2-day TDD training course in the UK is £1,300, and they're typically delivered by less experienced contractors rather than by the person who actually designed the course.

We're offering this intensive 1-day workshop for a mindbogglingly affordable £99, for which you get a packed day of learning, direct from the person who created the workshop, with a book worth £30 included in the price. The only thing a £1,300 course offers that we don't is catering. That's a very expensive lunch!

And we're running it on a Saturday, so you don't even have to get permission from the boss. I'd rather be surrounded by keen developers than be rich any day!

October 8th is the launch of the improved workshop, and the book, so join us and be the first to get your hands on it.

To find out more, and book your place, visit our Eventbrite page.





July 23, 2016

Learn TDD with Codemanship

On The Compromises of Acceptance Test-Driven Development

I'm currently writing a book on Test-Driven Development to accompany the redesigned training workshop. Having thought very hard about TDD for many years, I found the first 140 pages very easy to get out.

But things have - predictably - slowed down now that I'm on the chapter on end-to-end TDD and driving internal designs from customer tests.

The issue is that the ways we currently tackle this are all compromises, and there are many gods that need appeasing, just as there are many ways that folk do it.

Some developers will write, say, a failing FitNesse test and come up with an implementation to pass that test. Some will write a failing automated customer test and then drive an internal design using unit tests and "classic TDD". Some will write a failing automated customer test that makes all the assertions about desired outcomes (e.g., "the donated DVD should be in the library"), and rely entirely on interaction tests to drive out the internal design using mock objects. Some will use test doubles only for external dependencies, ensuring their automated customer test runs faster. Some will include external dependencies and use their automated customer test to do integration testing as well. Some will drive the UI with their automated customer tests, effectively making them complete end-to-end system tests. Some will drive the application through controllers or services, excluding the UI as well as external back-end dependencies, so they can concentrate on the internal design.

And, of course, some won't automate their customer tests at all, relying entirely on their own developer tests for design and regression testing, and favouring manual by-eye confirmation of delivery by the customer herself.

And many will use a combination of some or all of these approaches, as required.

In my own approach, I observe that:

a. You cannot automate customer acceptance. The most important part of ATDD is agreeing the test examples and getting the customer's test data. Making those tests executable through automation helps to eliminate ambiguity, but really we're only doing it because we know we'll be running those tests many times, and automating will save us time and money. We still have to let the dog see the rabbit to get confirmation of acceptance. The customer has to step through the tests with working software and see it for themselves at least once.

b. Non-executable customer tests can be ambiguous, and manually reconciling customer-provided data with unit test parameters can be hit-and-miss

c. The customer rarely, if ever, gets involved with writing "customer tests" using the available tools like FitNesse and Cucumber. We're probably kidding ourselves that we even need a special set of tools distinct from the xUnit frameworks we would use for other kinds of tests, because - chances are - we're going to be writing those tests ourselves anyway

d. Customer tests executed using these tools tend to run slow, even when external dependencies are excluded

e. Relying entirely on top-level tests to check that the work got done right can - and usually does - lead to problems with maintainability later. We might identify a class that could be split off into a component to be reused in other applications, but where are its functional tests? Imagine we could only test a car radio when it's installed in a Ford Mondeo. This is especially pertinent for teams thinking about breaking down monolithic architectures into component-based or service-based designs.

f. When you exclude the UI and external dependencies, you are still a long way from "done" after your customer test has passed. There's many a slip twixt cup and lip.

g. Once we've established a design that passes the customer's test, the main purpose of having automated tests is to catch regressions as the code evolves. For this, we want to be able to test as much of our code as quickly and cheaply as possible. Over-reliance on slower-running customer tests can be at odds with this goal.

With all this in mind, and revisiting the original goal of driving designs directly from the customer's examples, it's difficult to craft a workable single narrative about how we might approach this.

I tend to automate a "happy path" test at the entry point to the domain model, drive the internal design mostly through "classic" TDD, and use test doubles (stubs, mocks and dummies) to exclude external dependencies (as well as to fake complex components I don't want to get into yet - "fake it 'til you make it".) A lot of edge cases get dealt with only in unit tests and with by-eye customer testing. I will work to pass one customer test assertion at a time, running the FitNesse test to get feedback before moving on to the next assertion.
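
Here's a rough sketch of the kind of happy-path test I mean, reusing the DVD library example from earlier. The domain classes (Library, Dvd, Notifications) are hypothetical - in TDD terms, they're the classes this test would drive me to create - and the external notification gateway is excluded with a hand-rolled stub:

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class DonateDvdTest {

        // Stand-in for the real (external) notification gateway.
        static class StubNotifications implements Notifications {
            String lastMessage;
            public void send(String message) { lastMessage = message; }
        }

        @Test
        public void donatedDvdIsAddedToLibraryAndMembersAreNotified() {
            StubNotifications notifications = new StubNotifications();
            Library library = new Library(notifications); // entry point to the domain model

            Dvd dvd = library.donate("The Abyss", 1989);

            // The customer's desired outcomes, taken straight from their examples
            assertTrue(library.contains(dvd));
            assertEquals("New DVD donated: The Abyss (1989)", notifications.lastMessage);
        }
    }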

This approach does lead to three issues:

1. It's not a system test, so there's still more TDD to do after passing the customer's test

2. It produces some duplication of test code, as the customer test will usually ask some of the same questions as the unit tests I write for specific behaviours

3. Even excluding the UI and external dependencies, customer tests still run much slower than unit tests

I solve issue #3 by adapting my FitNesse fixtures to also be JUnit tests that can be run by me as part of continuous regression testing (see an example at https://gist.github.com/jasongorman/74f6a0a049e03b7030ab46e8b01128e7 ). That test is absolutely necessary, because it's typically the only place that checks that we get all of the desired outcomes from a user action. It's the customer test that drives me to wire the objects doing the work together. I prefer to drive the collaborations this way rather than use mock objects, because I have found over the years that an over-reliance on mocks can lead to maintainability issues. I want as few tests as possible that rely on the internal design.
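
That gist is the real example; what follows is just a rough sketch of the shape of the idea, using the same hypothetical DVD library classes as above. The fixture's setters and query method can be bound to a FitNesse (Slim) decision table, while the @Test method lets JUnit run the same check in the IDE and the CI build:

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class DonateDvdFixture {

        private String title;
        private int year;

        // Input columns of the FitNesse decision table map onto these setters...
        public void setTitle(String title) { this.title = title; }
        public void setYear(int year) { this.year = year; }

        // ...and the output column ("dvd in library?") maps onto this query method.
        // StubNotifications is the hand-rolled stub from the earlier sketch.
        public boolean dvdInLibrary() {
            Library library = new Library(new StubNotifications());
            Dvd dvd = library.donate(title, year);
            return library.contains(dvd);
        }

        // The same check, runnable by JUnit as part of continuous regression testing.
        @Test
        public void donatedDvdShouldBeInTheLibrary() {
            setTitle("The Abyss");
            setYear(1989);
            assertTrue(dvdInLibrary());
        }
    }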

Being honest, I don't know how to easily solve issue #2. It would require the ability to compose tests so that we can apply the same assertions to different set-ups and actions. I did experiment with an Assertion interface with a check() method, but ending up with every assertion needing its own implementation just got kerrrazy. I think what's actually needed is a DSL of some kind that hides all of that complexity.
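
For what it's worth, the kind of composition I mean looks roughly like this - a sketch of the experiment described above, not a recommendation, since this is exactly the approach that got unwieldy:

    // The idea: capture a desired outcome once, so the customer test and the
    // unit tests can both apply it to their own set-ups and actions.
    interface Assertion<T> {
        void check(T actual);
    }

    class DvdIsInLibrary implements Assertion<Library> {
        private final Dvd dvd;

        DvdIsInLibrary(Dvd dvd) { this.dvd = dvd; }

        public void check(Library library) {
            if (!library.contains(dvd)) {
                throw new AssertionError("Expected the donated DVD to be in the library");
            }
        }
    }

    // Both the customer test and a unit test could then reuse the same assertion:
    //
    //     new DvdIsInLibrary(dvd).check(library);
    //
    // The trouble, as noted above, is that every assertion ends up needing a
    // little class like this one.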

On issue #1, I've long understood that passing an automated customer test does not mean that we're finished. But there is a strong need to separate the concerns of our application's core logic from its user interface and from external dependencies. Most UIs can actually be unit tested, and if you implement an abstraction for the UI logic, the amount of actual code that directly depends on the UI framework tends to be minimal. All you're really doing is checking that logical views are rendered correctly, and that user actions map correctly onto their logical event handlers. The small sliver of GUI code that remains can be driven by integration tests, usually.
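
As a rough illustration of what I mean by an abstraction for the UI logic - again using the hypothetical DVD library, and not tied to any particular UI framework:

    // A logical view: no UI framework in sight, so it can be faked in a unit test.
    interface DonateDvdView {
        void showDonated(String title, int year);
    }

    // The UI logic. A unit test can call onDonate() directly and assert against
    // a fake view; only the thin sliver of code that wires a real button click
    // to onDonate() needs an integration test.
    class DonateDvdPresenter {
        private final Library library;
        private final DonateDvdView view;

        DonateDvdPresenter(Library library, DonateDvdView view) {
            this.library = library;
            this.view = view;
        }

        // The logical event handler that a real click handler delegates to.
        void onDonate(String title, int year) {
            library.donate(title, year);
            view.showDonated(title, year);
        }
    }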

I don't write system tests to test logic of any kind. The few that I will write - complicated and cumbersome as they usually are - really just check that, once the car is assembled, when you turn the key in the ignition, it starts. A dozen or more "smoke tests" tend to suffice to check that the thing works when everything's plugged in.

So I continue to iterate this chapter, refining the narrative down to "this is how I would do it", but I suspect I will still be dissatisfied with the result until there's a workable solution to the duplication issue.