February 1, 2018

Learn TDD with Codemanship

BDD & Specification By Example - Where Did We Go Wrong?

I've been saving this post up for a while, but with a bit of pre-dinner free time I wanted to put it out there now.

I meet a lot of teams, and one thing many of them tell me is that the "customer tests" they've been driving their designs from are actually written by the developers, not the customer.

Sure, they're written using a "Behaviour-Driven Development" or "Acceptance Testing" tool like Cucumber or FitNesse. But if you build a "granny annex" on your house and there's no granny living in it, it's just an "annex".

We've dropped the ball on this. The CHAOS report, published every year by the Standish Group, consistently cites lack of customer involvement as the number one factor in project failure. A tool won't fix that.

Especially when that tool wasn't designed with customer collaboration in mind. When your "Getting Started" guide begins "First, install Visual Studio..." or requires your customer to learn a mark-up language or to use version control, you're bound to have a hard time getting them to engage in the process.

Increasingly, I work with teams who want to somehow connect the way their customer actually prefers to capture examples with the way devs like to automate tests. 90% of the time, that means pulling data out of Excel spreadsheets - still the most widely used tool in both communities - into unit tests. Some unit testing frameworks even have that facility built in (e.g., MSTest for .NET). But reading data from spreadsheets is child's play for most developers. With OLE DB or JDBC, for example, a spreadsheet's just a database.

But, regardless of the tools, the problem most teams need to solve is a people problem. I've found that close customer involvement is so critical to the chances of a team succeeding at solving the customer's problems that I actually stop development until they engage at the level we need them to. No play? No code.

The mistake many of us make is to give them a choice. "Would you like to spend a lot of time with us discussing requirements and playing with candidate releases and giving us feedback?" "No thanks, ta very much. See you in a year's time."

We made a rod for our backs by allowing them to be absentee partners and trying to figure out for them what they want and need. Specification By Example presents us with an opportunity to make the relationship clearer. The customer has to be "trained" to understand that if they haven't agreed a test for it, they ain't gonna get it.

Learn TDD with Codemanship

A Bit of Old School BDD with NUnit & MS Excel

I'm going Old School this morning with my pairing partner, and while she's popped out for a meeting, I thought I'd quickly jot down what we've been working on.

Back in the good old days before BDD/ATDD frameworks, when we wanted to automate customer tests we just captured the customer's example data in something like MS Excel and then wrote a bit of code to read that data into a unit test. (That, essentially, is what SBE tools do, just with some bells and whistles.)

For example, imagine our customer wants to be able to calculate square roots using the software. We could agree an acceptance test, in the trendy hipster "Given...When...Then..." style, and put that in a spreadsheet, like so.
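The example data might look something like this (the exact layout is purely illustrative): given a number [input], when we calculate its square root, then the answer should be [expected].

input     expected
4         2
6.25      2.5
144       12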

If we name the cell range containing the example data "examples" (for ease of extracting using OLE DB), and save this spreadsheet in the root directory of our Visual Studio test project, then we can relatively easily suck out that data to provide NUnit test cases for a parameterised test with arguments that match the data in the table.

Here's a complete source listing for our basic spike.
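Something along these lines - a minimal sketch rather than the original listing. It assumes the spreadsheet is saved as examples.xlsx and copied alongside the test binaries, and MathsUtils is a hypothetical stand-in for whatever class is actually being test-driven:

using System.Collections.Generic;
using System.Data.OleDb;
using NUnit.Framework;

// Hypothetical stand-in for the class the team is test-driving.
public static class MathsUtils
{
    public static double SquareRoot(double input) => System.Math.Sqrt(input);
}

[TestFixture]
public class SquareRootTests
{
    // Pulls every row out of the named range "examples" in the spreadsheet
    // and turns it into an NUnit test case. Assumes examples.xlsx sits in
    // the test's working directory and the range has a header row.
    public static IEnumerable<TestCaseData> Examples()
    {
        const string connectionString =
            "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=examples.xlsx;" +
            "Extended Properties='Excel 12.0 Xml;HDR=YES'";

        using (var connection = new OleDbConnection(connectionString))
        {
            connection.Open();
            var select = new OleDbCommand("SELECT * FROM examples", connection);
            using (var reader = select.ExecuteReader())
            {
                while (reader.Read())
                {
                    yield return new TestCaseData(
                        reader.GetDouble(0),   // given a number...
                        reader.GetDouble(1));  // ...the expected square root
                }
            }
        }
    }

    [TestCaseSource(nameof(Examples))]
    public void CalculatesSquareRoot(double input, double expectedRoot)
    {
        Assert.That(MathsUtils.SquareRoot(input), Is.EqualTo(expectedRoot).Within(0.0001));
    }
}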

(We're going to try and refine this a bit, and see if it can't be made more general. One of the downsides of using a custom TestCaseSource is that we can't parameterise it easily to specify different Excel files and different ranges. Though why such a mechanism doesn't already exist is a bit of a mystery, after 15+ years of NUnit.)

January 3, 2018

Learn TDD with Codemanship

Professionalism & the "Customer"

Just a few words to add to a post I wrote a few days ago about TDD & "professionalism". I scribbled a quick Venn diagram to illustrate my ideas about stuff software development "professionals" should aim for.

A few good folk have understandably raised objections, which is the natural consequence of saying stuff on the Internet. In particular, some folk object to the idea that a "professional" doesn't write code the customer didn't ask for.

What if the customer doesn't know what they want? Should we build something and see if they like it? Call it an "experiment". We could do that. But before we do, we could discuss it with the customer and seek their input on what we're planning to build. A mock-up, a storyboard, or some other lo-fi prototype could clue them in as to what exactly it is we're planning to try for them.

And what if we're building software for the general public? How do we seek permission to try ideas?

This is the problem with words.

What exactly is a "customer"? Different teams will be working in different situations with different kinds of "customer". And there are many understandings of what that word means.

To me, the "customer" is whoever decides what the money gets spent on. In relation to professionalism, we can look at our relationship with our "customer" in many ways.

Think of doctors and patients: the doctor doesn't ask the patient "What medicine would you like me to prescribe?" Instead, she examines the patient, diagnoses the illness, and proposes a treatment. But she still seeks permission from the patient to try it. (Unless the patient is unable to give consent.) Arguably, it would be "unprofessional" of a doctor to administer a treatment without telling the patient what it is, what it's supposed to do, and what side effects it might have. There is a dialogue, then there is consent. The patient decides yea or nay, usually.

Or think of it as gambling. In the casino of software development, decisions are made to bet sums of money on features and changes. Some bets will be bigger than others. Some features will have a potentially larger pay-out than others. In that scenario, where we don't know what the outcome is going to be (which is - let's be honest - how it really is in software development anyway), who are we? Are we the gambler? Or are we the croupier? Do we take their money and tell them to go to the bar while we place bets on their behalf? Or do we ask them to sit at the table, and at least seek consent for every bet before it's placed?

And when it's us deciding what features to try, aren't we the "customer"? In this situation, it's our money we're gambling with. Do we randomly write code and see how it turns out? Or do we take aim before we fire? I've found it to be a bad idea to start writing code without a clear idea of what that code's supposed to do, regardless of whether this is decided in a conversation with a "customer", or in a conversation with myself.

One thing is clear to me (and feel free to disagree): all software development is an experiment. So, personally, I don't distinguish between a "spike" and a "finished solution". They're all spikes. I've found I'm genuinely no quicker producing working code when I cut corners. So my spikes have automated tests, and the code's maintainable. (I rarely even write sample code (e.g., for blog posts) without tests any more.) And they follow a conversation in which the purpose of the spike is explicitly agreed, and consent - even if it's my own consent - is given to do it.

Now, like I said in the original post: I don't find discussions about professionalism very helpful. Words are difficult. However I spin it, some folk will object. And that's fine. Don't wanna do it my way? Don't do it. I'm not in charge of anyone except myself.

And isn't that, after all is said and done, the real definition of a "professional"?

November 19, 2017

Learn TDD with Codemanship

Everything Else Is Details

For pretty much all my freelancing and consulting career, I've strongly advocated driving software development directly from testable end user goals. I'm not talking here about use cases, or the "so that..." part of a user story. I'm talking actual goals. Not "reasons to use the software", but "reasons to build it in the first place".

Although the Agile movement has started to catch up, with ideas like "business stories" and "impact mapping", it's still very much the exception not the rule that teams set out on their journey with a clear destination in mind.

Once goals have been established, the next step is to explore and understand the current model. How do users currently do things? And this is where I see another classic mistake being made by dev teams. They build an understanding of the existing processes, and then just reproduce those as they currently are in code. This can bake in the status quo, making it doubly hard for businesses to adapt and improve.

The rubber meets the road when we work with our customers to build a shared vision of how things will work when our software has been delivered. And, most importantly, how that will help us achieve our goals.

The trick to this - a skill that's sadly still vanishingly rare in our industry - is to paint a clear picture of how the world will look with our software in it, without describing the software itself. A true requirements specification does not commit in any way to the implementation design of a solution. It merely defines the edges of the solution-shaped hole into which anything we create will need to fit.

I think we're getting better at this. But we're still very naïve about it. Goals are still very one-dimensional - typically just focusing on financial objectives - and fail to balance multiple stakeholder perspectives. The Balanced Scorecard has yet to arrive in software development. Goals are usually woolly and vague, too, with no tests we could use to measure how we're doing. And - arguably our biggest crime as an industry - goals are often confused with strategies and solutions. 90% of the requirements specs I read are, in fact, solution designs masquerading as business requirements.

This ought to be the job of a business analyst. Not to tell us what software to build, but instead to describe what problem we need to solve. What we need from them is a clear, testable vision of how the world will be different because of our software. What needs to change? Then our job is to figure out how - if possible - software could help change it. Does your team have this vision?

I continue to strongly recommend that dev teams ditch the backlogs (and any other forms of long-term plans or blueprints), sit down with their customers and other key stakeholders, and work to define a handful of clear, testable business goals.

Everything else is details.

September 3, 2017

Learn TDD with Codemanship

Iterating is THE Requirements Discipline

OK. Let's get serious about software requirements, shall we?

The part where we talk to the customer and write specifications and agree acceptance tests and so forth? That's the least important part of figuring out what software we need to build.

You heard me right. Requirements specification is the least important part of requirements analysis.


It's 2017, so I'm hoping you've heard of this thing they have nowadays (and since the 1970s) called iterative design. You have? Excellent.

Iterating is the most important part of requirements analysis.

When we iterate our designs faster, testing our theories about what will work in shorter feedback loops, we converge on a working solution sooner.

We learn our way to Building The Right Thing™.

Here's the thing with iterative problem solving processes: the number of iterations matters more than the accuracy of the initial input.

We could agonise over taking our best first guess at the square root of a number, or we could just start with half the input number and let the feedback loop do the rest.
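For instance, here's a minimal sketch of that square root idea - Heron's method, with a deliberately lazy first guess:

using System;

public static class Roots
{
    // Heron's method: feed the error back into the next guess, and even a
    // lazy starting point (half the input) converges in a handful of passes.
    // (Negative inputs are ignored for brevity.)
    public static double SquareRoot(double input)
    {
        var guess = input / 2;
        while (Math.Abs(guess * guess - input) > 0.0001)
        {
            guess = (guess + input / guess) / 2;
        }
        return guess;
    }
}

The accuracy of that first guess barely matters; the feedback loop does the work.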

I don't know if you've been paying attention, but that's the whole bedrock of Agile Software Development. All the meetings and documents and standards in the world - the accoutrements of Big Process - don't mean a hill of beans if you're only allowing yourself feedback from real end users using real working software every, say, 2 years.

So ask your requirements analyst or product owner this question: "What's your plan for testing these theories?"

I'll wager a shiny penny they haven't got one.

July 18, 2017

Learn TDD with Codemanship

Why Don't We Eat Our Own Dog Food?

You may already know I'm a big advocate of developers eating our own dog food.

Using the software we create as real end users, in the real world, provides insights that no amount of meetings or documentation can provide.

A quick straw poll I ran on Twitter suggests 2 out of 3 of us haven't actually used the software we're working on for real.

I've been banging this drum for years, for a largely reluctant audience. Managers say "Our users aren't happy with the latest release" and ask me what can be done. And I say "Walk a mile in your customer's shoes". And they say "Thanks for that. But no."

It's such a simple thing to do, and yet somehow so very hard. There's something psychological going on, I reckon. The same reason most hotel chain employees have no idea what it's like to be a customer in their own hotels...

I urge you to try it. Now.

July 10, 2017

Learn TDD with Codemanship

Codemanship Bite-Sized - 2-Hour Training Workshops for Busy Teams

One thing that clients mention often is just how difficult it is to make time for team training. A 2 or 3-day course takes your team out of action for a big chunk of time, during which nothing's getting delivered.

For those teams that struggle to find time for training, I've created a spiffing menu of action-packed 2-hour code craft workshops that can be delivered any time from 8am to 8pm.

Choose from:

  • Test-Driven Development workshops

    • Introduction to TDD

    • Specification By Example/BDD

    • Stubs, Mocks & Dummies

    • Outside-In TDD

  • Refactoring workshops

    • Refactoring 101

    • Refactoring To Patterns

  • Design Principles workshops

    • Simple Design & Tell, Don’t Ask

    • S.O.L.I.D.

    • Clean Code Metrics

To find out more, visit http://www.codemanship.co.uk/bitesized.html

July 5, 2017

Learn TDD with Codemanship

A Little Test for My Conceptual Correlation Metric

Here's a little test for my prototype .NET command line tool for calculating Conceptual Correlation. Imagine we have a use case for booking seats on flights for passengers.

The passenger selects the flight they want to reserve a seat on. They choose the seat by row and seat number (e.g., row A, seat 1) and reserve it. We create a reservation for that passenger in that seat.

We write two implementations: one very domain-driven...

And one... not so much.
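The listings below are illustrative stand-ins rather than the actual code: the first borrows its vocabulary straight from the use case, the second does the same job with names that have nothing to do with flights, seats or passengers.

using System.Collections.Generic;

// Illustrative only: the names are lifted straight from the use case text.
public class Flight
{
    private readonly List<Reservation> reservations = new List<Reservation>();

    public Reservation ReserveSeat(string passenger, char row, int seatNumber)
    {
        var reservation = new Reservation(passenger, row, seatNumber);
        reservations.Add(reservation);
        return reservation;
    }
}

public class Reservation
{
    public Reservation(string passenger, char row, int seatNumber)
    {
        Passenger = passenger;
        Row = row;
        SeatNumber = seatNumber;
    }

    public string Passenger { get; }
    public char Row { get; }
    public int SeatNumber { get; }
}

// Illustrative only: the same behaviour, but none of these names appear
// anywhere in the use case.
public class DataManager
{
    private readonly List<object[]> records = new List<object[]>();

    public object[] Process(string value, char key1, int key2)
    {
        var record = new object[] { value, key1, key2 };
        records.Add(record);
        return record;
    }
}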

We run Conceptual.exe over our first project's binary to compare against the use case text, and get a good correlation.

Then we run it over the second project's output and get zero correlation.

QED :)

You can download the prototype here. What will it say about your code?

Learn TDD with Codemanship

Conceptual Correlation - Prototype Tool for .NET

With a few hours' spare time over the last couple of days, I've had a chance to throw together a simple rough prototype of a tool that calculates the Conceptual Correlation between a .NET assembly (with a .pdb file in the same directory - very important, that!) and a .txt file containing requirements descriptions (e.g., text copied and pasted from your acceptance tests, or use case documents).

You can download it as a ZIP file, and to use it, just unzip the contents to a folder, and run the command-line Conceptual.exe with exactly 2 arguments: the first is the file name of the .NET assembly, the second is the file name of the requirements .txt.


Conceptual.exe "C:\MyProjects\FlightBooking\bin\debug\FlightBooking.dll" "C:\MyProjects\FlightBooking\usecases.txt"

I've been using it as an external tool in Visual Studio, with a convention-over-configuration argument of $(BinDir)\$(TargetName)$(TargetExt) $(ProjectDir)\requirements.txt

I've tried it on some fair-sized assemblies (e.g., Mono.Cecil.dll), and on some humungous text files (the entire text of my 200-page TDD book - all 30,000 words), and it's been pretty speedy on my laptop and the results have been interesting and look plausible.

It assumes code names are in PascalCase and/or camelCase.

Sure, it's no Mercedes. At this stage, I just want to see what kind of results folk are getting from their own code and their own requirements. Provided with no warranty and no technical support, use at your own risk, your home is at risk if you do not keep up repayments, mind the gap, etc. etc. You know the drill :)

Conceptual.exe uses Mono.Cecil to pull out code names, and LemmaSharp to lemmatize words (e.g., "reporting" and "reports" become "report"). Both are available via NuGet.

Have fun!

July 2, 2017

Learn TDD with Codemanship

Conceptual Correlation - A Working Definition

During an enjoyable four days in Warsaw, Poland, I put some more thought into the idea of Conceptual Correlation as a code metric. (Hours of sitting in airports, planes, buses, taxis, trains and hotel bars give plenty of time for the mind to wander.)

I've come up with a working definition to base a prototype tool on, and it goes something like this:

Conceptual Correlation - the % of non-noise words that appear in names of things in our code (class names, method names, field names, variable names, constants, enums etc) that also appear in the customer's description of the problem domain in which the software is intended for use.

That is, if we were to pull out all the names from our code, parse them into their individual words (e.g., submit_mortgage_application would become "submit" "mortgage" "application"), and build a set of them, then Conceptual Correlation would be the % of that set that appeared in a similar set created by parsing, say, a FitNesse Wiki test page about submitting mortgage applications.

So, for example, a class name like MortgageApplicationFactory might have a conceptual correlation of 67% (unless, of course, the customer actually processes mortgage applications in a factory).
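A minimal sketch of that calculation (ignoring noise words and lemmatization, so "applications" and "application" wouldn't match here) might look like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

public static class ConceptualCorrelation
{
    // Splits identifiers (PascalCase, camelCase or snake_case) and plain prose
    // into lower-case words.
    private static IEnumerable<string> Words(string text) =>
        Regex.Split(text, @"(?<=[a-z0-9])(?=[A-Z])|[^A-Za-z0-9]+")
             .Where(word => word.Length > 0)
             .Select(word => word.ToLowerInvariant());

    // % of distinct words used in code names that also appear in the
    // customer's description of the problem domain.
    public static double Calculate(IEnumerable<string> codeNames, string requirementsText)
    {
        var codeWords = new HashSet<string>(codeNames.SelectMany(Words));
        var domainWords = new HashSet<string>(Words(requirementsText));
        return codeWords.Count == 0
            ? 0
            : 100.0 * codeWords.Count(domainWords.Contains) / codeWords.Count;
    }
}

Feeding it the name MortgageApplicationFactory and a sentence about submitting a mortgage application gives the 67% above: "mortgage" and "application" correlate, "factory" doesn't.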

I'd predict that a team following the practices of Domain-Driven Design would write code with a higher conceptual correlation, perhaps with just the hidden integration code (database access, etc.) bringing the % down, whereas a team that's much more solution-driven or technology-driven would write code with a relatively lower conceptual correlation.

For a tool to be useful, it would not only report the conceptual correlation (e.g., between a .NET assembly and a text file containing its original use cases), but also provide a way to visualise and access the "word cloud" to make it easier to improve the correlation.

So, if we wrote code like this for booking seats on flights, the tool would bring up a selection of candidate words from the requirements text to replace the non-correlated names in our code with.

I currently envisage this popping up as an aid when we use a Rename refactoring, perhaps accentuating words that haven't been used yet.

A refactored version of the code would show a much higher conceptual correlation. E.g.,

The devil's in the detail, as always. Would the tool need to make non-exact correlations, for example? Would "seat" and "seating" be a match? Or a partial match? Also, would the strength of the correlation matter? Maybe "seat" appears in the requirements text many times, but only once in the code. Should that be treated as a weaker correlation? And what about words that appear together? Or would that be making it too complicated? Methinks a simple spike might answer some of these questions.