November 19, 2017
Everything Else Is Details
For pretty much all my freelancing and consulting career, I've strongly advocated driving software development directly from testable end user goals. I'm not talking here about use cases, or the "so that..." part of a user story. I'm talking actual goals. Not "reasons to use the software", but "reasons to build it in the first place".
Although the Agile movement has started to catch up, with ideas like "business stories" and "impact mapping", it's still very much the exception not the rule that teams set out on their journey with a clear destination in mind.
Once goals have been established, the next step is to explore and understand the current model. How do users currently do things? And this is where I see another classic mistake being made by dev teams. They build an understanding of the existing processes, and then just reproduce those as they currently are in code. This can bake in the status quo, making it doubly hard for businesses to adapt and improve.
The rubber meets the road when we work with our customers to build a shared vision of how things will work when our software has been delivered. And, most importantly, how that will help us achieve our goals.
The trick to this - a skill that's sadly still vanishingly rare in our industry - is to paint a clear picture of how the world will look with our software in it, without describing the software itself. A true requirements specification does not commit in any way to the implementation design of a solution. It merely defines the edges of the solution-shaped hole into which anything we create will need to fit.
I think we're getting better at this. But we're still very naïve about it. Goals are still very one-dimensional - typically just focusing on financial objectives - and fail to balance multiple stakeholder perspectives. The Balanced Scorecard has yet to arrive in software development. Goals are usually woolly and vague, too, with no tests we could use to measure how we're doing. And - arguably our biggest crime as an industry - goals are often confused with strategies and solutions. 90% of the requirements specs I read are, in fact, solution designs masquerading as business requirements.
This ought to be the job of a business analyst. Not to tell us what software to build, but instead to describe what problem we need to solve. What we need from them is a clear, testable vision of how the world will be different because of our software. What needs to change? Then our job is to figure out how - if possible - software could help change it. Does your team have this vision?
I continue to strongly recommend that dev teams ditch the backlogs (and any other forms of long-term plans or blueprints), sit down with their customers and other key stakeholders, and work to define a handful of clear, testable business goals.
Everything else is details.
September 3, 2017
Iterating is THE Requirements Discipline
OK. Let's get serious about software requirements, shall we?
The part where we talk to the customer and write specifications and agree acceptance tests and so forth? That's the least important part of figuring out what software we need to build.
You heard me right. Requirements specification is the least important part of requirements analysis.
THE. LEAST. IMPORTANT. PART.
It's 2017, so I'm hoping you've heard of this thing they have nowadays (and since the 1970s) called iterative design. You have? Excellent.
Iterating is the most important part of requirements analysis.
When we iterate our designs faster, testing our theories about what will work in shorter feedback loops, we converge on a working solution sooner.
We learn our way to Building The Right Thing™.
Here's the thing with iterative problem solving processes: the number of iterations matters more than the accuracy of the initial input.
We could agonise over taking our best first guess at the square root of a number, or we could just start with half the input number and let the feedback loop do the rest.
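To make that concrete, here's a minimal sketch (in Python, purely for illustration) of Heron's square-root method. The first guess is deliberately crude - just half the input - because the feedback loop, not the initial accuracy, does the work:

```python
def iterative_sqrt(n, tolerance=1e-9):
    """Heron's method: converge on a square root via a feedback loop.

    The initial guess is deliberately crude - half the input - because
    the number of iterations matters more than the starting accuracy.
    """
    guess = n / 2.0
    while abs(guess * guess - n) > tolerance:
        # Feedback step: average the current guess with n/guess.
        guess = (guess + n / guess) / 2.0
    return guess
```

Even starting from a guess of 12.5 for the square root of 25, it takes only a handful of iterations to get within a billionth of the right answer.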
I don't know if you've been paying attention, but that's the whole bedrock of Agile Software Development. All the meetings and documents and standards in the world - the accoutrements of Big Process - don't mean a hill of beans if you're only allowing yourself feedback from real end users using real working software every, say, 2 years.
So ask your requirements analyst or product owner this question: "What's your plan for testing these theories?"
I'll wager a shiny penny they haven't got one.
July 18, 2017
Why Don't We Eat Our Own Dog Food?
You may already know I'm a big advocate of developers eating our own dog food.
Using the software we create as real end users, in the real world, provides insights that no amount of meetings or documentation can provide.
A quick straw poll I ran on Twitter suggests 2 out of 3 of us haven't actually used the software we're working on for real.
Devs: have you used the software you last worked on a release of as an *end user*?— Codemanship (@codemanship) July 15, 2017
I've been banging this drum for years, for a largely reluctant audience. Managers say "Our users aren't happy with the latest release" and ask me what can be done. And I say "Walk a mile in your customer's shoes". And they say "Thanks for that. But no."
It's such a simple thing to do, and yet somehow so very hard. There's something psychological going on, I reckon. The same reason most hotel chain employees have no idea what it's like to be a customer in their own hotels...
I urge you to try it. Now.
July 10, 2017
Codemanship Bite-Sized - 2-Hour Training Workshops for Busy Teams
One thing that clients mention often is just how difficult it is to make time for team training. A 2 or 3-day course takes your team out of action for a big chunk of time, during which nothing's getting delivered.
For those teams that struggle to find time for training, I've created a spiffing menu of action-packed 2-hour code craft workshops that can be delivered any time from 8am to 8pm.
- Test-Driven Development workshops
  - Introduction to TDD
  - Specification By Example/BDD
  - Stubs, Mocks & Dummies
  - Outside-In TDD
- Refactoring workshops
  - Refactoring 101
  - Refactoring To Patterns
- Design Principles workshops
  - Simple Design & Tell, Don't Ask
  - Clean Code Metrics
To find out more, visit http://www.codemanship.co.uk/bitesized.html
July 5, 2017
A Little Test for My Conceptual Correlation Metric
Here's a little test for my prototype .NET command line tool for calculating Conceptual Correlation. Imagine we have a use case for booking seats on flights for passengers.
The passenger selects the flight they want to reserve a seat on. They choose the seat by row and seat number (e.g., row A, seat 1) and reserve it. We create a reservation for that passenger in that seat.
We write two implementations: one very domain-driven...
And one... not so much.
We run Conceptual.exe over our first project's binary to compare against the use case text, and get a good correlation.
Then we run it over the second project's output and get zero correlation.
You can download the prototype here. What will it say about your code?
Conceptual Correlation - Prototype Tool for .NET
With a few hours' spare time over the last couple of days, I've had a chance to throw together a rough prototype of a tool that calculates the Conceptual Correlation between a .NET assembly (with a .pdb file in the same directory - very important, that!) and a .txt file containing requirements descriptions (e.g., text copied and pasted from your acceptance tests, or use case documents).
You can download it as a ZIP file, and to use it, just unzip the contents to a folder, and run the command-line Conceptual.exe with exactly 2 arguments: the first is the file name of the .NET assembly, the second is the file name of the requirements .txt.
Conceptual.exe "C:\MyProjects\FlightBooking\bin\debug\FlightBooking.dll" "C:\MyProjects\FlightBooking\usecases.txt"
I've been using it as an external tool in Visual Studio, with a convention-over-configuration argument of $(BinDir)\$(TargetName)$(TargetExt) $(ProjectDir)\requirements.txt
I've tried it on some fair-sized assemblies (e.g., Mono.Cecil.dll), and on some humungous text files (the entire text of my 200-page TDD book - all 30,000 words), and it's been pretty speedy on my laptop and the results have been interesting and look plausible.
Note that the tool assumes code names are in PascalCase and/or camelCase.
Sure, it's no Mercedes. At this stage, I just want to see what kind of results folk are getting from their own code and their own requirements. Provided with no warranty and no technical support, use at own risk, your home is at risk if you do not keep up repayments, mind the gap, etc. etc. You know the drill :)
Conceptual.exe uses Mono.Cecil to pull out code names, and LemmaSharp to lemmatize words (e.g., "reporting" and "reports" become "report"). Both are available via NuGet.
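To give a feel for those two steps, here's a rough sketch in Python (not how Conceptual.exe is actually implemented - the crude suffix-stripper below merely stands in for a real lemmatizer like LemmaSharp):

```python
import re

def split_name(name):
    """Split a PascalCase/camelCase/snake_case identifier into lowercase words."""
    words = re.findall(r'[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\d+', name)
    return [w.lower() for w in words]

def naive_lemma(word):
    """Very crude stand-in for a real lemmatizer: strip common suffixes."""
    for suffix in ('ing', 'es', 's'):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word
```

So split_name("MortgageApplicationFactory") yields "mortgage", "application" and "factory", and both "reporting" and "reports" reduce to "report".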
July 2, 2017
Conceptual Correlation - A Working Definition
During an enjoyable four days in Warsaw, Poland, I put some more thought into the idea of Conceptual Correlation as a code metric. (Hours of sitting in airports, planes, buses, taxis, trains and hotel bars give plenty of time for the mind to wander.)
I've come up with a working definition to base a prototype tool on, and it goes something like this:
Conceptual Correlation - the % of non-noise words that appear in names of things in our code (class names, method names, field names, variable names, constants, enums etc) that also appear in the customer's description of the problem domain in which the software is intended for use.
That is, if we were to pull out all the names from our code, parse them into their individual words (e.g., submit_mortgage_application would become "submit" "mortgage" "application"), and build a set of them, then Conceptual Correlation would be the % of that set that appeared in a similar set created by parsing, say, a FitNesse Wiki test page about submitting mortgage applications.
So, for example, a class name like MortgageApplicationFactory might have a conceptual correlation of 67% (unless, of course, the customer actually processes mortgage applications in a factory).
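The arithmetic behind that 67% can be sketched in a few lines (Python for brevity; noise-word filtering and lemmatization are deliberately left out here, and would be implementation choices in a real tool):

```python
import re

def name_words(name):
    """Pull the lowercase words out of a PascalCase/camelCase identifier."""
    return {w.lower() for w in re.findall(r'[A-Za-z][a-z]*', name)}

def conceptual_correlation(code_names, requirements_text):
    """% of distinct code words that also appear in the requirements text.

    A deliberately simplified sketch: exact word matches only, with no
    lemmatization and no noise-word filtering.
    """
    code_words = set()
    for name in code_names:
        code_words |= name_words(name)
    domain_words = {w.lower() for w in re.findall(r'[A-Za-z]+', requirements_text)}
    return 100.0 * len(code_words & domain_words) / len(code_words) if code_words else 0.0
```

MortgageApplicationFactory contributes the words "mortgage", "application" and "factory"; if only the first two appear in the requirements text, the correlation is 2 out of 3, or 67%.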
I might predict that a team following the practices of Domain-Driven Design might write code with a higher conceptual correlation, perhaps with just the hidden integration code (database access, etc) bringing the % down. Whereas a team that are much more solution-driven or technology-driven might write code with a relatively lower conceptual correlation.
For a tool to be useful, it would not only report the conceptual correlation (e.g., between a .NET assembly and a text file containing its original use cases), but also provide a way to visualise and access the "word cloud" to make it easier to improve the correlation.
So, if we wrote code like this for booking seats on flights, the tool would bring up a selection of candidate words from the requirements text to replace the non-correlated names in our code with.
I currently envisage this popping up as an aid when we use a Rename refactoring, perhaps accentuating words that haven't been used yet.
A refactored version of the code would show a much higher conceptual correlation. E.g.,
The devil's in the detail, as always. Would the tool need to make non-exact correlations, for example? Would "seat" and "seating" be a match? Or a partial match? Also, would the strength of the correlation matter? Maybe "seat" appears in the requirements text many times, but only once in the code. Should that be treated as a weaker correlation? And what about words that appear together? Or would that be making it too complicated? Methinks a simple spike might answer some of these questions.
March 6, 2017
Start With A Goal. The Backlog Will Follow.
The little pairing project I'm doing with my 'apprentice' Will at the moment started with a useful reminder of just how powerful it can be to start development with goals, instead of asking for a list of features.
As usual, it was going to be some kind of community video library (it's always a community video library, when will you learn!!!), and - with my customer role-playing hat on - I envisaged the usual video library-ish features: borrowing videos, returning videos, donating videos, and so on.
But, at this point in my mentoring, I'm keen for Will to get some experience working in a wider perspective, so I insisted we started with a business goal.
I stipulated that the aim of the video library was to enable cash-strapped movie lovers to watch a different movie every day for a total cost of less than £100 a year. We fired up Excel and ran some numbers, and figured out that - in a group with similar tastes (e.g., sci-fi, romantic comedies, etc.) - you might need only 40 people to club together to achieve this.
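The back-of-envelope arithmetic runs something like this (the £10 average DVD price is an illustrative assumption, not a figure from our actual spreadsheet):

```python
# Back-of-envelope check of the movie club numbers.
movies_per_year = 365      # a different movie every day
budget_per_member = 100    # pounds per year, the upper bound
members = 40
dvd_price = 10             # assumed average price per title, in pounds

pot = members * budget_per_member              # shared annual pot: 4,000 pounds
titles_affordable = pot // dvd_price           # 400 titles - more than one per day
cost_per_movie = budget_per_member / movies_per_year   # roughly 27p per movie
```

So a club of 40 comfortably clears the bar: each member pays under £100 a year and watches a movie a day for roughly 27p a time.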
This reframed the whole exercise. A movie club with 40 members could run their library out of a garden shed, using pencil and paper to keep basic records of who has what on loan. They could run an online poll to decide what titles to buy each month. They didn't really need software tools for managing their library.
The hard part, it seemed to us, would be finding people in your local area with similar tastes in movies. So the focus of our project shifted from managing a collection of DVDs to connecting with other movie lovers to form local clubs.
Out of that goal, a small feature list almost wrote itself. This is how planning should work; not with backlogs of feature requests, but with customers and developers closely collaborating to achieve goals.
It's similar in many ways to how TDD should work - in fact, arguably, it is TDD (except we start with business tests). When I'm showing teams how to do TDD, I advise them not to think of a design and then start writing unit tests for all the classes and methods and getters and setters they think they're going to need. Start with a customer test, and drive your internal design directly from that. Classes and methods and getters and setters will naturally emerge as and when they're needed.
When I run the Codemanship Agile Software Development workshop, we do it backwards to illustrate this point. Teams are tasked with coming up with internal designs, and then I ask them to write some customer tests afterwards. Inevitably, they realise their internal design isn't what they really needed. Stepping further back, I ask them to describe the business goals of their software, and at least half the teams realise they're building the wrong thing.
So, my advice... Ditch the backlog and start with a goal. The rest will follow.
December 28, 2016
The Best Software Developers Know It's Not All 'Hacking'
A social media debate that appears to have been raging over the Xmas break was triggered by some tech CEO claiming that the "best developers" would be hacking over the holidays.
Putting aside just how laden with cultural assumptions that tweet was (and, to be fair, many of the angry responses to it), there is a wider question of what makes a software developer more effective.
Consider the same tweet but in a different context: the "best screenwriters" will spend their holiday "hacking" screenplays. There's an assumption there that writing is all there is to it. Look at what happens, though, when screenwriters become very successful. Their life experiences change. They tend to socialise with other movie people. They move out of their poky little downtown apartments and move into more luxurious surroundings. They exchange their beat-up old Nissan for a brand new Mercedes. They become totally immersed in the world of the movie business. And then we wonder why there are so many movies about screenwriters...
The best screenwriters start with great stories, and tell their stories in interesting and authentic voices. To write a compelling movie about firefighters, they need to spend a lot of time with firefighters, listening to their stories and internalising their way of telling them.
First and foremost, great software developers solve real problems. They spend time with people, listen to their stories, and create working solutions to their problems told in the end user's authentic voice.
What happens when developers withdraw from the outside world, and devote all of their time to "hacking" solutions, is the equivalent of the slew of unoriginal and unimaginative blockbuster special effects movies coming out of Hollywood in recent years. They're not really about anything except making money. They're cinema for cinema's sake. And rehashes of movies we've already seen, in one form or another, because the people who make them are immersed in their own world: movie making.
Ironically, if a screenwriter actually switched off their laptop and devoted Xmas to spending time with the folks, helping out with Xmas dinner, taking Grandma for a walk in her wheelchair, etc, they would probably get more good material out of not writing for a few days.
Being immersed in the real world of people and their problems is essential for a software developer. It's where the ideas come from, and if there's one thing that's in short supply in our industry, it's good ideas.
I've worked with VCs and sat through pitches, and they all have a Decline Of The Hollywood Machine feel to them. "It's like Uber meets Moonpig" is our equivalent of the kind of "It's like Iron-Man meets When Harry Met Sally" pitches studio executives have to endure.
As a coach, when I meet organisations and see how they create software, as well as the usual technical advice about shortening feedback cycles and automating builds and deployment etc., increasingly I find myself urging teams to spend much more time with end users, seeing how the business works, listening to their stories, and internalising their voices.
As a result, many teams realise they've been solving the wrong problems and ultimately building the wrong software. The effect can be profound.
I'll end with a quick illustration: my apprentice and I are working on a simple exercise to build a community video library. As soon as we saw those words "video library", we immediately envisaged features for borrowing and returning videos, and that sort of library-esque thing.
But hang on a moment! When I articulated the goal of the video library - to allow people to watch a movie a day for just 25p per movie - and we broke that down into a business model where 40+ people in the same local area who like the same genres of movies club together - it became apparent that what was really needed was a way for these people to find each other and form clubs.
Our business model (25p to watch one movie) precluded the possibility of using the mail to send and return DVDs. So these people would have to be local to wherever the movies were kept. That could just be a shed in someone's garden. No real need for a sophisticated computer system to manage titles. They would just need some shelves, and maybe a book to log loans and returns.
So the features we finally came up with had nothing to do with lending and returning DVDs, but was instead about forming local movie clubs.
I doubt we'd have come to that realisation without first articulating the problem. It's not rocket science. But still, too few teams approach development in this way.
So, instead of the "It's like GitHub, but for cats" ideas that start from a solution, take some time out to live in the real world, and keep your eyes and ears open for when someone says "You know what really annoys me?", because an idea for the Next Big Thing might follow.
November 27, 2016
Software Craftsmanship is a Requirements Discipline
After the smoke and thunder of the late noughties software craftsmanship movement had cleared, with all its talk of masters and apprentices and "beautiful code", we got to see what code craft was really all about.
At its heart, crafting high quality code is about leaving it open to change. In essence, software craftsmanship is a requirements discipline; specifically, enabling us to keep responding to new requirements, so that customers can learn from using our software and the product can be continually improved.
Try as we might to build the right thing first time, by far the most valuable thing we can do for our customers is allow them to change their minds. Iterating is the ultimate requirements discipline. So much value lies in empirical feedback, as opposed to the untested hypotheses of requirements specifications.
Crafting code to minimise barriers to change helps us keep feedback cycles short, which maximises customer learning. And it helps us to maintain the pace of innovation for longer, effectively giving the customer more "throws of the dice" at the same price before the game is over.
It just so happens that things that make code harder to change also tend to make it less reliable (easier to break) - code that's harder to understand, code that's more complex, code that's full of duplication, code that's highly interdependent, code that can't be re-tested quickly and cheaply, etc.
And it just so happens that writing code that's easy to change - to a point (that most teams never reach) - is also typically quicker and cheaper.
Software craftsmanship can create a virtuous circle: code that's more open to change, which also makes it more reliable, and helps us deliver value sooner and for longer. It's a choice that sounds like a no-brainer. And, couched in those terms, it should be an easy sell. Why would a team choose to deliver buggier software, later, at a higher cost, with less opportunity to improve it in the next release?
The answer is: skills and investment. But that's a whole other blog post!