July 2, 2018
Level 4 Agile Maturity

I recently bought new carpets for my home, and the process of getting a quote was very interesting. First, I booked an appointment online for someone to come round and measure up. This appointment took about an hour, and much of that time was spent entering measurements into a software application that created a 2D model of the rooms.
Then I visited a local-ish store - this was a big national chain - and discussed choices and options and prices. This took about an hour and a half, most of which was spent with the sales adviser reading the measurements off a print-out of the original data set and typing them into a sales application to generate a quote.
There were only 3 sales people on the shop floor, and it struck me that all this time spent re-entering data that someone had already entered into a software application was time not spent serving customers. How many sales, I wondered, might be lost because there were no sales people free to serve? We discussed this, and the sales adviser agreed that this system very probably cost sales, and lots of them. (Only the previous week I had visited the local, local shop for this chain, and walked out because nobody was free to serve me.)
With more time and research, we might have been able to put a rough figure on potential sales lost during this data re-entering activity for the entire chain (400 stores).
As a software developer, this problem struck me immediately. It had never really occurred to the sales advisor before, he told me. We probably all have stories like this. I can think of many times during my 25-year career where I've noticed a problem that a piece of software might be able to solve. We tend to have that problem-solving mindset. We just can't help ourselves.
And this all reminded me of a revelation I had maybe 16 years ago, working on a dev team that had temporarily lost its project manager and requirements analyst, and had nobody telling us what to build. So we went to the business and asked "How can we help?"
It turned out there was a major, major problem that was IT-related, and we learned that the IT department had steadfastly ignored their pleas to solve it for years. So we said "Okay, we'll have a crack at it."
We had many meetings with key business stakeholders, which led to us identifying roughly what the problem was and creating a Balanced Scorecard of business goals that we'd work directly towards.
We shadowed end users who worked in the processes that we needed to improve to see what they did and think about how IT could make it easier. Then we iteratively and incrementally reworked existing IT systems specifically to achieve those improvements.
For several months, it worked like a dream. Our business customers were very happy with the progress we were making. They'd never had a relationship with an IT team like this before. It was a revelation to them and to us.
But IT management did not like it. Not one bit. We weren't following a plan. They wanted to bring us back to heel, to get project management in place to tell us what to do, and to get back to the original plan of REPLACING ALL THE THINGS.
But for 4 shiny happy months I experienced a different kind of software development. Like Malcolm McDowell in Star Trek Generations, I experienced the bliss of the Nexus and would now do pretty much anything to get back there.
So, ever since, I've encouraged dev teams to take charge of their destinies in this way. To me, it's a higher level of requirements maturity. We progress from:
1. Executing a plan, to
2. Building a product, to
3. Solving real problems people bring to us, to
4. Going out there and pro-actively seeking problems we could solve
We evolve from being told "do this" to being told "build this" to being told "solve this" to eventually not being told at all. We progress from being passive executors of plans and builders of features to being active, engaged stakeholders in the business, instigating the work we do in response to business needs and opportunities that we find or create.
For me, this is the partnership that so many dev teams aspire to, but can never reach because management won't let them. Just like, ultimately, they wouldn't let us in that particular situation.
But I remain convinced it's the next step in the evolution of software development: one up from Agile. It is inevitable*.
*...that we will pretend to do it for certifications while the project office continues to be the monkey on our backs
November 7, 2017
Why Agile's Not For Me

There's a growing consensus among people who've been involved with Agile Software Development since the early (pre-Snowbird) days that something is rotten in the state of Agile.
Having slowly backed out of the Agile movement over the last decade or more (see my semi-jocular posts on Post-Agilism from 2007), I approach the movement as a fairly skeptical observer.
Talking with folk both inside and outside the Agile movement - and many with one foot in and one foot out - has highlighted for me where the wheels came off, so to speak. And it's a story that's by no means unique to Agile Software Development. Like all good ideas in software, it's never long before the money starts taking an interest and the pure ideas that it was founded on get corrupted.
1. Too Much Emphasis On Working Software
But, arguably, Agile Software Development was fundamentally flawed straight out of the gate (or straight out of the ski resort, more accurately). If I look for a foundation for Agile, it clearly has its roots in the concept of evolutionary software development. Evolution is a goal-seeking algorithm that searches for an optimum solution by iterating designs rapidly - the more rapidly the better - and feeding back in what we learn with each iteration to improve our solution.
There are two key words in that description: iterating and goal-seeking. There is no mention of goals in the original Agile Manifesto. The manifesto stipulates that the measure of progress is "working software". It does not address the question of why we should build that software in the first place.
And so, many Agile teams - back in the days when Extreme Programming was still a thing - focused on iterating software designs to solve poorly-defined - or not defined at all, let's face it - business problems. This is pretty much guaranteed to fail. But, bless our little cotton socks, because we set ourselves the goal of delivering "working software", we tended to walk away thinking we'd succeeded. Our customers... not so much.
This was the crack in Agile through which the project office snuck back in. (More about them later.)
2. Not Enough Emphasis On Working Software
As Agile evolved as a brand, more and more of us tried to paint ourselves in the colours of management consultants. Because, let's be frank, that's where the big bucks are. People who would once have been helping you to fix your build script were now suddenly self-professed McKinsey-style business gurus telling you how to "maximise the flow of value" in your enterprise, often to comic effect because nobody outside of the IT department took us seriously.
And then, one day - to everyone's horror - somebody outside the IT department did start taking us seriously, and suddenly it wasn't funny any more. Agile "crossed the chasm", and now people were talking about "going Agile" in the boardroom. Management and business magazines now routinely run articles about Agile, typically seeking input from people I've certainly never heard of who are now apparently world-leading experts. None of these people has heard of Kent Beck or Ward Cunningham or Brian Marick or any other signatory of the original Agile Manifesto. Agile today is very much in the hands of the McKinseys of this world. A classic "be careful what you wish for" moment for those from the IT department who aspired to be dining at the top table of consulting.
Agile's now Big Business. And the business of Agile is going BIG. Like every good and pure thing that falls into the hands of management consultants, Agile has mutated from a small, beautiful bird singing a twinkly tune to a bloated enterprise albatross with a foghorn.
3. We Didn't Nuke The Project Office From Orbit To Be Sure
I'm often found hanging around on street corners muttering to myself incoherently about the leadership class. Well, it's good to have a hobby.
Across the world - and especially in the UK - we have a class of people who have no actual practical skills or specific expertise to speak of, but a compelling sense of entitlement that they should be in charge, often of things they barely understand.
In the pre-Agile Manifesto world, IT was ruled by the leadership class. There was huge emphasis on processes, driven by the creation of documents, for the benefit of people who were neither using the software nor writing it. This was a non-programmer's idea of what programming should be. In the late 1990s, the project office was the Alpha and the Omega of software and systems development. People who'd never written a line of code in their lives were telling people who do it day-in and day-out how it should be done.
Because, if they let programmers make the decisions, they'll do it wrong!!! And, to be fair, we often did do it wrong. We built the wrong thing, and we built it wrong. It was our fault. We let the project office in by frequently disappointing our customers. But their solution just meant that we still did it wrong, only now we did it wrong on a much grander scale.
And just as we developers kidded ourselves that, because we delivered working software, that meant we had succeeded, managers deluded themselves that - because the team followed the prescribed processes - the customer's needs had been met.
Well, nope. We ticked the boxes while the customer got ticked off.
It turns out that the working relationship between software developers and their customers is, and always has been, the crux of the problem. Teams that work closely and communicate effectively with customers tend to build the right thing, at least. There's no process, standard or boxes-and-arrows diagram that can fix a dysfunctional developer-customer relationship. CMMi all you like. It doesn't help in the end. And, as someone who specialised in software process engineering and wore the robes and pointy hat of a Chief Architect, I would know.
The Agile Manifesto was a reaction to the Big Process top-heavy approach that had failed us so badly in the previous decades. Self-organising teams should work directly with customers and do the simplest things to deliver value. Why write a big requirements specification when we can have a face-to-face conversation with the customer? Why create a 200-page architecture document when developers can just gather round a whiteboard when they need to talk about design?
XP in particular seemed to be a welcome death knell for value-sucking Plan-Driven, Big Architecture, Big Process roles. It was the end for those projects like the one where I was the only developer but for some reason reported to three project managers, spending a full day every week travelling the country helping them to revise their constantly out-of-date Gantt charts.
And, for a while, it was working. The early noughties was a Golden Age for me of working on small teams, communicating directly with customers, making the technical decisions that needed to be made, and doing it our way.
But the project office wasn't going to just slink away and die in a corner. People with power rarely relinquish it voluntarily. And they have the power to make sure they don't need to.
Just as before, we let them back in by disappointing our customers. A lack of focus on end business goals - real customer needs - and too much focus initially on the mechanics of delivering working software created the opportunity for people who don't write code to proclaim "Look, the people writing the code are doing Agile wrong!"
And, again, their solution is more processes, more management, more control. And, hey presto, our 6-person XP projects transformed into beautiful multi-team Enterprise Agile butterflies. Money. That's what I want.
Back To Basics
Agile today is completely dominated by management. It's no longer about software development, or about helping customers achieve real goals. It's just as top-heavy, process-oriented and box-ticky as it ever was in the 1990s. And it's therefore not for me.
Working closely with customers to solve real problems by rapidly iterating working software on small self-organising teams very much is, still. But I fear the word for that has had its meaning so deeply corrupted that I need to start calling it something else.
How about "software development"?
May 30, 2017
20 Dev Metrics - 20. Diversity

The final metric in my series 20 Dev Metrics is Diversity.
First of all, we can have diversity of people: their ages, their genders, their sexual orientations, their ethnic backgrounds, their nationalities, their abilities (and disabilities), their socio-economic backgrounds, their educational backgrounds, and so on.
But we can go beyond this and also consider diversity of ideas. The value of diversity is essentially more choice. A team with 10 different ideas for improving customer retention is in a better position to solve that problem than a team with only one.
Nurturing diversity of people can lead to a greater diversity of ideas, but I believe we shouldn't take that effect for granted. Teams made up of strikingly different people are still quite capable of group-think. Culture is susceptible to homogenisation, because people tend to try to fit in. A more diverse group of people may just take a bit longer to reach that uniformity. Therefore, diversity is not a destination, but a journey; a process that continually renews itself by ingesting new people and new ideas.
For example, on your current product or project, how many different ideas were considered? How many prototypes were tried? You'd be amazed at just how common it is for dev teams to start with a single idea and stick to it to the bitter end.
What processes and strategies does your organisation have for generating or finding new ideas and testing them out? Where do ideas come from? Is it from anyone in the team, or do they all come from the boss? (The dictatorial nature of the traditional hierarchical organisation tends to produce a very narrow range of ideas.)
What processes and strategies does your organisation have for attracting and retaining a diverse range of people? Does it have any at all? (Most don't.)
How outward-looking are the team? Do they engage with a wide range of communities and are they exposed to a wide range of ideas? Or are they inward-looking and insular, mostly seeking solutions in their own backyard?
The first step to improving diversity is measuring it. Does the makeup of the team roughly reflect the makeup of the general population? If not, then maybe we need to take steps to open the team up to a wider range of people. Perhaps we need to advertise jobs in other places? Perhaps we need to look at the team's "brand" when we're hiring to see what kind of message we're sending out? Does "Must be willing to work long hours" put off parents with young children? Does "Regular team paintballing" exclude people with certain disabilities? Does "We work hard, play hard" say to the teetotaller "You probably won't fit in"?
Most vitally, is your organisation the kind that insists on developers arriving fully-formed (and is therefore always drawing from the narrow pool of people who are already software developers)? Or do you offer chances for people who wouldn't normally be in that pool to learn and become developers? Do you offer paid apprenticeships or internships, for example? Are they open to anyone? Are you advertising them outside of the software development community? How would a 55-year-old recently forced to take early retirement find out about your apprenticeship? How would an 18-year-old who can't afford to go to university hear about your internship? These people probably don't read Stack Overflow.
May 22, 2017
20 Dev Metrics - 19. Progress

Some folk have - quite rightly - asked "Why bother with a series on metrics?" Hopefully, I've vindicated myself with a few metrics you haven't seen before. And number 19 in the series of 20 Dev Metrics is something that I have only ever seen used on teams I've led.
When I reveal this metric, you'll roll your eyes and say "Well, duh!" and then go back to your daily routine and forget all about it, just like every other developer always has. Which is ironic, because - out of all the things we could possibly measure - it's indisputably the most important.
The one thing that dev teams don't measure is actual progress towards a customer goal. The Agile manifesto claimed that working software is the primary measure of progress. This is incorrect. The real measure of progress is vaguely alluded to with the word "value". We deliver "value" to customers, and that has somehow become confused with working software.
Agile consultants talk of the "flow of value", when what they really mean is the flow of working software. But let's not confuse buying lottery tickets with winning jackpots. What has value is not the software itself, but what can be achieved using the software. All good software development starts there.
If an app to monitor blood pressure doesn't help patients to lower their blood pressure, then what's the point? If a website that matches singles doesn't help people to find love, then why bother? If a credit scoring algorithm doesn't reduce financial risk, it's pointless.
At the heart of IT's biggest problems lies this failure of almost all development teams to address customers' end goals. We ask the customer "What software would you like us to build?", and that's the wrong question. We effectively make them responsible for designing a solution to their problem, and then - at best - we deliver those features to order. (Although, let's face it, most teams don't even do that.)
At the foundations of Agile Software Development, there's this idea of iterating rapidly towards a goal. Going back as far as the mid-1970s, with the germ of Rapid Development, and the late 1980s, with Tom Gilb's ideas of an evolutionary approach to software design driven by testable goals, the message was always there. But it got lost under a pile of daily stand-ups and burndown charts and weekly show-and-tells.
So, number 19 in my series is simply Progress. Find out what it is your customer is trying to achieve. Figure out some way of regularly testing to what extent you've achieved it. And iterate directly towards each goal. Ditch the backlog, and stop measuring progress by tasks completed or features delivered. It's meaningless.
Unless, of course, you want the value of what you create to be measured by the yard.
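To make that concrete, here's the sort of regularly-run "goal test" I have in mind, using the blood pressure example from earlier. It's only an illustrative sketch, and PatientRecords and its methods are entirely made up, but the shape is the point: the assertion is about the customer's goal, not about features shipped.

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class LowerBloodPressureGoalTest {
        @Test
        public void averageBloodPressureHasFallenSinceLaunch() {
            // PatientRecords is a hypothetical gateway to real outcome data
            PatientRecords records = PatientRecords.load();
            double atLaunch = records.averageSystolicAtLaunch();
            double now = records.averageSystolicNow();
            // Progress = measurable movement towards the customer's goal
            assertTrue("No measurable progress towards the goal", now < atLaunch);
        }
    }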
May 19, 2017
20 Dev Metrics - 18. External Dependencies

The 18th in my series 20 Dev Metrics is External Dependencies.
If our code relies too much on other people's APIs, we can end up wasting a lot of time fixing things that are broken when the contracts change. (Anyone who's written code that consumes the Facebook API will probably know exactly what I mean.)
In an ideal world, APIs would remain backwards-compatible. But in the real world, where 3rd-party developers aren't as disciplined as we are, they change all the time. So our code has to keep changing to continue to work.
I would argue that, with the way our tools have evolved, it's too easy these days to add external dependencies to our software.
It helps to be aware of the burden we're creating as we suck in each new library or web service, lest we fall prey to the error of buying the whole Mercedes just for the cigarette lighter.
The simplest metric is just to count the number of dependencies. The more there are, the more unstable our code will become.
It's also worth knowing how much of our code has direct dependencies on external APIs. Maybe we only depend on JDBC, but if 50% of our code directly references JDBC interfaces, we still have a problem.
You should aim to have as little of your code as possible directly depend on 3rd-party APIs, and to depend on as few different APIs as you can get away with.
(And, yes, I'm including GUI frameworks etc in my definition of "external dependencies")
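If you want a quick feel for the numbers, a rough script will do. Here's a minimal sketch (com.example stands in for your own root package; a real tool would need to be smarter about what counts as "external"):

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class ExternalDependencyScan {
        private static final String OWN_PACKAGE = "com.example"; // placeholder

        public static void main(String[] args) throws IOException {
            try (Stream<Path> files = Files.walk(Paths.get(args[0]))) {
                List<Path> sources = files.filter(p -> p.toString().endsWith(".java"))
                                          .collect(Collectors.toList());
                long exposed = sources.stream()
                                      .filter(ExternalDependencyScan::hasExternalImport)
                                      .count();
                System.out.printf("%d of %d source files directly reference external APIs%n",
                                  exposed, sources.size());
            }
        }

        // Treats any import outside java.*, javax.* and our own package as external
        private static boolean hasExternalImport(Path source) {
            try (Stream<String> lines = Files.lines(source)) {
                return lines.map(String::trim)
                            .filter(line -> line.startsWith("import "))
                            .map(line -> line.substring("import ".length()))
                            .anyMatch(pkg -> !pkg.startsWith("java.")
                                          && !pkg.startsWith("javax.")
                                          && !pkg.startsWith(OWN_PACKAGE));
            } catch (IOException e) {
                return false;
            }
        }
    }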
May 18, 2017
20 Dev Metrics - 17. Test Execution Time

The 17th in my 20 Dev Metrics series can have a profound effect on our ability to sustain the pace of development - Test Execution Time.
When it takes too long to get feedback from tests, we have to test less often, which means more changes to the code in between test runs. The economics of defect removal are stark: the longer a problem goes undetected, the more expensive it is to fix, and that cost grows exponentially. If we break the code and discover it minutes later, then fixing the problem is quick and easy. If we break the code and discover it hours later, that cost goes up. Days later and we're into code-and-fix territory.
So it's in our interest to make the tests run as fast as possible. Teams who strive for a testing pyramid, where the base of the pyramid - the bulk of the tests - is made up of fast-running unit tests can usually get good test feedback in minutes or even seconds. Teams whose testing pyramid is upside-down, with the bulk of their tests being slow-running system or integration tests, tend to find test execution a barrier to progress.
Teams should be putting continual effort into performance engineering their test suites as they grow from dozens to hundreds to thousands of tests. Be aware of how long test execution takes, and when it's too long, optimise the test architecture or execution environment. My 101 TDD Tips e-book contains a tip about optimising test performance that you might find useful.
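If your test framework doesn't already report it, measuring is trivial. A minimal JUnit 4 sketch (AllTests is a hypothetical stand-in for your own suite class):

    import org.junit.runner.JUnitCore;
    import org.junit.runner.Result;

    public class TestExecutionTimer {
        public static void main(String[] args) {
            // Run the whole suite programmatically and report how long it took
            Result result = JUnitCore.runClasses(AllTests.class); // hypothetical suite
            System.out.printf("%d tests ran in %d ms%n",
                              result.getRunCount(), result.getRunTime());
        }
    }

Track that number over time; the trend matters more than any single run.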
Basically, the more often you want to run a test suite, the faster it needs to run. Simples.
May 17, 2017
20 Dev Metrics - 16. Dev Pay Market Percentile

The 16th metric in my series 20 Dev Metrics is a simple but powerful one, which can determine how easy (or hard) it could be for you to hire and retain the developers you need.
As someone who gets asked to help clients find developers, I can attest that Dev Pay Market Percentile is an accurate predictor of how long you'll have to search to find the right person.
Online sources of advertised salaries and contract rates, like itjobswatch.co.uk, can show you how much your competitors are offering for specific skills like Java and TDD, in specific locations and specific industries (e.g., London, banking).
You'd be amazed how many employers scratch their heads wondering why they can't find the ace-whizzo developer they need for their dev team, whilst offering an average (or below average) salary.
Want good developers? Aim for the upper quartile on pay. Want great developers who'll stay? Aim for the upper tenth percentile.
I've lost count of the times the "skill shortage" mysteriously disappeared when the employer upped the offer. And I've also lost count of the times that demand for a skill quietly ramped up, leaving employers wondering why all of their long-serving devs suddenly upped and left.
And, of course, if you are a developer, it's worth knowing where you sit in the pay range. If your bosses are poor payers, they may be relying on you not knowing.
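The arithmetic itself is back-of-envelope stuff. A sketch (all the figures are invented for illustration):

    import java.util.Arrays;

    public class PayPercentile {
        public static void main(String[] args) {
            // Sampled advertised salaries for a comparable role and location
            int[] advertised = {48_000, 52_000, 55_000, 60_000, 65_000, 70_000, 75_000, 85_000};
            int offer = 62_000;
            long below = Arrays.stream(advertised).filter(s -> s < offer).count();
            System.out.printf("An offer of %d sits at roughly the %dth percentile%n",
                              offer, below * 100 / advertised.length);
        }
    }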
May 5, 2017
20 Dev Metrics - 15. Backwards Compatibility

Metric No. 15 in my 20 Dev Metrics series is short and sweet - Backwards Compatibility.
If you've heard of the Liskov Substitution Principle (the "L" in "SOLID"), which states that an instance of any class can be replaced with an instance of any of its subclasses... Well, let me introduce you to the Gorman Substitution Principle:
"A version of any API can be replaced with a later version"
Or, to put it more bluntly: thou shalt not break client shit that was working.
For a published component or service (reusable code with an API), run new releases against the tests for previous releases. How many releases back can you go before tests start to break?
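For example (a sketch; PricingService and its quote method are hypothetical): a test like this, written against release 1.0 and then frozen, gets run against every subsequent release. The first release that makes it fail tells you how far back your compatibility reaches.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class PricingServiceV1ContractTest {
        @Test
        public void standardRateIsAppliedToSmallOrders() {
            // Written against v1.0 and never edited; only the implementation moves
            PricingService pricing = new PricingService();
            assertEquals(19.90, pricing.quote("WIDGET", 2), 0.01);
        }
    }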
This is a particular bugbear of mine; we're just a bit too change-happy with our APIs. So much so that I wonder how many billions of dollars are wasted every year fixing client code that didn't need to be broken.
May 4, 2017
20 Dev Metrics - 14. Interface Specificity

The 14th in my series of 20 Dev Metrics is Interface Specificity, which measures the extent to which interfaces are made to be client or usage-specific. That is to say, the extent to which interfaces only include methods that specific clients need to use.
This helps us to observe the interface segregation principle (the "I" in "SOLID"), and reminds us that interfaces are for collaborating through, and therefore should be designed from the client's perspective.
Imagine we have a class Book, which has methods for getting the ISBN of a publication, and the rating. A class Library uses the ISBN to search for books, and a different class BookStats uses the rating to calculate statistics about the book.
The Library doesn't need to know about a book's rating, and BookStats doesn't need to know its ISBN. Generally speaking, we should seek to limit the knowledge classes have about other classes in the system, so we can limit the chances of them being broken by changes. So instead of binding both Library and BookStats to the same general Book class, we can split Book's interface and expose to each client only the methods it needs to use, as sketched below.
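Here's a sketch of that split. Book, Library and BookStats come from the example above; the interface names are my own invention:

    import java.util.List;

    interface Searchable {
        String getIsbn();
    }

    interface Rateable {
        int getRating();
    }

    class Book implements Searchable, Rateable {
        private final String isbn;
        private final int rating;

        Book(String isbn, int rating) {
            this.isbn = isbn;
            this.rating = rating;
        }

        public String getIsbn() { return isbn; }
        public int getRating() { return rating; }
    }

    class Library {
        // Library sees only Searchable: 1 method exposed, 1 used = 100% specificity
        Searchable findByIsbn(List<Searchable> books, String isbn) {
            return books.stream()
                        .filter(b -> b.getIsbn().equals(isbn))
                        .findFirst()
                        .orElse(null);
        }
    }

    class BookStats {
        // BookStats sees only Rateable, and can't even ask for an ISBN
        double averageRating(List<Rateable> books) {
            return books.stream().mapToInt(Rateable::getRating).average().orElse(0);
        }
    }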
Interface Specificity is calculated thus: divide the number of methods used by a client class by the total number of methods exposed by the supplier type. If the supplier only exposes methods used by that client, then Interface Specificity is 100%. If the supplier has 4 methods, and the client only uses 2, then it's 50%. And so on.
An average of Interface Specificity across the software could serve as an indicator of how we're doing generally on this front. It would rarely reach 100%, but 80% or above would suggest we're probably doing okay.
May 3, 2017
20 Dev Metrics - 13. Swappability of Dependencies

The 13th in my series 20 Dev Metrics is Swappability of Dependencies.
Swappability lies at the core of object oriented and component-based design, and so we should take a keen interest on how easy it would be to replace an object's collaborators without it having to change. For example, we might want to swap a data access object with a stub for testing, or swap a payment processing service when the customer is in a specific country.
Swappability as a general concept is pretty much universal, but differs in its implementation depending on the language. To make a dependency swappable in C++, we must do more than we would need to in, say, Ruby and other dynamically-typed languages.
I'll illustrate with a Java example.
Here we're depending directly on a static method of a class ImdbInfo to get information about a video the customer wants to rent. If we wanted to get that information from a different source (e.g., Amazon), there's no easy way to do it.
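A minimal reconstruction of that design (only Pricer and ImdbInfo.fetchVideo() are named in this post; Video and the pricing logic are invented to make the sketch self-contained):

    class Video {
        private final boolean newRelease;

        Video(boolean newRelease) { this.newRelease = newRelease; }

        boolean isNewRelease() { return newRelease; }
    }

    class ImdbInfo {
        static Video fetchVideo(String videoId) {
            return new Video(true); // would really call out to IMDB
        }
    }

    public class Pricer {
        public double priceOf(String videoId) {
            // Static reference: Pricer is hard-wired to this one source,
            // and nothing can be substituted without editing Pricer itself
            Video video = ImdbInfo.fetchVideo(videoId);
            return video.isNewRelease() ? 3.95 : 1.95;
        }
    }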
In our refactored design, we've made that dependency swappable in 3 steps (sketched in code after the list):
1. We made the static method an instance method, so it can be overridden
2. We passed the instance into the constructor ("dependency injection"), so instantiation happens outside of Pricer, i.e., someone else decides what implementation to use
3. We extracted an interface for ultimate swappability ("dependency inversion"). Pricer can use any service that implements that interface.
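Sketched out, reusing the invented Video class from the previous listing (VideoInfoService is the interface name used later in this post; the rest is assumed):

    interface VideoInfoService {
        Video fetchVideo(String videoId); // step 3: a pure interface
    }

    class ImdbInfo implements VideoInfoService {
        @Override
        public Video fetchVideo(String videoId) { // step 1: an instance method
            return new Video(true); // would really call out to IMDB
        }
    }

    public class Pricer {
        private final VideoInfoService videoInfo;

        // Step 2: the collaborator is injected, so whoever creates Pricer
        // decides which implementation it gets
        public Pricer(VideoInfoService videoInfo) {
            this.videoInfo = videoInfo;
        }

        public double priceOf(String videoId) {
            Video video = videoInfo.fetchVideo(videoId);
            return video.isNewRelease() ? 3.95 : 1.95;
        }
    }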
In dynamically-typed languages, we may not need an interface - technically speaking - but many programmers get into the habit of creating classes with empty methods to represent an interface, mostly because it makes more sense than extending an implementation (e.g., is an AmazonVideoService really a kind of ImdbService?).
In C++, we would absolutely need an interface, as we can only readily override methods declared as virtual. And other languages like Java are somewhere in between.
Measuring swappability in Java would be a matter of analysing references to other objects and determining where those references are instantiated. If they're instantiated inside the client class, then they're not swappable. If they're passed in as a method parameter, they're swappable - but only if all of the methods used are overrideable. Hence, binding to a pure interface gives ultimate swappability. And, of course, if static methods are used, then that's zero swappability.
How would I calculate swappability for a Java class?
I'd calculate swappability for each individual reference, and then divide the total for all of them by the maximum possible swappability.
If a reference is static, then it has 0% swappability.
If a reference isn't dependency-injected, it has 0% swappability.
If a reference is dependency-injected, its swappability will depend on which of its methods are being used:
a. If a method used is abstract, that counts as 100% swappable
b. If a method used has an implementation, but is overrideable, that counts as partially swappable - 50%
c. If a method used cannot be overridden, that counts as 0% swappable.
For each reference, swappability is the average swappability of methods used. For the class as a whole, swappability is the average swappability of references. And at a package or system level, it's the average swappability across all of the classes.
So, when Pricer uses ImdbInfo.fetchVideo(), it has zero swappability because it's a static reference. When Pricer uses a dependency-injected VideoInfoService.fetchVideo(), it has 100% swappability because that method is abstract.
You'll no doubt be delighted to learn that there are no automated tools for calculating this metric at present for any languages. So this is some tooling you would need to rig up yourself. For now, though, I find it a very useful conceptual tool for reasoning about swappability of dependencies.
A cruder approach would be to calculate what proportion of references are to interfaces, and from a tooling perspective this is much simpler, but arguably a bit of a blunt instrument... And very language-specific. For example, a field may be of an interface type, but if it's instantiated inside the constructor of that class, then it's not swappable.