July 4, 2017
Are We Only Pretending To Care About Cost of Change?
Wise folk have occasionally told me - when I've claimed that "I really wanted X" - that, if X was within my control, then I couldn't have wanted it badly enough or I'd have X.
You know, like when someone says "I really wish I knew Spanish"? Obviously, they really don't. Or they'd know Spanish.
Likewise when development teams say "I really wish we understood our end users better". Evidently not. Or we'd understand our end users better.
And, talking about it today with colleagues, there's a nice little list of things development teams are only pretending to care about. If they did, they'd have done something about it.
Take the cost of changing code. Is your team tracking that? Do you know how much it cost to add, change or delete a line of code for your last release? Do you know how the cost of changing is, well, changing?
The vast majority of teams don't keep those kinds of records, even though the information is almost always available to figure it out. Got version control? You can get a graph of code churn. Got project management or accounts? Then you know how much money was spent during those same periods. Just divide the latter by the former, and - bazinga! - cost of changing a line of code.
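As a rough sketch of that arithmetic - the spend figure and date range below are invented for illustration, and real accounting would want to be more careful about what counts as "spend" - churn can be pulled straight out of version control with `git log --numstat`:

```python
import subprocess


def churn_from_numstat(numstat: str) -> int:
    """Sum lines added + deleted from `git log --numstat --pretty=` output."""
    churn = 0
    for line in numstat.splitlines():
        parts = line.split("\t")
        # binary files show "-" in the added/deleted columns; skip them
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            churn += int(parts[0]) + int(parts[1])
    return churn


def cost_per_changed_line(spend: float, since: str, until: str) -> float:
    """Money spent over a period, divided by lines churned in that period."""
    numstat = subprocess.run(
        ["git", "log", "--numstat", "--pretty=",
         f"--since={since}", f"--until={until}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return spend / churn_from_numstat(numstat)


# e.g. cost_per_changed_line(120_000.00, "2017-01-01", "2017-06-30")
```

Tracking that one number release over release is the whole exercise; the interesting signal is its trend, not its absolute value.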
The fact that most of us don't have that number to hand strongly suggests that, despite our loudest protestations, we don't really care about it.
And what's very interesting is that it's no different within the software craftsmanship community. We talk about cost of change a great deal, but I've yet to meet a self-identifying software craftsperson who knows the cost of changing their own code.
This seems, to me, to be like a club for really serious golf enthusiasts in which nobody knows what their handicap is. At the very least, should we not be building a good-sized body of data to back up our claims that code craft really does reduce the cost of change? It's been nearly a decade since the software craftsmanship manifesto. What have we been doing with our time that's more important than verifying its central premise?
May 4, 2016
Scaling Kochō for the Enterprise
Unless you've been living under a rock, you'll no doubt have heard about Kochō. It's the new management technique that's been setting the tech world on fire.
Many books, blogs and Hip Hop ballets have been written about the details of Kochō, so suffice it for me to quickly summarise it here for anyone who needs their memory refreshed.
Kochō is an advanced technique for scheduling and tracking work that utilises hedgehogs and a complex network of PVC tubes. Task cards are attached to the hedgehogs - by the obvious means - and then they're released into the network to search for cheese or whatever it is that hedgehogs eat. The tubes have random holes cut out above people's desks. When a hedgehog falls through one of these holes, the person at that desk removes the task card and begins work. Progress is measured by asking the hedgehogs.
So far, we've mainly seen Kochō used successfully on small teams. But the big question now is: does it scale?
There are many practical barriers to scaling Kochō to the whole enterprise, including:
* Availability of hedgehogs
* Structural weakness of large PVC tube networks
* Infiltration of Kochō networks by badgers
* Shortage of Certified Kochō Tubemasters
In this blog post, I will outline how you can overcome these hurdles and scale Kochō to any size of organisation.
Availability of hedgehogs
As Kochō has become more and more popular, teams have been hit by chronic hedgehog shortages. This is why smart organisations are now setting up their own hedgehog farms. Thankfully, it doesn't take long to produce a fully-grown, Kochō-ready hedgehog. In fact, it can be done in just one hour. We know it's true, because the organiser of the Year Of Hedgehogs said so on TV.
Structural weaknesses of large PVC tube networks
Infiltration of Kochō networks by badgers
Regrettably, some managers have trouble telling a badger from a hedgehog. Well, one mammal is pretty much the same as another, right? Weeding out the badgers on small Kochō teams is straightforward. But as team sizes grow, it becomes harder and harder to pay enough attention to each individual "hedgehog" to easily spot imposters.
Worry not, though. If you make the holes bigger, badgers can work just as well.
Carry on. As you were.
Shortage of Certified Kochō Tubemasters
Many teams employ CKTs to keep an eye on things and ensure the badgers - sorry, "hedgehogs" - are following the process correctly. But, if hedgehogs are in short supply these days, CKTs are like proverbial hen's teeth.
Only a few teams dare try Kochō without a CKT. And they have learned that you don't actually need one... not really.
In fact, Kochō can work perfectly well without CKTs, tube networks, hedgehogs, or Kochō. Indeed, we're discovering that not doing Kochō scales best of all.
August 23, 2013
Software Ideas & Their Tendency Towards Ubiquity
One marked way in which ideas in software development sometimes behave like religious movements is their tendency towards ubiquity.
It all starts innocently enough, with some bright spark saying "hey, you know what's worked for me?" Usually, it's a point solution to a specific problem, like writing the test before we write the code, or scheduling work so that developers pick up the next most important task from the queue as soon as they've completed the last one.
Simple ideas to solve particular problems.
Religions too, can start out with a simple idea like "hey, let's all treat each other the way we'd wish others to treat us" and so on.
But before we know it, the thought leaders of these religious movements are asking questions like "What does God have to say about wearing Nike on a Thursday?" and "What sort of toppings are acceptable on a low-sodium bagel?" and their religion starts to burrow its way into every aspect of our daily lives, dictating everything from beard length to when we can and cannot eat certain kinds of dairy products. Not surprisingly, the original underlying idea can get lost, and we end up with religious zealots who will gleefully nail you to a tree for wearing the wrong kind of underpants during a month with a 'Y' in the name, but who seem to have no hang-ups about nailing people to trees in the first place.
So, too, do some ideas burrow their way into other aspects of the way we write software. There seems to be a built-in predilection for some ideas - usually methodological, but I've seen it happen with tools, too - to grow to become all-encompassing, and for the original underlying idea to get forgotten.
And I can understand the motivations behind this; particularly for consultants. A hammer gets a much larger potential market if we claim it can tell the time, too. We can dramatically extend the scope of our influence by making what we're experts in apply to just about everything.
June 10, 2012
Late Night Thoughts On "It Works For Me"
Before I retire up the Apples & Pears to Bedfordshire, I just wanted to share some thoughts on an ongoing discussion I've been having in That Twitter with Dan North (@tastapod).
Now, I'm aware that I can be overly dismissive of people making claims that aren't supported by evidence, and I feel it's important to go beyond the limitations of 140 characters to try - and probably fail, as usual - to express what I'm really thinking about all this.
To cut a long story short, Dan's been writing and speaking a lot recently about a discovery he's made that involves writing software that is not - GASP! - test-driven. Indeed, there may be no automated tests at all. And he's finding that in the context he and his colleagues are working in, not TDD-ing is sometimes better and faster at delivering value, and, presumably, at sustaining that pace of innovation for business advantage.
Dan, if you're not aware, comes very highly recommended by programmers who also come very highly recommended. If programmer kudos was PageRank, and recommendations were web links, Dan's home page would be bbc.co.uk. So, at a personal level, I'm inclined to just shrug and say "fair enough, what he said".
But I've been at this game a while (and I've even won the odd round), and my two decades programming for shiny objects and sexual favours has taught me that our industry is rife with claims.
Some are out-and-out lies. The people making them know full well it's not true, and what they're saying is designed purely to appeal to the people who are holding the purse strings - a highly suggestible bunch at the best of times.
I think it's very doubtful that Dan doesn't believe what he's telling us, from what I've heard of him. But some very genuine people, with all the best intentions, also make claims that turn out not to be true. Software is a very complicated business, and mirages are not uncommon.
I know how prone I am to succumbing to that feeling of "productivity" I get when I cut corners. It's very seductive.
My Mum used to drive a Citroen 2CV (mint green with stripes on the roof - it looked like a boiled sweet on wheels), and I remember the sheer thrill of us coasting down hills, feeling like the car could take off at any moment. We must have been doing all of 45 miles per hour.
I've discovered, from my own experiments into quality-centred practices that, when I actually look back at what's been achieved objectively, what felt fast while I was doing it can turn out to be slower in real terms.
So, my issue is this: it's not that I think Dan's misleading us, or that he's necessarily misleading himself, either. What he's discovered may well be real, and may even be reproducible.
But, right now, he's that lone parent who didn't vaccinate their children and found that their children got better. Or that they think their children got better, and that it had something to do with not vaccinating them. Maybe they did, and maybe it was.
However, before I start advising teams to not bother with the vaccinations - vaccinations whose efficacy is supported by a growing body of evidence in a wide range of situations (everything from embedded software in vending machines to labyrinthine distributed "enterprise" systems via the BBC iPlayer) - I need to see a similar body of evidence to persuade me that in some situations, skipping the jabs will be better for them.
I'd also like to understand why. I'm fairly convinced now of the causal mechanisms that link defect prevention to higher productivity, having seen so many wide studies published by the SEI, IBM, NASA and other august bodies. Taking steps to prevent issues saves more time later than it costs now. Simples.
I'm also aware of the limits of defect prevention on saving us time and money, and why those limits exist (e.g., in safety-critical software).
The same goes for the relationship between productivity and our ability to retest our software and systems quickly, frequently and cheaply. I'm not aware of any other way than by automating our tests, and I'm especially aware of the economic value of automated unit tests (or some automated equivalent - e.g., model checkers), having spent very little time in a debugger personally since about 2002.
It's not inconceivable that somewhere in the spectrum of quality vs. cost vs. test automation etc etc, there is an oasis that Dan's discovered of which we're all currently unaware. But if there is, then it's a tropical island in the middle of the Arctic ocean. It runs contrary to the picture that surrounds it - a picture that's still being corroborated as more and more data comes in, and for which no credible data currently exists to contradict it.
Right now, Dan's telling us he's been to this undiscovered island, and is describing it to us in vivid detail - thrilling tales of strange and exotic animals, weird and wonderful plant life and azure-blue waters lapping at golden sands. But he's yet to give us the photos, videos, or any samples of unique flora and fauna that might convince me that he wasn't actually in Fiji (that's the danger of flying without instruments). Most important of all, he needs to give us the grid reference so we can all go and find this island for ourselves.
He tells me he's in the process of doing this now, so we can try his approach on our own projects and see what we think. This is very encouraging.
My hope is that we'll finally see this mysterious Lost World for ourselves and know that he was right.
Either that or we'll confirm that he is indeed lost in Fiji.
April 19, 2012
Enough With The Movements! Movements Are Stupid.
I've been around the block a few times as a software developer, and as such I've witnessed several movements in the industry come and go.
Each movement (object technology, patterns, component-based, model-driven, Agile, service-oriented, Lean, craftsmanship etc etc) attempts to address a genuine problem, usually. And at the core of every movement, there's a little kernel of almost universal truth that remains true long after the movement that built upon it fell out of favour with the software chattering classes.
The problem I perceive is that this kernel of useful insight tends to become enshrouded in a shitload of meaningless gobbledygook, old wives tales and sales-speak, so that the majority of people jumping on to the bandwagon as the movement gains momentum often miss the underlying point completely (often referred to as "cargo cults").
Along with this kernel of useful insights there also tends to be a small kernel of software developers who actually get it. Object technology is not about Smalltalk. Patterns are not about frameworks. Components are not about COM or CORBA. Model-driven is not about Rational Rose. SOA is not about web services. Agile is not about Scrums. Responsibility-driven Design is not about mock objects. Craftsmanship is not about masters and apprentices or guilds or taking oaths.
In my experience, movements are a hugely inefficient medium for communicating useful insights. They are noisy and lossy.
My question is, do we need movements? When I flick through my textbooks from my physics degree course, they don't read as a series of cultural movements within the physics community. What is true is true. If we keep testing it and it keeps working, then the insights hold.
What is the problem in switching from a model of successive waves of movements, leaving a long trail of people who still don't get it, and possibly never will, to a model that focuses on testable, tested, proven insights into software development?
I feel for the kid who comes into this industry today - or on any other day. I went through the exact same thing before I started reading voraciously to find out what had come before. They may be deluged with wave after wave of meaningless noise, and every year, as more books get published about the latest, greatest shiny thing, it must get harder and harder to pick out the underlying signal from all the branding, posturing and reinvention of the wheel.
You see, it's like this. Two decades of practice and reading has inexorably led me to the understanding that very little of what I've learned that's genuinely important wasn't known about and written about before I was even born. And, just as it is with physics, once you peel away the layers of all these different kinds of particle, you discover underlying patterns that can be explained surprisingly succinctly.
For those who say "oh, well, software development's much more complicated than that", I call "bullshit". We've made it much more complicated than it needs to be. It's a lot like physics or chess (both set-theoretic constructs where simple rules can give rise to high complexity, just like code): sure, it's hard, but that's not the same as complicated. The end result of what we do as programmers can be massively complicated. But the underlying principles and disciplines are simple. Simple and hard.
We do not master complexity by playing up to it. By making what we do complicated. We master complexity by keeping it simple and mastering how software comes about at the most fundamental level.
Logic is simple, but algorithms can be complex. A Turing Machine is simple, but a multi-core processor is complex. Programming languages are simple, but a program can be highly complex. Programming principles are simple, but can give rise to highly complex endeavours.
Complexity theory teaches us that to shape complex systems, we must focus on the simple underlying rules that give rise to them. At its heart, software development has a surprisingly small core of fundamental principles that are easy to understand and hard to master, of many of which the average programmer is blissfully unaware.
True evolution and progress in software development, as far as I can see, will require us to drop the brands, dump the fads and the fashions, and focus on what we know - as proven from several decades of experience and several trillion lines of code.
January 15, 2011
Enough With The Software Holy Wars!
Religions are funny things.
It turns out the Christians, Jews and Muslims are all worshipping the same god. Where they disagree is on the detail of what that god has for breakfast and what his policy on beards and public holidays is.
We can point and laugh, of course. But we can be just as bad.
You see, it also turns out that "software craftsmen", "software engineers" and "Agile Software Developers" all worship the same god, too. We just disagree on the finer details of precisely how our god expects us to achieve the exact same results we all seem to agree we should be striving for.
There is no disagreement that our mutual god's primary commandment is that Thou Shalt Not Write Software Thy Customer Didn't Want.
Nor do we disagree that we will need to iterate to converge on the most useful, usable software.
We're also in agreement that testing should happen as early as possible and as often as possible if we're to avoid wasting the bulk of our time fixing bugs that slipped through the net.
Indeed, in every important respect, we agree on everything. (Well, anyone whose opinion matters agrees, anyway.)
Where we disagree is on whether we should call them "use cases" or "user stories", and on whether we should write our tests first or write them after the code, or whether we should put aside time to deliberately practice these skills or whether we should join an accredited professional body and get certified on them. And so on.
The underlying beliefs, the foundations of what we do and why we do it, have remained unchanged for decades. The Old Testament of software development is a shared religious text among anyone who does it well.
In case you need reminding, here are the Ten Commandments Of Software Development:
- Thou Shalt Not Write Software Thy Customer Didn't Want
- Thou Shalt Iterate Thy Solutions, Indefinitely If Necessary
- Thou Shalt Test Early & Often
- Thou Shalt Manage Versions and Configurations Of Thy Software, Even When Working Alone
- Thou Shalt Not Jump Straight Into Writing Code If Thou Hast Not Put A Bit Of Thought Into The Design
- Thou Shalt Not Write Code That Is Hard To Change
- Thou Shalt Not Integrate Or Release Untested Code
- Thou Shalt Not Create User Interfaces That Are Hard To Use
- Thou Shalt Treat Functional & Non-Functional Requirements Equally
- Thou Shalt Automate Oft-Repeated Tasks & Share Oft-Repeated Code
Whether we call ourselves "craftsmen", or "engineers", or "artisans", or simply "software developers" or even "computer programmers", we all have to hark back, by necessity, to the basic foundations of writing software professionally.
Each of our commandments implies a discipline, with its own skillset, its own practices, its own standards and its own body of knowledge. We may disagree on the detail of exactly how to follow each commandment, but fundamentally, underneath it all, we're all worshipping the same god.
So, enough with the Holy Wars! Let's get on and get better at delivering working software that will satisfy, maybe even delight, our customers.
November 22, 2010
Nothing Killed Waterfall. It Evolved Into Half-Arsed Agile.
Robert Martin has posted a very interesting article warning of the dangers of elitism in Agile; in particular the elitism of Scrum Masters as project managers or team leaders, which was never what was intended for the role of a "coach" or "process champion" in those early days of Agile.
Quite rightly, Uncle Bob warns that the elitism that killed Waterfall could kill Agile. Organisations have replaced architects and analysts with Scrum Masters, and another revolt from the people who actually write the software and test the software is brewing.
I'm not convinced that anything killed Waterfall, though. Like the dinosaurs, there could be another explanation for their extinction. Agile isn't the asteroid that wiped out the Big Process and Big Architecture elite. It's the disruptive influence that forced them to adapt in order to retain their authority under the nuclear winter of an increasingly "agile" and "self-organising" world.
The reality many of us talk about is that teams that are genuinely self-organising and that genuinely respond to feedback are few and far between. As Brian Marick put it, only the word "Agile" may have crossed the chasm.
They say that many a true word is said in jest. I'm sure many of us have first-hand experience of the kind of nonsense described in the Half-Arsed Manifesto. This is real enough, sadly. Too many teams are wearing Agile clothes, but underneath they are something very different, and quite familiar.
Most "Agile" teams I come into contact with are recognisably the direct descendants of Waterfall. They retain the vestigial limbs and organs of command-and-control and of plan-driven software development and Big Design Up-Front. Just as birds are really dinosaurs with feathers, many Agile teams are just Waterfall teams with stand-ups and Scrum Masters.
I've seen this first-hand. A project manager fiercely defended her Waterfall process, so the developers defied her and did XP anyway. And they succeeded. The bosses' ears pricked up and suddenly Agile was something they wanted to see more of. Seeing the writing on the wall, she went and got certified in Scrum. And now she fiercely defends Agile, or at least, her version of it. She still demands schedule commitments. She still demands the complete UI design and thick requirements document before coding begins. She still demands that developers do what she tells them to do. It's the same old song, but she's singing it in the key of Scrum. We all saw a way to deliver more value; she saw a way to retain her control. Such leopards will never change their spots.
Deep inside the rational, modern Agile brain pulses the irrational, neanderthal limbic system of Waterfall. And while we may be consciously aware of a logical and progressive thought process that drives our projects, it's quite possible that most teams only become aware of that after their fearful and superstitious Waterfall subconscious has already made the decisions for them.
The elite of 90's Big Process Software Engineering never went away. Many of them got certified and are still up to their old tricks. Most of us are still staying at Hotel Waterfall, under the same old management, getting the same poor service from the same haughty and unhelpful staff. Check under the chic, minimalist new wallpaper and you'll find that familiar old over-elaborate floral print.
February 15, 2010
Wheel-driven Reinvention
One aspect of software development which is at once both amusing and troubling is the ability of us young whippersnappers to completely ignore what's gone before and reinvent established wisdom in our own image - often stealing the credit.
Take testing as an example. What do we know about testing software today that we didn't know, say, thirty years ago? Sure, we have new tools and testing has to fit within new approaches to the process of writing software as a whole, but fundamentally what have we discovered in the last decade or so?
Testing behaviour still works, by necessity, much as it has always worked by necessity. We must put the system under test in some desired initial state, then we must provide some stimulus to the system to trigger the behaviour we wish to test, then we must make observations about the final state of the system or about any behaviours that should have been invoked (e.g., a remote procedure call or a database request) in response to the combination of our stimulus and the initial conditions. And this process must be repeatable and predictable, like any good scientific test.
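That three-step shape - set up the initial state, apply the stimulus, observe the outcome - is often labelled arrange, act, assert, and it looks the same whatever the decade. A minimal sketch (the BankAccount class here is invented purely for illustration):

```python
class BankAccount:
    """A toy system under test, invented for illustration only."""

    def __init__(self, balance: int = 0):
        self.balance = balance

    def withdraw(self, amount: int) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


def test_withdrawal_reduces_balance():
    account = BankAccount(balance=100)   # arrange: desired initial state
    account.withdraw(30)                 # act: the stimulus
    assert account.balance == 70         # assert: observe the final state


test_withdrawal_reduces_balance()
```

Swap the balance check for a check on a recorded collaborator call and you have the "observe any behaviours that should have been invoked" variant; the shape of the test doesn't change.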
Though the culture of testing software may have evolved, much of it for the better, and the technology may have improved (though that is questionable), and though there are undoubtedly more people testing their systems today, when it comes to the business of writing and executing tests, there's really nothing new under the sun.
The same is true of many aspects of contemporary software development. Like it or not, iterative and incremental development is older than C. We just weren't doing it back then, in the main.
Indeed, pick any "new" aspect of development and trace it back to its roots, and we discover that most novelties are actually much older than many of us thought. Objects are an invention from the sixties. Use cases hail from the seventies. Responsibility-driven design was being practiced before Frankie told us to Relax. UML existed in a fragmentary form before the Berlin Wall came down. People were writing code to satisfy tests back when those tests were stored on magnetic tape. Indeed, some of the descriptions of programming that was done for the very first computers ring bells with those of us who practice that black art today.
Younger developers like me, though, seem to readily believe that our time around is the first time around and feel no compunction to educate ourselves about the achievements of "old-timers", preferring instead to invent things anew - with sexier names and shinier tools, admittedly.
Our desire to reinvent goes as far as redefining words that already have a well-established definition. "Agile" no longer means "nimble", "quick" or "spry". Today it apparently means "communication, feedback, simplicity and courage". Or "iterative and incremental". Or "evolutionary". Or "Scrum-Certified". I caught someone the other day proffering their definition of "testable", which apparently now requires us to go through "public interfaces". This is bad news for many scientists, who must now rewrite their peer-reviewed papers to incorporate the appropriate programming language with which to express the "testability" of their theories.
If software development were physics, we might expect newcomers to work through and understand the current body of knowledge before they start adding to it. That way, at the very least, we could avoid a great deal of duplication of effort. We may also avoid the tendency of our industry to throw "old-timers" on the scrapheap just because, even though they are probably just as current in their practical ability to deliver working software, they're not "down with the kids" on all the latest street slang for concepts that have been kicking around the block for decades.
The thinking of our elders and betters is far from irrelevant and outmoded. We can still learn a thing or two from the likes of Jacobson, Knuth and Hoare, should we choose to reject fashion in favour of substance in the approach we take to our work.
March 22, 2009
Physicist Tests Journal of Cultural Studies - Finds Relativism Gone Mad!
Alan D. Sokal, a physicist at NYU, submitted a wordy article to a leading journal of cultural studies packed full of outrageous and totally unsupported claims suggesting that physical reality is a social construct and even that science should be led by a political agenda. And it got published. Without any corrections.
As Sokal puts it:
"What concerns me is the proliferation, not just of nonsense and sloppy thinking per se, but of a particular kind of nonsense and sloppy thinking: one that denies the existence of objective realities, or (when challenged) admits their existence but downplays their practical relevance. At its best, a journal like Social Text raises important questions that no scientist should ignore -- questions, for example, about how corporate and government funding influence scientific work. Unfortunately, epistemic relativism does little to further the discussion of these matters.
In short, my concern over the spread of subjectivist thinking is both intellectual and political. Intellectually, the problem with such doctrines is that they are false (when not simply meaningless). There IS a real world; its properties are not merely social constructions; facts and evidence DO matter. What sane person would contend otherwise? And yet, much contemporary academic theorizing consists precisely of attempts to blur these obvious truths -- the utter absurdity of it all being concealed through obscure and pretentious language."
One can't help wondering if many of our leading professional publications would be just as easily bamboozled by relativist nonsense. I have read articles that, at the time, I thought must surely be spoofs.
Software development abounds with outrageous and totally unsupported claims and the "epistemic relativism" Sokal accuses Social Text of championing. Most specifically, the pernicious and ultimately disastrous notion that your way is as good as my way is as good as their ways (so everybody gets to be right - hoorah for everybody!) and that, in software projects, there is no objective reality and all that matters is perspective and discourse. In less fancy language, we get to sit around yapping all day and everybody's opinion is equally valid - and evidence doesn't matter.
Indeed, as critics of metrics are fond of pointing out, reliance on evidence leads to oppression. I would argue that misinterpretation and/or misapplication of - often poor quality - evidence can and does lead to bad things in all kinds of organisational life. But I don't infer from this that seeking evidence is therefore always wrong (far from it), or that the reality of software and software development itself is somehow beyond empirical understanding in certain key respects.
This is currently a very unfashionable view, and one for which I'm often chided by my peers. But I increasingly believe it to be important, and see a dark future for our profession if we continue down the slippery slope of woolly thinking that we're on.
March 18, 2009
The Movement Formerly Known As "Software Craftsmanship"
As with all intellectual movements, it seems the mere act of choosing a name is enough to stoke up heated debate.
It's actually not possible to use words currently in existence because they all have some kind of semantic baggage, and someone, somewhere is going to take offence somehow.
One possible solution is to take a leaf out of pop singer Prince's book and adopt an abstract symbol that's currently not in use.
The Movement Formerly Known As "Software Craftsmanship"
But then again, if you've ever taken the ink blot experiment, you'll know that even abstract shapes can get people hot under the collar.
On a related note, while some argue over the name, others continue to argue over what it is we actually believe in. Already there's a manifesto that's generating fun and games on discussion groups and blogs around and about the countryside. Meanwhile, a few folk seem to be venturing out into the Cursed Earth to preach the good word of Software Craftsmanship - from what bible I do not know, because we're still writing the outline.
Certainly, there seem to be different schools of thought emerging. Some, for example, see apprenticeship as a key requirement. I don't. Don't get me wrong, I think apprenticeships are a potentially great way to learn and improve. No argument there. But are there other ways? And are the people who tread that other path really "software craftsmen"? (Sorry, are they really ...) Yes, I rather think they probably are.
Which all brings me to the conclusion that we may be overthinking things. Again. Because we're like that.