September 16, 2012
Are Woolly Definitions Of "Success" At The Heart Of Software Development's Thrall To Untested Ideas?
In the ongoing debate about what works and what doesn't in software development, we need to be especially careful to define what we mean by "it worked".
In my Back To Basics paper, I made the point that teams need to have a clear, shared and testable understanding of what is to be achieved.
Without this, we're a ship on a course to who-knows-where, and I've observed all manner of ills stemming from this.
Firstly, when we don't know where we're supposed to be headed, steering becomes a fruitless exercise.
It also becomes nigh-on impossible to gauge progress in any meaningful way. It's like trying to score an archery contest with an invisible target.
To add to our worries, teams that lack clear goals have a tendency to eat themselves from the inside. We programmers will happily invent our own goals and pursue our own agendas in the absence of a clear vision of what we're all meant to be aiming for.
This can lead to excess internal conflict as team members vie to stamp their own vision on a product or project. Hence an HR system can turn into a project to implement an "Enterprise Service Bus" or to "adopt Agile".
Since nobody can articulate what the real goals are, any goal becomes justifiable, and success becomes much easier to claim. I've met a lot of teams who rated their product or project as a "big success", much to the bemusement of the end users, project sponsors and other stakeholders, who took a very different view.
There are times when we can display all the misplaced confidence and self-delusion of an X Factor contestant who genuinely seems to have no idea that they're singing out of tune and dancing like their Dad at a wedding.
Much of the wisdom we find on software development comes from people, and teams, who are basing their insights on a self-endowed sense of success. "We did X and we succeeded, therefore it is good to do X", sort of thing.
Here's my beef with that: first off, it's bad science.
It's bad science for three reasons: one, a single data point doesn't make a trend; two, perhaps you've incorrectly attributed your success to X rather than to one of the myriad other factors in software development; and three, can we really be sure that you genuinely succeeded at all?
If I claim that rubbing frogspawn into your eyes cures blindness, we can test that by rubbing frogspawn into the eyes of blind people and then measuring the acuity of their eyesight afterwards.
If, on the other hand, I claim that rubbing frogspawn into your eyes is "a good thing to do", and that after I rubbed frogspawn into my eyes, I got "better" - well, how can we test that? What is "better"? Maybe I rubbed frogspawn into my eyes and my vocabulary improved.
My sense is that a worrying proportion of what we read and hear about "things that are good to do" in software development is based on little more than "how good (or how right) it felt" to do them. Who knows; maybe rubbing fresh frogspawn in your eyes feels great. But that has little bearing on its efficacy as a treatment.
Without clear goals, it's not easy to objectively determine if what we're doing is working, and this - I suspect - is the underlying reason why so much of what we know, or we think we know, about software development is so darned subjective.
Teams who've claimed to me that they're "winning" (perhaps because of all the tiger blood) have turned out to be so wide of the mark that, in reality, the exact opposite was true. These days, when I hear proclamations of great success, it's usually a precursor to the whole project getting canned.
The irony is that the few teams who knew exactly what they were aiming for often measured themselves more brutally against their goals, and were more pessimistic, despite in real terms doing more "winning" than the teams prematurely taking their victory laps.
This, I suspect, has also contributed to the dominance of subjective ideas in software development. Ideas backed up by objective successes seem to be expressed more tentatively and with more caveats than ideas backed up by little more than feelgood and tiger blood, which are expressed more confidently and in more absolute terms.
The naked ape in all of us seems to respond more favourably to people who present their ideas with confidence and a greater sense of authority. In reality, many of these ideas have never really been put to the test.
Once an idea's gained traction, there can be benefits within the software development community to being its originator or a perceived expert in it. Quickly, vested interests build up and the prospect of having their ideas thoroughly tested and potentially debunked becomes very unattractive. The more popular the idea, and the deeper the vested interests, the more resistance to testing it. We do not question whether a burning bush really could talk when we're in the middle of a fundraising drive for the church roof...
It's saddening to see, then, that in the typical lifecycle of an idea, publicising it often precedes testing it. More fool us, though. We probably need to be much more skeptical and demanding of hard evidence to back these ideas up.
Will that happen? I'd like to think it could, but the pessimist in me wonders if we'll always opt for the shiny-and-new and leave our skeptical hats at home when sexy new ideas - with sexy new acronyms - come along.
But a good start would be to make the edges of our definition of "success" crisper and less forgiving.
April 19, 2012
Enough With The Movements! Movements Are Stupid.
I've been around the block a few times as a software developer, and as such I've witnessed several movements in the industry come and go.
Each movement (object technology, patterns, component-based development, model-driven development, Agile, service orientation, Lean, craftsmanship, etc, etc) usually attempts to address a genuine problem. And at the core of every movement there's a little kernel of almost universal truth that remains true long after the movement built upon it has fallen out of favour with the software chattering classes.
The problem I perceive is that this kernel of useful insight tends to become enshrouded in a shitload of meaningless gobbledygook, old wives' tales and sales-speak, so that the majority of people jumping onto the bandwagon as the movement gains momentum miss the underlying point completely (a phenomenon often referred to as "cargo cults").
Along with this kernel of useful insight there also tends to be a small kernel of software developers who actually get it. Object technology is not about Smalltalk. Patterns are not about frameworks. Components are not about COM or CORBA. Model-driven is not about Rational Rose. SOA is not about web services. Agile is not about Scrums. Responsibility-driven design is not about mock objects. Craftsmanship is not about masters and apprentices or guilds or taking oaths.
In my experience, movements are a hugely inefficient medium for communicating useful insights. They are noisy and lossy.
My question is, do we need movements? When I flick through my textbooks from my physics degree course, they don't read as a series of cultural movements within the physics community. What is true is true. If we keep testing it and it keeps working, then the insights hold.
What would it take to switch from a model of successive waves of movements, each leaving a long trail of people who still don't get it (and possibly never will), to a model that focuses on testable, tested, proven insights into software development?
I feel for the kid who comes into this industry today - or on any other day. I went through the exact same thing before I started reading voraciously to find out what had come before. They may be deluged with wave after wave of meaningless noise, and every year, as more books get published about the latest, greatest shiny thing, it must get harder and harder to pick out the underlying signal from all the branding, posturing and reinvention of the wheel.
You see, it's like this. Two decades of practice and reading have inexorably led me to the understanding that very little of what I've learned that's genuinely important wasn't known about, and written about, before I was even born. And, just as it is with physics, once you peel away the layers of all these different kinds of particle, you discover underlying patterns that can be explained surprisingly succinctly.
For those who say "oh, well, software development's much more complicated than that", I call "bullshit". We've made it much more complicated than it needs to be. It's a lot like physics or chess (both set-theoretic constructs where simple rules can give rise to high complexity, just like code): sure, it's hard, but that's not the same as complicated. The end result of what we do as programmers can be massively complicated. But the underlying principles and disciplines are simple. Simple and hard.
We do not master complexity by playing up to it. By making what we do complicated. We master complexity by keeping it simple and mastering how software comes about at the most fundamental level.
Logic is simple, but algorithms can be complex. A Turing machine is simple, but a multi-core processor is complex. Programming languages are simple, but a program can be highly complex. Programming principles are simple, but can give rise to highly complex endeavours.
Complexity theory teaches us that to shape complex systems, we must focus on the simple underlying rules that give rise to them. At its heart, software development has a surprisingly small core of fundamental principles that are easy to understand and hard to master, many of which your average programmer is blissfully unaware.
True evolution and progress in software development, as far as I can see, will require us to drop the brands, dump the fads and the fashions, and focus on what we know - as proven from several decades of experience and several trillion lines of code.
February 29, 2012
The Maturity Model Maturity Model (MMMM)
There are various maturity models in this business of wares that we call "soft". Maturity models fulfil a vital role in providing reassurance to managers who have no intention of actually improving anything ever, as well as providing incomes for consultants who might otherwise starve or be forced to work in the sex industry or something or other.
But there's a glaring hole in the maturity model market, namely that there's no maturity model for maturity models.
Fear not, though, as tonight I'm going to fill that glaring hole with my own Maturity Model Maturity Model (MMMM).
MMMM has 5 levels of maturity:
Level 1 - Ad hoc: You have invented a method or process that teams are pretending to adopt, but have yet to provide any guidance on how effectively they are pretending to adopt it.
Level 2 - Certifying: You offer training and certification in your method or process that informs organisations that people are properly qualified to pretend to adopt it.
Level 3 - Certifier Certifying: You offer training and certification to people that lets organisations know that they are qualified to train and certify other people in the effective pretence of adopting your method or process.
Level 4 - Upselling: You offer certification of whole organisations as well as individual people in the effective pretence of adopting your method or process. You have a checklist of things organisations must appear to be doing (evidenced by them having a document somewhere that says that they do it) that creates the convincing impression that they have actually adopted your method or process.
Level 5 - Reproducing: You offer certification to organisations that tells other organisations that they are qualified in the certification of organisations in the effective pretence of adopting your method or process, as well as certifying organisations in the certification of certifying organisations.
Level 6 - Expanding: You offer certification in things you know nothing about that are tenuously connected with the adoption of your method or process (e.g., the Correct Use Of Office Furniture Maturity Model), and continue to add arbitrary levels of maturity up to and including "Level Infinity - Transcendent Beings Of Pure Energy & Thought". Level 6 of MMMM is, of course, an illustration of Level 6 of MMMM.
June 27, 2011
Continuous Delivery is a Platform for Excellence, Not Excellence Itself
In case anyone was wondering, I tend to experience a sort of "hierarchy of needs" in software development. When I meet teams, I usually find out where they are on this ladder and ask them to climb up to the next rung.
It goes a little like this:
0. Are you using a version control system for your code? No? Okay, things really are bad. Sort this out first. You'd be surprised how much relies on that later. Without the ability to go back to previous versions of your code, everything you do will carry a much higher risk. This is your seatbelt.
1. Do you produce working software on a regular basis (e.g., weekly) that you can get customer feedback on? No? Okay, start here. Do small releases and short iterations.
2. How closely do you collaborate with the customer and the end users? If the answer is "infrequently", "not at all", or "oh, we pay a BA to do that", then I urge them to get regular direct collaboration with the customer - this means programmers talking to customers. Anything else is a fudge.
3. Do you agree acceptance tests with the customer so you know if you've delivered what they wanted? No? Okay, then you should start doing this. "Customer collaboration" can be massively more effective when we make things explicit. Teams need a testable definition of "done": it makes things much more focused and predictable and can save an enormous amount of time. Writing working code is a great way to figure out what the customer really needed, but it's a very expensive way to find out what they wanted.
4. Do you automate your tests? No? Well, the effect of test automation can be profound. I've watched teams go round and round in circles trying to stabilise their code for a release, wasting hundreds of thousands of pounds. The problem with manual testing (or little or no testing at all) is that you get very long feedback cycles between a programmer making a mistake and that mistake being discovered. It becomes very easy to break the code without finding out until weeks or even months later, and the cost of fixing those problems escalates dramatically the later they're discovered. Start automating your acceptance tests at the very least. The extra effort will more than pay for itself - I've never seen an instance when it didn't.
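To make that concrete, here's roughly the kind of thing I mean - a minimal sketch in Java with JUnit, where the Account class and the transfer scenario are hypothetical stand-ins for whatever behaviour you've agreed with your customer:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// A sketch of an automated acceptance test: an executable version of a
// "done" criterion agreed with the customer for a hypothetical feature.
public class TransferAcceptanceTest {

    @Test
    public void transferMovesMoneyBetweenAccounts() {
        // Given two accounts with known opening balances...
        Account payer = new Account(100.00);
        Account payee = new Account(50.00);

        // ...when we transfer an agreed amount...
        payer.transfer(25.00, payee);

        // ...then both balances reflect the outcome the customer specified.
        assertEquals(75.00, payer.balance(), 0.001);
        assertEquals(75.00, payee.balance(), 0.001);
    }
}

// Minimal production code so the example is self-contained.
class Account {
    private double balance;

    Account(double openingBalance) { this.balance = openingBalance; }

    double balance() { return balance; }

    void transfer(double amount, Account to) {
        balance -= amount;
        to.balance += amount;
    }
}
```

Run as part of the build, a suite of tests like this shrinks the feedback cycle from weeks to minutes.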
5. Do your programmers integrate their code frequently, and is there any kind of automated process for building and deploying the software? No? Software development has a sort of metabolism. Automated builds and continuous integration are like high-fibre diets. You'd be surprised how many symptoms of dysfunctional software development miraculously vanish when programmers start checking in every hour or three. It will also be the foundation for that Holy Grail of software development, which we'll come to later.
6. Do your programmers write the tests first, and do they only write code to pass failing tests? No? Okay, this is where it gets more serious. Adopting Test-driven Design is a non-trivial undertaking, but the benefits are becoming well understood. Teams that do TDD tend to produce much more reliable code. They tend to deliver more predictably, and, in many cases, a bit sooner and with less hassle. They also often produce code that's a bit simpler and cleaner. Most importantly, the feedback we get from developer tests (unit tests) is often the most useful of all. When an acceptance test fails, we have to debug an entire call stack to figure out what went wrong and pinpoint the bug. Well-written unit tests can significantly narrow it down. We also get feedback far sooner from small unit tests than we do from big end-to-end tests, because we write far less code to pass each test. Getting this feedback sooner has a big effect on our ability to safely change our code, and is a cornerstone in sustaining the pace of development long enough for us to learn valuable lessons from it.
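For anyone who hasn't seen the rhythm up close, here's a minimal sketch of a single red-green cycle in Java with JUnit (the FizzBuzz example is purely illustrative):

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Red: first we write small failing tests that specify the behaviour
// we want, before any production code exists.
public class FizzBuzzTest {

    @Test
    public void multiplesOfThreeSayFizz() {
        assertEquals("Fizz", new FizzBuzz().say(3));
    }

    @Test
    public void otherNumbersSayThemselves() {
        assertEquals("4", new FizzBuzz().say(4));
    }
}

// Green: then we write only enough code to make the failing tests pass.
// No speculative extras - the next test drives the next bit of code.
class FizzBuzz {
    String say(int number) {
        if (number % 3 == 0) return "Fizz";
        return String.valueOf(number);
    }
}
```

The discipline is in the sequencing: no production code until there's a failing test that demands it.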
Now, before we continue, notice that I called it "Test-driven Design", and not "Test-driven Development". Test-driven Development is defined as "Test-driven Design + Refactoring", which brings us neatly on to...
7. Do you refactor your code to keep it clean? The thing about Agile that too many teams overlook is that being responsive to change is in no small way dependent on our ability to change the code. As code grows and evolves, there's a tendency for what we call "code smells" to creep in. A "code smell" is a design flaw in the code that indicates the onset of entropy - growing disorder in the code. Examples of code smells include things like long and complex methods, big classes or classes that do too many things, classes that depend too much on other classes, and so on. All these things have a tendency to make the code harder to change. By aggressively eliminating code smells, we can keep our code simple and malleable enough to allow us to keep on delivering those valuable changes.
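By way of illustration, here's one of the simplest smells and its cure - Extract Method - sketched in Java (the Invoice class is hypothetical):

```java
// Before: one method doing three jobs - calculating VAT, calculating
// the total, and formatting the output. Harmless now; harder to change later.
class InvoiceBefore {
    String print(double net) {
        double vat = net * 0.2;
        double gross = net + vat;
        return "Net: " + net + ", VAT: " + vat + ", Total: " + gross;
    }
}

// After: each piece of logic extracted into a method with a name of its
// own, so it can be understood, changed and tested independently.
class Invoice {
    String print(double net) {
        return format(net, vat(net), gross(net));
    }

    private double vat(double net) {
        return net * 0.2;
    }

    private double gross(double net) {
        return net + vat(net);
    }

    private String format(double net, double vat, double gross) {
        return "Net: " + net + ", VAT: " + vat + ", Total: " + gross;
    }
}
```

Each step is tiny and behaviour-preserving, which is why the automated test suites of rungs 3-6 have to come first.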
8. Do you collect hard data to help objectively measure how well you're doing 1-7? If you come to me and ask me to help you diet (though God knows why you would), the first thing I'm going to do is recommend you buy a set of bathroom scales and a tape measure. Too many teams rely on highly subjective personal feelings and instincts when assessing how well they do stuff. Conversely, some teams - a much smaller number - rely too heavily on metrics and reject their own experience and judgement when the numbers disagree with their perceptions. Strike a balance here: don't rely entirely on voodoo, but don't treat statistics as gospel either. Use the data to inform your judgement. At best, it will help you ask the right questions, which is a good start towards 9.
9. Do you look at how you're doing - in particular at the quality of the end product - and ask yourselves "how could we do this better?" And do you actually follow up on those ideas for improving? Yes, yes, I know. Most Agile coaches would probably introduce retrospectives at stage 0 in their hierarchy of needs. I find, though, that until we have climbed a few rungs up that ladder, the discussion is moot. Teams may well need them for clearing the air and for personal validation and ego-massaging and having a good old moan, but I've seen far too many teams abuse retrospectives by slagging everything off left, right and centre and then doing absolutely nothing about it afterwards. I find retrospectives far more productive when they're introduced to teams who are actually not doing too badly, thanks very much. And I always temper 9 with 8 - too many retrospectives are guided by healing crystals and necromancy, and not enough benefit from the revealing light of empiricism. Joe may well think that Jim's code is crap, but a dig around with NDepend may reveal a different picture. You'd be amazed how many truly awful programmers genuinely believe it's everybody else's code that sucks.
10. Can your customer deploy the latest working version of the software at the click of a mouse whenever they choose to, and as often as they choose to? You see, when the code is always working, and when what's in source control is never more than maybe an hour or two away from what's on the programmers' desktops, and when making changes to the code is relatively straightforward, and when rolling back to previous versions - any previous version - is a safe and simple process, then deployment becomes a business decision. They're not waiting for you to debug it enough for it to be usable. They're not waiting for small changes that should have taken hours but for some reason seem to take weeks or months. They can ask for feature X in the morning, and if the team says X is ready at 5pm then they can be sure that it is indeed ready and, if they choose to, they can release feature X to the end users straight away. This is the Holy Grail - continuous, sustained delivery. Short cycle times with little or no latency. The ability to learn your way to the most valuable solutions, one lesson at a time. The ability to keep on learning and keep on evolving the solution indefinitely. To get to this rung on my ladder, you cannot skip 1-9. There's little point in even trying continuous delivery if you're not 99.99% confident that the software works, that it will be easy to change, and that it can be deployed and rolled back if necessary at the touch of a button.
Now at this point you're probably wondering what happened to user experience, scalability, security, or what about safety-critical systems, or what about blah blah blah etc etc. I do not deny that these things can be very important. But I've learned from experience that these are things that come after 1-10 in my hierarchy of needs for programmers. That's not to say they can't be more important to customers and end users - indeed, user experience is often number 1 on their list. But to achieve a great user experience, software that works and that can evolve is essential, since it's user feedback that will help us find the optimal user experience.
To put it another way, on my list, 10 is actually still at the bottom of the ladder. Continuous delivery and ongoing optimisation of our working practices is a platform for true excellence, not excellence itself. 10 is where your journey starts. Everything before that is just packing and booking your flights.
July 14, 2010
Codemanship's Code Smell Of The Week - Data Classes
A key goal of OO design is to minimise dependencies between classes by packaging data and behaviour as close together as we can. In practice, a good rule of thumb for class design is to put fields and the methods that use those fields in the same classes. Data classes are classes which have only fields and no behaviour (besides simple getters and setters), and they break this rule of thumb, creating serious dependency issues in your code.
Here, Jason Gorman demonstrates how to refactor data classes by moving the methods (or parts of methods) that use the fields of data classes into the classes that contain those fields.
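The gist of the refactoring, sketched in Java (the demo uses its own example code - the classes below are hypothetical):

```java
// Before: OrderData is a data class - fields and getters, no behaviour -
// so the logic that uses its data ends up living somewhere else entirely.
class OrderData {
    private double unitPrice;
    private int quantity;

    OrderData(double unitPrice, int quantity) {
        this.unitPrice = unitPrice;
        this.quantity = quantity;
    }

    double getUnitPrice() { return unitPrice; }
    int getQuantity() { return quantity; }
}

class PricingService {
    // This method is only interested in OrderData's fields - a classic
    // case of "feature envy" towards the data class.
    double totalOf(OrderData order) {
        return order.getUnitPrice() * order.getQuantity();
    }
}

// After: Move Method puts the behaviour next to the data it uses. The
// getters - and every class that depended on them - can go away.
class Order {
    private double unitPrice;
    private int quantity;

    Order(double unitPrice, int quantity) {
        this.unitPrice = unitPrice;
        this.quantity = quantity;
    }

    double total() { return unitPrice * quantity; }
}
```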
Download the source code from http://bit.ly/czsOHP
For training and coaching in refactoring, TDD and OO design, visit http://www.codemanship.com
April 24, 2010
Software Is Both Art & Science. Can We Move On Now?
Bonjour, mes enfants.
SEMAT has gotten me thinking. Any mention of "engineering" and "science" in software development seems to polarise opinion.
Undoubtedly, there's a large section of the software development community who believe those words simply do not apply. Software development is not a science. It's an art. Or a craft. Like basket weaving. It's not engineering. No sir!
And there's an equally large section of the community who believe the exact opposite. Software development is a science. It is engineering. We can apply scientific principles to shape predictable, well-engineered end products.
Of course, they're both wrong.
Anyone who dismisses the notion of any kind of scientific basis for software development is running away from reality. Everything that exists has a scientific basis. Even American Idol. You just have to understand it. That we don't fully understand the science of software does not mean that no such science exists, or that we never will. I just can't help being reminded of UFOlogists who claim that "science cannot be applied to the study of UFOs". What they mean, of course, is "I've got a good thing going here selling my unscientific ideas to these schmucks, and I'd like to keep it that way, thanks".
And anyone who believes that software development can be completely tamed by science is equally deluded. There are sciences - emerging in recent decades - that teach us that there are many things in the world that, while we may come to fully understand them, we will never be able to control. Chaos is science. And software development is mostly chaos. Any vision of "software engineering" where pulling lever A causes X and pulling lever B causes Y with certainty is the product of naivety at the macro, project scale. Life just isn't like that. Clockwork might be, but life isn't. Software development is intractably complex and unpredictable.
In the real world, biology has come up with processes that deal effectively - but not predictably - with intractably complex problems. Evolution is one such process. Evolution solves complex, multi-dimensional problems by iterating through generations of potential solutions. That is science. We can understand it. But we cannot even begin to predict what solution a process of evolution will reach, or how long it will take to reach it.
The clockwork, Newtonian paradigm of "software process engineering" is fundamentally flawed. And anyone who believes that it's possible to attain "value" deterministically is deeply mistaken. "If we pull lever A, we'll ship an extra 10,000 units". Give me a break!
Flawed, too, is any notion that this means that there's no engineering at all to be done in creating good software. I doubt anyone would claim that making rock music is "engineering" - well, anyone sane, at least. But there is science that can be applied within this process.
It's possible to predict mathematically what effect the choice of software compressor, and the compression settings used, will have on the amplitude of a recording across a certain frequency range. Indeed, it's helpful in getting the best-sounding mix. There is such a thing as "audio engineering" within the music production process. Granted, it's chiefly a creative process. But there is useful science we can apply within it to help tweak the results closer to perfection.
Similarly, while software design is chiefly a creative process, it can be useful to know if the code we're writing "smells" in any significant way. Some code smells can be detected just by looking at the code and using our judgement. Others are more subtle and harder to spot, but just as damaging to maintainability. Code analysis tools, as they grow more sophisticated, can complement our "eye for good design" every bit as much as audio engineering tools complement our "ear for a good mix" and music composition aids can complement our "ear for a good tune".
The trick, I believe, is to find the balance and work within the limitations of science and engineering and creative disciplines. Trying to figure out the formula for "valuable software" is every bit as futile as looking for a formula for making "hit records". And relying entirely on your eyes and ears to refine an end product has severe limitations. For millennia, we've used tools and theory to tweak and refine all manner of creative end products. We even have a word for it: "machine-tooled". The fact that you can fit a computer more powerful than all the computers in the world of 30 years ago combined into your breast pocket, and that it looks good, is testament to this symbiosis of science and art.
We can continue to refine and extend our scientific understanding of code and coding through research and exploration, just like any science. And new and useful tools will emerge from that understanding that will make it possible for us to produce better quality code, for sure.
And we can also continue to develop and refine the art of software development. Through reflection, practice and sharing.
There is such a thing as "software engineering", but it has limited scope, just like "audio engineering". Specifically, it's limited to things we can predict and can control, like what happens to coupling and cohesion if we move a class from one package to another.
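For a taste of the kind of prediction that is within scope, here's a toy Java sketch of Robert C. Martin's instability metric - the sort of number that dependency analysis tools in the XDepend/NDepend family compute for real codebases (the dependency counts below are invented for illustration):

```java
public class PackageMetrics {

    // Robert C. Martin's instability metric for a package:
    //   I = Ce / (Ca + Ce)
    // where Ce counts outgoing dependencies (efferent coupling)
    // and Ca counts incoming dependencies (afferent coupling).
    static double instability(int afferent, int efferent) {
        return (double) efferent / (afferent + efferent);
    }

    public static void main(String[] args) {
        // A package with 3 incoming and 1 outgoing dependency...
        System.out.println(instability(3, 1)); // 0.25 - fairly stable

        // ...after we move in a class that brings 2 more outgoing
        // dependencies with it.
        System.out.println(instability(3, 3)); // 0.5 - measurably less stable
    }
}
```

No luck or judgement required at this level: move the class, and the coupling numbers change in a way we can calculate before we've done it.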
The bigger picture of "delivering value" is a complex human endeavour, and creativity, judgement and more than a sprinkling of luck are all we have that we can bring to bear in any meaningful way at this level. We may be capable of understanding, with the benefit of hindsight, why Feature X was used more than Feature Y when the software was released. But then, with hindsight, we understand quite a lot about volcanoes and hurricanes, too. These are things that can only really be understood with hindsight. We don't see them coming until they're almost upon us, and we have two choices - stay and risk everything, or get out of the way and live to fight another day.
In years to come, I'll probably notice more and more of a difference between "hand-rolled" software and software that has been written with some help from "software engineering" tools - the more grown-up descendants of tools like XDepend and Jester.
But I sincerely doubt I will ever be able to tell at the start of a project whether the resulting software will enjoy success or not. Sure, I'll be able to look back on projects and say "hey, y'know what we got wrong?" But far in advance, the outcome is every bit as unknowable as a hurricane, volcanic eruption or hit record. So where things like requirements and processes and "enterprise architecture" are concerned, I'll stick with arts and crafts.
February 15, 2010
Wheel-driven Reinvention
One aspect of software development which is at once both amusing and troubling is the ability of us young whippersnappers to completely ignore what's gone before and reinvent established wisdom in our own image - often stealing the credit.
Take testing as an example. What do we know about testing software today that we didn't know, say, thirty years ago? Sure, we have new tools and testing has to fit within new approaches to the process of writing software as a whole, but fundamentally what have we discovered in the last decade or so?
Testing behaviour still works, by necessity, much as it has always worked by necessity. We must put the system under test in some desired initial state, then we must provide some stimulus to the system to trigger the behaviour we wish to test, then we must make observations about the final state of the system or about any behaviours that should have been invoked (e.g., a remote procedure call or a database request) in response to the combination of our stimulus and the initial conditions. And this process must be repeatable and predictable, like any good scientific test.
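In code, that recipe is the familiar "arrange, act, assert" shape of a test. Here's a minimal JUnit sketch (the Kettle class is invented for illustration):

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class KettleTest {

    @Test
    public void switchingOnAFilledKettleBoilsTheWater() {
        // 1. Put the system under test in the desired initial state.
        Kettle kettle = new Kettle();
        kettle.fill();

        // 2. Provide the stimulus that triggers the behaviour under test.
        kettle.switchOn();

        // 3. Observe the final state (or the interactions) that result.
        assertTrue(kettle.isBoiling());
    }
}

// Minimal system under test, so the example is self-contained.
class Kettle {
    private boolean filled;
    private boolean boiling;

    void fill() { filled = true; }

    void switchOn() {
        if (filled) {
            boiling = true;
        }
    }

    boolean isBoiling() { return boiling; }
}
```

It's the same repeatable recipe whether the tests lived on magnetic tape or in the shiniest of today's frameworks.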
Though the culture of testing software may have evolved, much of it for the better, and the technology may have improved (though that is questionable), and though there are undoubtedly more people testing their systems today, when it comes to the business of writing and executing tests, there's really nothing new under the sun.
The same is true of many aspects of contemporary software development. Like it or not, iterative and incremental development is older than C. We just weren't doing it back then, in the main.
Indeed, pick any "new" aspect of development and trace it back to its roots, and we discover that most novelties are actually much older than many of us thought. Objects are an invention of the sixties. Use cases hail from the seventies. Responsibility-driven design was being practised before Frankie told us to Relax. UML existed in fragmentary form before the Berlin Wall came down. People were writing code to satisfy tests back when those tests were stored on magnetic tape. Indeed, some of the descriptions of programming done for the very first computers ring bells with those of us who practise that black art today.
Younger developers like me, though, seem to readily believe that our time around is the first time around, and feel no obligation to educate ourselves about the achievements of the "old-timers", preferring instead to invent things anew - with sexier names and shinier tools, admittedly.
Our desire to reinvent goes as far as redefining words that already have well-established definitions. "Agile" no longer means "nimble", "quick" or "spry". Today it apparently means "communication, feedback, simplicity and courage". Or "iterative and incremental". Or "evolutionary". Or "Scrum-certified". I caught someone the other day proffering their definition of "testable", which apparently now requires us to go through "public interfaces". This is bad news for many scientists, who must now rewrite their peer-reviewed papers to incorporate the appropriate programming language with which to express the "testability" of their theories.
If software development were physics, we might expect newcomers to work through and understand the current body of knowledge before they start adding to it. That way, at the very least, we could avoid a great deal of duplication of effort. We might also avoid our industry's tendency to throw "old-timers" on the scrapheap just because - even though they're probably every bit as current in their practical ability to deliver working software - they're not "down with the kids" on all the latest street slang for concepts that have been kicking around the block for decades.
The thinking of our elders and betters is far from irrelevant and outmoded. We can still learn a thing or two from the likes of Jacobson, Knuth and Hoare, should we choose to reject fashion in favour of substance in the approach we take to our work.
December 15, 2009
Value Is Not The Opposite Of Waste: Why I Don't Buy Into Process Improvement
If you've been a regular visitor to my blog since 2005, first of all, thank you. You may also know that I'm not the biggest fan of mechanistic or pseudo-scientific approaches to making teams better at creating software.
Which is why I don't buy into process improvement any more. At all. In all its guises. TQM, BPR, Six Sigma, Lean. All old wives' tales and highfalutin mumbo-jumbo, in my honest opinion.
Yes, there are the success stories that devotees and evangelists routinely point to. Usually in Asia. Mostly Toyota.
But anyone can find the exception that proves the rule. I can point to 80-a-day smokers who lived to be 100. In a wider sample population, there doesn't seem to be compelling evidence that process improvement makes a positive difference in the long term.
Take Lean's core notion, for example: that "value" is something that "flows" through a development process. Commoditising "value" in this way, suggesting it can "flow" (and no doubt be diced, sliced, weighed and stored for future use, like charcoal briquettes), seems very alien to me. "Value", to my mind, is a very complex and vague concept. There's fiscal value, of course. Profit. But businesses have been learning the hard way that such a one-dimensional view of what matters can lead to a very one-dimensional approach to management. Companies that are only interested in profits tend to pursue them at the expense of other kinds of "value", like satisfied customers, content employees, safe neighbourhoods, clean air, and so on.
Businesses have been learning, albeit very slowly and clumsily, that long-term success comes from chasing a richer, multi-dimensional and balanced set of outcomes. They are also learning that, like all multi-bodied problems, these outcomes interact and affect each other in complex ways. Who knew that improving the quality of your products could actually reduce costs, for example?
To suggest that this complex web of interconnected "things" can somehow "flow", to me, sounds as bizarre as proposing that "happiness" is rectangular for easy stacking.
The reality is so much richer and nuanced and unpredictable, of course. It is not like sorting out blockages in your plumbing, or hot-rodding an engine. A does not necessarily follow B.
The most damning indictment of process improvement is the widely-accepted fact that good developers tend to produce better software. All the tweaking and Six Sigma-ing and Lean-ing in the world won't make a piss-poor team appreciably better at delivering "value", any more than it could make an orchestra any better at playing Mozart.
Did it occur to anyone that the folks at Toyota just got better at making cars?
And it's not without its unpleasant side-effects, either. Many methods focus on reducing "waste". Lean goes the whole hog and suggests that if we reduce waste, we improve the "flow of value". This has a very definite "profit vs. loss" feel to it. It is distinctly one-dimensional.
Complex systems need a fair dollop of waste. Waste might come in the form of unsuccessful prototypes, or a range of choices, or a level of redundancy. Supposedly we only use 10% of our brains. Most of our DNA is "junk". Many species fail to flourish, ending up as food for the ones who do. Is anyone suggesting that removing 90% of my brain would make me smarter? Or that removing my junk DNA would make my children healthier and stronger? Or that only successful organisms should be allowed to be born?
My point, laboured as it is, is that in many cases "waste" turns out to be there for a very good reason. In creative and innovative pursuits, which are inherently novel and therefore unpredictable, who can say what will never be needed?
The adaptive capacity of a complex system necessitates some waste. Some choice. Some diversity. Some redundancy. Some slack.
In this respect, the likes of Six Sigma are effectively anti-Agile, if we interpret "Agile" to mean "responsive to change". If that's your goal, then focus on the people in your teams, and make plenty of room for their innate learning and adaptive abilities to work their magic.
You may now start throwing the furniture around.
November 15, 2009
Skepticism Is What Software Development Needs Now
I've blogged in the past about the rather alarming amount of hearsay and pseudoscience in software development.
Today I tried a little social experiment. On a social network. Which I thought was fitting.
I asked on Twitter whether typing speed was a significant factor in programmer productivity. As far as I'm aware, there's no data on this, so the answer of a scientist would have to be "we don't know". You may have your theories about whether it would help with flow if you typed faster, or whether writing more code in less time would mean you make more mistakes and waste even more time fixing them. But none of these theories has been put to the test.
Bottom line is, we just don't know.
I got quite a mix of responses, many of which offered similar theories. But few said those magic words: "I don't know".
As it happens, there's a whole bunch of stuff we just don't know in software development. Most of the accepted wisdom can be traced back to proclamations made with little or no evidence to back them up.
Refactoring makes change easier? Does it? I mean, really, does it? Where's the evidence to support that?
Sure, there's plenty of anecdotal evidence. And I'm every bit as guilty of saying with an air of certainty that refactoring makes change easier and other spurious proclamations. But there's plenty of anecdotal evidence for alien abductions, faith healing and the Loch Ness monster.
Now, I'm pretty sure refactoring does make it easier to change software, if you do enough of it and do it effectively. But I have not a shred of convincing evidence for that. I just believe it. Very strongly. Probably as strongly as Tony Blair believes bombing civilians in Iraq was the right decision. He is very, very wrong. How do I know I'm not wrong about refactoring? I don't. I take it on faith. Which isn't good enough, really, I'm afraid.
If we're to distinguish ourselves from the loonies sitting out in the Nevada desert wearing tin-foil hats, waiting for aliens to show up and give us world peace, we need to start adopting a more practical kind of skepticism, and start subjecting some of these old wives' tales to a more scientific kind of scrutiny.
April 8, 2009
SPA2009, SC2010, Carpool & General Nonsense
Just a quick post today with a few honourable mentions.
Firstly, SPA2009; with all sorts of stuff going on at the moment, I only managed to make it to one afternoon yesterday, which is a shame because it looked like it was going swimmingly. SPA's a much more sophisticated affair than SC2009 was (though cheap and cheerful has its place, I should add), and hats off to the organisers for putting it all together. I'm due to be in town later this afternoon, so maybe I'll try and hook up with a few folk after the conference for a beer.
My own session at SPA was fun, but poorly attended, I'm afraid. I enjoyed it, anyway. And that's all that matters, I s'pose :-)
You can view a Flash demo of the practical elements of the session by clicking here. Alas, what you will have missed out on is the experienced insights of people like Alan Wills, Dave Cleal and Mark Dalgarno at the session itself. But I'm sure you'll get the gist of it.
On the subject of Flash demos, I just checked my stats on Libsyn and it looks like the TDD in C# demo has now been viewed 3,000 times. Which is nice. Remind me to do a Java version soon.
Finally, a quick mention about next year's Software Craftsmanship conference.
In exactly what form remains to be seen, but I'm committed. And if I'm committed, that means it's happening. Even if it's happening in my flat. I can't tell you too much at this early stage, because there are lots of decisions that need to be made, alliances that need to be brokered and sexual favours that need to be administered to make a conference possible. What I can tell you is this:
1. I'm banning talks, presentations or any other kind of session that doesn't involve real live coding. The feedback has been very clear about this; the best bit was getting a bunch of smart, talented and skilled developers around some code and talking turkey with real, concrete examples. I was impressed by how powerful that simple act can be when I used to take teams into a room with a laptop and a projector for a week or two and we'd take turns at the keyboard implementing solutions to real requirements. It's a great way to bring everybody to the same page quickly and with a lot less blah blah blah. And I feel very strongly now that this is the way forward for SC2010 and hopefully beyond. Let the code do the talking!
2. We're going to sort out network access, source control and other technical stuff this time around, so hopefully the first twenty minutes of your session won't be taken up with folk handing memory sticks around. We'll also keep copies of all the code that gets created, which will be donated to medical science for their secret occult experiments. Or something like that.
And while we're on the subject of software craftsmanship, in answer to one Tweeter's question - no, not everybody who promotes craftsmanship necessarily thinks they're a "master craftsman". Far from it, in fact. Don't be bullied by this sort of anti-craftsmanship rhetoric. If you go out there and push the craftsmanship message, you are doing a good thing. It doesn't make you arrogant or elitist. It does mean that you care, and that you at least aspire to high standards. Any negativity or animosity some people might voice about this nascent movement, I suspect, says more about the naysayers than it does about the size of your ego.
My ego, of course, is massive and I do - as the same Tweeter kindly pointed out in a private message to me - have a very high opinion of myself. And why not? I still hold the self-endowed title of "World's Greatest Software Developer" and nobody has challenged me for it yet. I am also a brilliant rock guitar player, Kung Fu fighter and I'm fantastic in bed. But that has nothing to do with my support of software craftsmanship. I'm just a big-headed gobshite who just happens to care about software development. I'm perfectly comfortable with that. Hey, it works for me :-)
Anyhoo, I'm off to watch this week's Carpool (highly recommended, by the way) before I launch into some actual work. Toodle-pip!