April 24, 2010
Software Is Both Art & Science. Can We Move On Now?
Bonjour, mes enfants.
SEMAT has gotten me thinking. Any mention of "engineering" and "science" in software development seems to polarise opinion.
Undoubtedly, there's a large section of the software development community who believe those words simply do not apply. Software development is not a science. It's an art. Or a craft. Like basket weaving. It's not engineering. No sir!
And there's an equally large section of the community who believe the exact opposite. Software development is a science. It is engineering. We can apply scientific principles to shape predictable, well-engineered end products.
Of course, they're both wrong.
Anyone who dismisses the notion of any kind of scientific basis for software development is running away from reality. Everything that exists has a scientific basis. Even American Idol. You just have to understand it. That we don't fully understand the science of software does not mean that no such science exists or that we'll never understand. I just can't help being reminded of UFOlogists who claim that "science cannot be applied to the study of UFOs". What they mean, of course, is "I've got a good thing going here selling my unscientific ideas to these schmucks, and I'd like to keep it that way, thanks".
And anyone who believes that software development can be completely tamed by science is equally deluded. There are sciences - emerging in recent decades - that teach us that there are many things in the world that, while we may come to understand them fully, we will never be able to control. Chaos is science. And software development is mostly chaos. Any vision of "software engineering" where pulling lever A causes X and pulling lever B causes Y with certainty is the product of naivety at the macro, project scale. Life just isn't like that. Clockwork might be, but life isn't. Software development is intractably complex and unpredictable.
In the real world, biology has come up with processes that deal effectively - but not predictably - with intractably complex problems. Evolution is one such process. Evolution solves complex, multi-dimensional problems by iterating through generations of potential solutions. That is science. We can understand it. But we cannot even begin to predict what solution a process of evolution will reach, or how long it will take to reach it.
The clockwork, Newtonian paradigm of "software process engineering" is fundamentally flawed. And anyone who believes that it's possible to attain "value" deterministically is deeply mistaken. "If we pull lever A, we'll ship an extra 10,000 units". Give me a break!
Flawed, too, is any notion that this means that there's no engineering at all to be done in creating good software. I doubt anyone would claim that making rock music is "engineering" - well, anyone sane, at least. But there is science that can be applied within this process.
It's possible to predict mathematically what effect the choice of software compressor and the compression settings used will have on the amplitude of a recording across a certain frequency range. Indeed, it is helpful in getting the best-sounding mix. There is such a thing as "audio engineering" within the music production process. Granted, it's chiefly a creative process. But there is useful science we can apply within it to help tweak the results closer to perfection.
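To illustrate the kind of prediction involved - a minimal sketch with made-up threshold and ratio values, not a model of any particular compressor:

```python
def compressed_level_db(input_db, threshold_db=-20.0, ratio=4.0):
    """Predict the output level (in dB) of an idealised hard-knee
    compressor: above the threshold, every `ratio` dB of input
    produces only 1 dB of output increase."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# A -8 dB peak through a 4:1 compressor with a -20 dB threshold
# comes out at -20 + (12 / 4) = -17 dB.
print(compressed_level_db(-8.0))
```

This is exactly the kind of small, deterministic calculation that is tractable - even though "a great mix" as a whole is not.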
Similarly, while software design is chiefly a creative process, it can be useful to know if the code we're writing "smells" in any significant way. Some code smells can be detected just by looking at the code and using our judgement. Others are more subtle and harder to spot, but just as damaging to maintainability. Code analysis tools, as they grow more sophisticated, can complement our "eye for good design" every bit as much as audio engineering tools complement our "ear for a good mix" and music composition aids can complement our "ear for a good tune".
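As a toy illustration of what such a tool does - a hypothetical sketch using an arbitrary statement budget, not the metric any real analysis tool computes:

```python
import ast

def long_function_smells(source, max_statements=15):
    """Flag Python functions whose bodies exceed a crude statement
    budget -- a stand-in for the richer metrics real tools compute."""
    smells = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Count every statement nested inside the function,
            # excluding the function definition node itself.
            count = sum(isinstance(n, ast.stmt) for n in ast.walk(node)) - 1
            if count > max_statements:
                smells.append((node.name, count))
    return smells
```

Run over a whole codebase, even a blunt rule like this surfaces candidates for a human eye - and human judgement - to assess.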
The trick, I believe, is to find the balance, and to work within the limitations of science and engineering and of the creative disciplines. Trying to figure out the formula for "valuable software" is every bit as futile as looking for a formula for making "hit records". And relying entirely on your eyes and ears to refine an end product has severe limitations. For millennia, we've used tools and theory to tweak and refine all manner of creative end products. We even have a word for it: "machine-tooled". The fact that you can fit a computer more powerful than all of the computers in the world combined 30 years ago in your breast pocket, and it looks good, is testament to this symbiosis of science and art.
We can continue to refine and extend our scientific understanding of code and coding through research and exploration, just like any science. And new and useful tools will emerge from that understanding that will make it possible for us to produce better quality code, for sure.
And we can also continue to develop and refine the art of software development. Through reflection, practice and sharing.
There is such a thing as "software engineering", but it has limited scope, just like "audio engineering". Specifically, it's limited to things we can predict and can control, like what happens to coupling and cohesion if we move a class from one package to another.
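For example - with entirely hypothetical class and package names - counting cross-package dependencies before and after moving a class is completely deterministic:

```python
# Hypothetical dependency graph: class -> classes it depends on.
deps = {
    "Invoice":  ["Money", "Customer"],
    "Customer": ["Money"],
    "Money":    [],
}
packages = {"Invoice": "billing", "Customer": "crm", "Money": "billing"}

def cross_package_edges(packages):
    """Count dependencies that cross a package boundary --
    a crude measure of coupling."""
    return sum(1 for src, targets in deps.items()
                 for dst in targets
                 if packages[src] != packages[dst])

before = cross_package_edges(packages)
packages["Money"] = "core"   # move the Money class to a new package
after = cross_package_edges(packages)
print(before, after)         # coupling rises from 2 cross-package edges to 3
```

Move the class and the coupling changes in an exactly predictable way - no luck, no judgement required. That's the legitimate scope of "engineering" here.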
The bigger picture of "delivering value" is a complex human endeavour, and creativity, judgement and more than a sprinkling of luck are all we can bring to bear in any meaningful way at this level. We may be capable of understanding, with the benefit of hindsight, why Feature X was used more than Feature Y when the software was released. But then, with hindsight, we understand quite a lot about volcanoes and hurricanes, too. These are things that can only really be understood with hindsight. We don't see them coming until they're almost upon us, and we have two choices - stay and risk everything, or get out of the way and live to fight another day.
In years to come, I'll probably notice more and more a difference between "hand-rolled" software and software that has been written with some help from "software engineering" tools - the more grown-up descendants of tools like XDepend and Jester.
But I sincerely doubt I will ever be able to tell at the start of a project whether the resulting software will enjoy success or not. Sure, I'll be able to look back on projects and say "hey, y'know what we got wrong?" But far in advance, the outcome is every bit as unknowable as a hurricane, volcanic eruption or hit record. So where things like requirements and processes and "enterprise architecture" are concerned, I'll stick with arts and crafts.
February 15, 2010
Wheel-driven Reinvention
One aspect of software development which is at once both amusing and troubling is the ability of us young whippersnappers to completely ignore what's gone before and reinvent established wisdom in our own image - often stealing the credit.
Take testing as an example. What do we know about testing software today that we didn't know, say, thirty years ago? Sure, we have new tools and testing has to fit within new approaches to the process of writing software as a whole, but fundamentally what have we discovered in the last decade or so?
Testing behaviour still works, by necessity, much as it has always worked by necessity. We must put the system under test in some desired initial state, then we must provide some stimulus to the system to trigger the behaviour we wish to test, then we must make observations about the final state of the system or about any behaviours that should have been invoked (e.g., a remote procedure call or a database request) in response to the combination of our stimulus and the initial conditions. And this process must be repeatable and predictable, like any good scientific test.
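Sketched as code - the Account class and its names are invented purely for illustration - those three steps look like this:

```python
import unittest

class Account:
    """A minimal, made-up system under test."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class WithdrawTest(unittest.TestCase):
    def test_withdraw_reduces_balance(self):
        account = Account(balance=100)           # 1. desired initial state
        account.withdraw(30)                     # 2. stimulus
        self.assertEqual(account.balance, 70)    # 3. observe the outcome
```

Given the same initial state and the same stimulus, the observation never varies - which is precisely what makes the test repeatable and predictable.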
Though the culture of testing software may have evolved, much of it for the better, and the technology may have improved (though that is questionable), and though there are undoubtedly more people testing their systems today, when it comes to the business of writing and executing tests, there's really nothing new under the sun.
The same is true of many aspects of contemporary software development. Like it or nay, iterative and incremental development is older than C. We just weren't doing it back then, in the main.
Indeed, pick any "new" aspect of development and trace it back to its roots, and we discover that most novelties are actually much older than many of us thought. Objects are an invention from the sixties. Use cases hail from the seventies. Responsibility-driven design was being practiced before Frankie told us to Relax. UML existed in a fragmentary form before the Berlin Wall came down. People were writing code to satisfy tests back when those tests were stored on magnetic tape. Indeed, some of the descriptions of programming done for the very first computers ring bells with those of us who practice that black art today.
Younger developers like me, though, seem to readily believe that our time around is the first time around and feel no compunction to educate ourselves about the achievements of "old-timers", preferring instead to invent things anew - with sexier names and shinier tools, admittedly.
Our desire to reinvent goes as far as redefining words that already have a well-established definition. "Agile" no longer means "nimble", "quick" or "spry". Today it apparently means "communication, feedback, simplicity and courage". Or "iterative and incremental". Or "evolutionary". Or "Scrum-Certified". I caught someone the other day proffering their definition of "testable", which apparently now requires us to go through "public interfaces". This is bad news for many scientists, who must now rewrite their peer-reviewed papers to incorporate the appropriate programming language with which to express the "testability" of their theories.
If software development were physics, we might expect newcomers to work through and understand the current body of knowledge before they start adding to it. That way, at the very least, we could avoid a great deal of duplication of effort. We might also avoid the tendency of our industry to throw "old-timers" on the scrapheap just because - even though they are probably just as current in their practical ability to deliver working software - they're not "down with the kids" on all the latest street slang for concepts that have been kicking around the block for decades.
The thinking of our elders and betters is far from irrelevant and outmoded. We can still learn a thing or two from the likes of Jacobson, Knuth and Hoare, should we choose to reject fashion in favour of substance in the approach we take to our work.
November 10, 2009
Scrum or Kanban? Pick One And Get On With Delivering Quality Code!
I'm getting increasingly vexed by this unhealthy obsession with planning and project management, especially among the Agile community.
The likes of Scrum, Kanban and other variations of the put-stuff-into-some-kind-of-prioritised-work-queue-and-pick-new-work-from-the-top theme have become an obsession to the point that one could be forgiven for thinking that this is what software projects are all about.
They are not optional, of course. You need the work queue. It needs to be effectively prioritised. You need to track progress as objectively as possible. It needs to be highly visible and transparent. And you need the customer to drive all of this.
But these are no-brainers. There's an inescapable logic behind them, and they should take mere minutes to learn to a practical level where they can be successfully applied.
Writing reliable and maintainable code, on the other hand, takes years to master. And I see increasing numbers of teams who are so caught up in the whole planning and project management aspect of their work that they lose focus on bettering themselves as programmers. Indeed, many of them fall so in love that they cease to be programmers and instead travel the land as disciples of their chosen methodology, spreading the good word to hapless other teams, who in turn become infected with the Scrum/Kanban meme.
That these practices are so very easy to learn is what makes them so virulent. And, if done right, they do help. They help a lot. There's no questioning that.
But if you are churning out crappy unclean code, they don't. Agile relies on code being easier to change. If it is complicated, riddled with duplication and unmanaged dependencies, lacking regression test assurance and basically cobbled together under the relentless pressure of a Scrum or Kanban drumbeat (Kanban's beat sounding a bit more like free-form jazz, obviously), teams will inevitably hit the barrier of increasing software "viscosity" and all their brilliant planning and tracking will just reveal for all to see how quickly productivity is slowing down.
You cannot deliver a continuous stream of anything if your bad habits keep clogging up the pipes.
So clean code is a prerequisite of Agile project management. Teams must focus 90%+ of their effort on delivering higher quality code, and not waste their time obsessing about whether they should estimate using Fibonacci numbers in their Planning Poker sessions or what colour index cards they should use for reporting bugs.
I'm not saying these things aren't important. But they are practically trivial and easy to master, and they'll mean diddly-squat if you aren't keeping a very tight rein on code quality.
There. I've said it.
May 18, 2009
EU Proposes Consumer Protection Against Buggy Games
If this story is accurate, then this is a very interesting development indeed.
It seems the EU Commission now feel that bugs in game software constitute a faulty product, and as such should be covered by the same kind of consumer protection laws that cover us when we buy a faulty toaster or a faulty lawnmower.
This is a radical step forward in their thinking. Historically, software licence agreements have provided developers with a "get out of jail free" card that says that just because the product you paid good money for doesn't work as advertised, that doesn't mean you're entitled to a refund.
If this law came into effect, it would mean games developers could no longer fob us off with the "we'll fix it in the upgrade" excuse, which often requires us to actually pay to get the fix in many kinds of software (along with a whole bunch of new bugs, of course.)
This would require games developers to seriously up their game - if you'll excuse the pun - as far as reliability is concerned.
I'm sure I don't need to tell you that I'd like to see this law come into force, and to see it extended to cover all commercial software - especially bespoke.
Is it enforceable, though? Well, perhaps with a few simple standards regarding product delivery, then yes, it might just be. A software product is essentially just a set of files that are built from the source and other artefacts. If that product is created using an automated build process, and if both the source files and the build scripts are strictly managed - in both the practical SCM sense and in the legal sense that a copy is kept as part of the developers' records, just as a civil engineering project has to keep records of plans and engineering calculations and wotnot in case the bridge falls down or something - then it should be possible in a dispute to trace a shipped product directly back to the source it was built from. Any attempts at shenanigans on the part of the developers could be rebutted simply by running the build and comparing the resulting set of outputs against what was shipped or downloaded.
Anyway, hurrah for the EU (for once) and let's keep our fingers crossed for the best outcome.
November 10, 2008
SPA 2009 - Scalable .NET Code Reviews
So the results are in and - NEWSFLASH - it looks like my session proposal on scaling up code reviews using automated analysis has been accepted for Software Practice Advancement 2009.
What with that, the Software Craftsmanship conference I'm launching, and my annual electro-shock therapy, the first quarter of 2009 could be pretty busy for one Senor Gormando.
Another scoop is that the SPA venue is changing. Originally they were going to hold it in some remote, isolated hell-hole on the Cambridgeshire-Northamptonshire border. I can't remember the exact name of the place, but I seem to recall Skeletor lived there for a while. Anyway, the new venue is going to be what some would say is the spiritual home of the SPA conference, namely the BCS building in Covent Garden, London.
This is great news for folks who live in or around London, because the registration fee no longer has the considerable burden of accommodation to shoulder, making participation more financially attractive this year. It's not so good news if you live on the Cambridgeshire-Northamptonshire border, but if you do then there's a very good chance that you're Skeletor or one of his minions, in which case you're probably not welcome this year. (Unless, that is, you're presenting a session called "Universal Domination in Ruby", of course...)
If you're coming to SPA (which will be in early April) and are a .NET bod who might want to attend my session, then you'll probably need to bring a laptop and it'll need Visual Studio 2005 or later installed for the practical stuff. We'll be using NDepend (yeah, I know - big surprise) to do the actual code analysis, and if you haven't already installed and fiddled with it before the session, I'll be making a trial version available on disk that installs in the same time that it takes to unzip the files to your hard drive (literally, that's the installation procedure.)
My second session proposal, for a panel debate called The Agile Delusion, was cruelly overlooked again by the selection committee (the fools! Dont they knows genius when they sees it?) But the relocation of the conference to central London opens up the tantalising possibility of running it in a nearby hostelry as a sort of "Off Broadway" event one evening during the 3-day conference. Watch this space.
October 28, 2008
Software Craftsmanship 2009 - Conference In Development
First the good news.
I'm in the process of launching a new conference here in sunny old London Town (or "Larndarn Tarn", if you happen to have been born here).
I can't give away too much just yet, because:
a. There's not that much to give away, and
2. There's many a slip twixt cup and lip, and there's always the danger of these things falling through
But I can tell you that the working title for the conference is Software Craftsmanship 2009.
And I can tell you that the focus is going to be on the "hard skills" that take years to master. You know, the actual craft of writing good software. OO design, test-driven development, refactoring, build automation, architecture, patterns, code generation, modeling, concurrent and distributed programming. That sort of thing. Certainly there won't be any sessions about yet more things you can do with coloured bits of card and lego. Well, not unless anyone's discovered a way to generate working code from them.
I can also tell you that we have a provisional date and a provisionally booked venue. The provisional date is February 26th 2009. I'm not going to reveal the venue just yet, though. But it will be in London, rest assured.
Finally, I can tell you that the program selection committee is already starting to shape up very nicely indeed. And the invites are still going out, so we're looking forward to a very healthy pool of world-class expertise to help pick the final schedule.
Keep your eyes peeled for more information posted on this blog, or join my Yahoo! group for announcements.
An informal request for session proposals will be going out in about a week's time. Email me if you'd like to be included in this mailing.
October 26, 2008
Outsourcing The "Build Phase"
Managers often ask me about the so-called "build phase" of the software development lifecycle, often with the intention of outsourcing it to cheaper and possibly less skilled programmers.
It's actually very easy to identify and has very clear milestones.
If you're working in Visual Studio, for example, it starts when you hit CTRL+Shift+B and usually ends with a message telling you that the "build succeeded".
This is easy to outsource, but arguably this would offer very limited savings.
October 24, 2008
Example Agile Quality Assurance Strategy
For the morbidly curious among you, here's a link to a (suitably anonymised) quality assurance strategy for what some might describe as an "Agile project" team.
There's much emphasis on defect prevention and "left-shifting" (the practice of moving testing further upstream in the design and development process), as well as on automation and the economy of scale that can be achieved.
There are metrics. Don't be afraid. They're only baby metrics, and haven't learned how to bite yet.
There's also some process guidance and a bit of innovation for incorporating non-functional quality requirements into a lightweight, iterative and - most importantly - test-driven development approach.
October 14, 2008
The WAgile Software Development Life Cycle
I've had lots of emails from people asking for guidance on how to adopt a WAgile approach to software delivery.
WAgile, as we all know, stands for "Waterfall-Agile", and is the pinnacle of dysfunctional development methodologies. Yes, folks: literally thousands of projects have failed the WAgile way, and - with the help of this handy cut-out-and-worship WAgile Software Development Life Cycle chart - yours could soon be following them.
The WAgile SDLC, yesterday
And don't forget to write lots of useless comments and completely meaningless tests, too. It's not essential, but every little helps.
And, hey, maybe when you look at this chart you'll realise that you're already doing WAgile. Lucky you! Failure is now imminent, and you can completely screw up, safe in the knowledge that following the WAgile process to the letter will completely exonerate you of any professional responsibility.
Happy days are here again!
September 20, 2008
SPI - "Software People Inspiring"
I don't often meet people who actively disagree with the point of view that the largest factor in the success or failure of software projects is the people who work on them.
Fools who try to argue that "it would never have happened if we'd used Java", or who cite the lack of Scrum or Pair Programming or that they just weren't Unified Process-y enough, as the cause of their difficulties are missing the point somewhat.
If we'd got the right people, those problems would have melted into the background: a light drizzle over an ocean of skill.
Let's not forget that some of the greatest achievements in computer software were made by people punching holes in cards, following no visibly discernible process whatsoever. Perhaps a flow chart of their Software Development Lifecycle would include a "replacing vacuum tubes" phase, but requirements management, analysis and design, testing, change and configuration management and so on?
Not to say that these things don't make life easier. But history proves we can take off without them. Skill, creativity, imagination, dedication, and raw intelligence trump all of these and make the lion's share of the difference when we look at the quality of the end result.
I've worked with enough CMM Level 5 development shops to know that excellent processes can produce risible software. And anyone who's worked with Sun Certified this, Microsoft Certified that and IBM Certified the other will know that mastery of the technology doesn't add much value, either.
And in light of all this, as a consultant who often has to sum up what he does in the time it takes to take the lift from the top floor to the ground floor, I've been looking for a better acronym to describe what I do to replace S.P.I. ("Software Process Improvement"). Because that's not what I do.
I've looked at Agile-type phrases for it (e.g., "evolutionary capability improvement"), and I've looked at goal-driven versions like "software delivery improvement".
But the key word really is PEOPLE. When things get better in a software development organisation, ultimately it's usually because of people getting better at software development. Better individually, and better working together. Indeed, my whole view of "processes" now is that they are just descriptions of how individuals and groups of individuals can interact to achieve a shared goal.
I help developers to learn and improve, and I help teams to interact and collaborate more effectively. I do this mainly by encouraging them to care about what they're doing, and to motivate them to pursue their own learning and to actively share their knowledge. The specific practices, processes, tools and techniques always vary, and - for the most part - they teach themselves what they feel they need to know, while I cheerlead from the sidelines and hopefully inspire them to go further.
The acronym for what I do, then, is still S.P.I.
But now it stands for Software People Inspiring...