December 10, 2018
The Gaps Between The Tools

On a training course I ran for a client last week, I issued my usual warning about immediately upgrading to the latest versions of dev frameworks. Wait for the ecosystem to catch up, I always say.
This, naturally, drew some skepticism from developers who like to keep bang up-to-date with their toolsets. On this occasion, I was vindicated the very next day when one participant realised they couldn't get Cucumber to work with JUnit 5.
I abandoned an attempt to be bang up-to-date using JUnit 5 for a training workshop in Sweden the previous month. The workshop was all about "Third-Generation Testing", which required integration between unit testing and a number of other tools for generating test cases, tracking critical path code coverage, and parallelising test execution. I couldn't get any of them to work with the new JUnit Jupiter model.
So I reverted to safe old JUnit 4. And it all worked just spiffy.
No doubt, at some point, the tools ecosystem will catch up with JUnit 5. But we're not there today. So I'm sticking with JUnit 4. And NUnit 2.x. And .NET Framework 3.5. And the list goes on and on. Basically, take the latest version, subtract 1 from the major release number, and that's where you'll find me.
For sure, the newer versions have newer features, which may or may not prove useful to me. But I'm more concerned about the overall development workflow, and that's why compatibility and interoperability mean more to me than new features.
We're notoriously bad at building dev tools and frameworks that "play nice" with each other. Couple that with a gung-ho attitude to backwards compatibility, and you end up with a very heterogeneous landscape of tools that can barely co-exist within much wider development practices and processes. It's our very own Tower of Babel.
In other technical design disciplines, like electronic engineering, tool developers worked hard to make sure things work together. Simulation tools plug seamlessly into ECAD tools, which talk effortlessly with manufacturing tools and even accounting solutions to provide a relatively frictionless workflow from initial concept to finished product. The latest release of your ASIC design tool may have some spiffy new features, but if it won't work with that expensive simulator you invested in, then upgrading will just have to wait.
Given that many of us are engaged professionally in integrating software to provide our customers with end-to-end processes, it's surprising that we ourselves invest so little in getting our own house in order.
Looking at the average software build pipeline, it tends to be a Heath Robinson affair of clunky adaptors, fudges and workarounds to compensate for the fact that - for example - every test automation tool produces a different output for what is essentially the exact same information. And it boggles the mind why we need 1,001 different adaptors to run the tests in the first place: one for every combination of build tool and test framework imaginable.
If test automation tools all supported the same basic command interface, and produced their outputs in the same standard formats, we could focus on the task in hand instead of wasting time reinventing the same plumbing over and over again. JUnit 5 would already work with Cucumber. No need to wait for someone to patch them back together again.
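To make the idea concrete, here's a rough sketch of what a shared, tool-agnostic result format might look like. This is purely illustrative: the field names and the example test results are my own invention, not any existing standard.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class TestResult:
    """A tool-agnostic record of one test execution."""
    suite: str
    name: str
    status: str       # "passed", "failed" or "skipped"
    duration_ms: int
    message: str = ""

def to_standard_report(results):
    """Serialise results in one shared format any build tool could consume."""
    return json.dumps([asdict(r) for r in results], indent=2)

# Hypothetical results, as any test framework could emit them.
results = [
    TestResult("checkout", "applies_discount", "passed", 12),
    TestResult("checkout", "rejects_expired_card", "failed", 48,
               "expected DECLINED, got ACCEPTED"),
]
report = to_standard_report(results)
print(report)
```

If every framework emitted something like this, and accepted a common command interface to run the tests, the adaptor layer of the build pipeline would mostly disappear.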
And if you're a tool or framework developer protesting "But how will the tools evolve if nobody upgrades?", my advice is to stop breaking my workflows when you release new versions. They're more important than your point solution.
I vote we start focusing more on the gaps between the tools.
February 2, 2014
Announcing Codemanship Developer Suite - Architect Edition

Introducing the new Codemanship Developer Suite - Architect Edition.
If you are a developer, DO NOT READ BEYOND THIS POINT. The following text is for non-technical budget holders only.
For an annual fee of £500,000 (not including sales tax, maintenance and catering), Codemanship Developer Suite - Architect Edition will solve all of your software development problems, and some problems that don't even have anything to do with developing software, like male-pattern baldness and erectile dysfunction.
Codemanship Developer Suite - Architect Edition does this by putting you in the driving seat, removing all key technical decisions from your developers, who are probably incompetent because they keep telling you that creating a social media site with all the features of Facebook can't be done in 4 weeks.
Codemanship Developer Suite - Architect Edition comes as a tightly-integrated family of tools that take the guesswork out of making shit up as you go along.
* Enterprise Text Manipulation & Management allows architects to precisely capture the logic of your software and systems in textual form that can be shared among developers. Choose among dozens of available logic specification languages, including Java, C#, C++, Visual Basic, Ruby, PHP and Python.
* Cloud-based Text File History Management "Git" Hub allows teams to share their logic specifications and maintain a history of all changes for audit purposes. It's good because it's in the cloud.
* Automated Executable Software Generation directly from textual descriptions of the software's logic, targeted at a wide variety of compatible platforms.
* Automated Testing & Verification & Test Suite Management using the exact same logic specification languages used to describe the software itself, greatly reducing the testing learning curve.
* Team Architecture Management tools that have been proven to work over thousands of years. (Pencils not included.)
* Gratuitous & Highly Misleading Reports Of Development Activities for that comforting illusion of control.
Our objective and entirely plausible studies have shown that teams can expect a return on investment of up to 10,000,000%, meaning that most businesses will become immensely wealthy purely as a result of buying Codemanship Developer Suite - Architect Edition.
But don't take our word for it; here's some eyewitness testimony from an entirely real customer who really uses it:
"Since moving to Codemanship Developer Suite, my daughter's nightmares have abated and we no longer suffer at the hands of the poltergeist who has haunted us since 1977." - Mortimer D. Batmobile, Head of Development, S.P.E.C.T.R.E.
To find out more about Codemanship Developer Suite - Architect Edition and to arrange a trial demonstration, click here
April 24, 2010
Software Is Both Art & Science. Can We Move On Now?

Bonjour, mes enfants.
SEMAT has gotten me thinking. Any mention of "engineering" and "science" in software development seems to polarise opinion.
Undoubtedly, there's a large section of the software development community who believe those words simply do not apply. Software development is not a science. It's an art. Or a craft. Like basket weaving. It's not engineering. No sir!
And there's an equally large section of the community who believe the exact opposite. Software development is a science. It is engineering. We can apply scientific principles to shape predictable, well-engineered end products.
Of course, they're both wrong.
Anyone who dismisses the notion of any kind of scientific basis for software development is running away from reality. Everything that exists has a scientific basis. Even American Idol. You just have to understand it. That we don't fully understand the science of software does not mean that no such science exists or that we'll never understand. I just can't help being reminded of UFOlogists who claim that "science cannot be applied to the study of UFOs". What they mean, of course, is "I've got a good thing going here selling my unscientific ideas to these schmucks, and I'd like to keep it that way, thanks".
And anyone who believes that software development can be completely tamed by science is equally deluded. There are sciences - emerging in recent decades - that teach us that there are many things in the world that, while it's possible that we can fully understand them, we will never be able to control them. Chaos is science. And software development is mostly chaos. Any vision of "software engineering", where pulling lever A causes X and pulling lever B causes Y with certainty, is the product of naivety at the macro, project scale. Life just isn't like that. Clockwork might be, but life isn't. Software development is intractably complex and unpredictable.
In the real world, biology has come up with processes that deal effectively - but not predictably - with intractably complex problems. Evolution is one such process. Evolution solves complex, multi-dimensional problems by iterating through generations of potential solutions. That is science. We can understand it. But we cannot even begin to predict what solution a process of evolution will reach, or how long it will take to reach it.
The clockwork, Newtonian paradigm of "software process engineering" is fundamentally flawed. And anyone who believes that it's possible to attain "value" deterministically is deeply mistaken. "If we pull lever A, we'll ship an extra 10,000 units". Give me a break!
Flawed, too, is any notion that this means that there's no engineering at all to be done in creating good software. I doubt anyone would claim that making rock music is "engineering" - well, anyone sane, at least. But there is science that can be applied within this process.
It's possible mathematically to predict what effect the choice of software compressor and the compression settings used will have on the amplitude of a recording across a certain frequency range. Indeed, it is helpful in getting the best-sounding mix. There is such a thing as "audio engineering" within the music production process. Granted, it's chiefly a creative process. But there is useful science we can apply within it to help tweak the results closer to perfection.
Similarly, while software design is chiefly a creative process, it can be useful to know if the code we're writing "smells" in any significant way. Some code smells can be detected just by looking at the code and using our judgement. Others are more subtle and harder to spot, but just as damaging to maintainability. Code analysis tools, as they grow more sophisticated, can complement our "eye for good design" every bit as much as audio engineering tools complement our "ear for a good mix" and music composition aids can complement our "ear for a good tune".
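As a toy illustration of the kind of mechanical smell-detection a tool can do for us, here's a sketch that flags overly long functions in Python source. The threshold is an arbitrary assumption for illustration; real analysis tools are far more sophisticated.

```python
import ast

LONG_METHOD_THRESHOLD = 15  # arbitrary cut-off, purely for illustration

def long_methods(source):
    """Return (name, line count) for functions whose bodies span too many lines."""
    tree = ast.parse(source)
    smells = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > LONG_METHOD_THRESHOLD:
                smells.append((node.name, length))
    return smells

sample = "def tiny():\n    return 1\n"
print(long_methods(sample))  # a two-line function raises no alarm: []
```

The judgement of whether a long method is actually a problem remains ours; the tool just points an unblinking eye at things we might not have noticed.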
The trick, I believe, is to find the balance and work within the limitations of science and engineering and creative disciplines. Trying to figure out the formula for "valuable software" is every bit as futile as looking for a formula for making "hit records". And relying entirely on your eyes and ears to refine an end product has severe limitations. For millennia, we've used tools and theory to tweak and refine all manner of creative end products. We even have a word for it: "machine-tooled". The fact that you can fit a computer more powerful than all of the computers in the world 30 years ago in your breast pocket, and it looks good, is testament to this symbiosis of science and art.
We can continue to refine and extend our scientific understanding of code and coding through research and exploration, just like any science. And new and useful tools will emerge from that understanding that will make it possible for us to produce better quality code, for sure.
And we can also continue to develop and refine the art of software development. Through reflection, practice and sharing.
There is such a thing as "software engineering", but it has limited scope, just like "audio engineering". Specifically, it's limited to things we can predict and can control, like what happens to coupling and cohesion if we move a class from one package to another.
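Coupling is a good example of something we genuinely can compute before making a change. A minimal sketch, using an invented dependency map (the package names are hypothetical):

```python
# Toy model: each package maps to the set of packages it depends on.
deps = {
    "billing":   {"customers", "payments"},
    "payments":  {"customers"},
    "customers": set(),
}

def efferent_coupling(package, dependencies):
    """Ce: how many packages this one depends on (outgoing)."""
    return len(dependencies[package])

def afferent_coupling(package, dependencies):
    """Ca: how many packages depend on this one (incoming)."""
    return sum(1 for p, out in dependencies.items() if package in out)

print(efferent_coupling("billing", deps))    # depends on 2 packages
print(afferent_coupling("customers", deps))  # 2 packages depend on it
```

Moving a class between packages changes these numbers in an entirely predictable way, which is exactly the narrow sense in which "software engineering" applies.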
The bigger picture of "delivering value" is a complex human endeavour, and creativity, judgement and more than a sprinkling of luck are all we can bring to bear in any meaningful way at this level. We may be capable of understanding, with the benefit of hindsight, why Feature X was used more than Feature Y when the software was released. But then, with hindsight, we understand quite a lot about volcanoes and hurricanes, too. These are things that can only really be understood with hindsight. We don't see them coming until they're almost upon us, and we have two choices - stay and risk everything, or get out of the way and live to fight another day.
In years to come, I'll probably notice more and more a difference between "hand-rolled" software and software that has been written with some help from "software engineering" tools - the more grown-up descendants of tools like XDepend and Jester.
But I sincerely doubt I will ever be able to tell at the start of a project whether the resulting software will enjoy success or not. Sure, I'll be able to look back on projects and say "hey, y'know what we got wrong?" But far in advance, the outcome is every bit as unknowable as a hurricane, volcanic eruption or hit record. So where things like requirements and processes and "enterprise architecture" are concerned, I'll stick with arts and crafts.
February 15, 2010
Wheel-driven Reinvention

One aspect of software development which is at once both amusing and troubling is the ability of us young whippersnappers to completely ignore what's gone before and reinvent established wisdom in our own image - often stealing the credit.
Take testing as an example. What do we know about testing software today that we didn't know, say, thirty years ago? Sure, we have new tools and testing has to fit within new approaches to the process of writing software as a whole, but fundamentally what have we discovered in the last decade or so?
Testing behaviour still works much as it has always worked, by necessity. We must put the system under test in some desired initial state, then we must provide some stimulus to the system to trigger the behaviour we wish to test, then we must make observations about the final state of the system or about any behaviours that should have been invoked (e.g., a remote procedure call or a database request) in response to the combination of our stimulus and the initial conditions. And this process must be repeatable and predictable, like any good scientific test.
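That initial-state / stimulus / observation shape is exactly the arrange-act-assert structure of any xUnit test, then or now. A minimal sketch in Python's standard unittest framework, with a trivial invented Account class as the system under test:

```python
import unittest

class Account:
    """A trivial system under test, invented purely for illustration."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class WithdrawalTest(unittest.TestCase):
    def test_withdrawal_reduces_balance(self):
        account = Account(balance=100)         # 1. desired initial state
        account.withdraw(30)                   # 2. stimulus
        self.assertEqual(70, account.balance)  # 3. observe the final state

suite = unittest.defaultTestLoader.loadTestsFromTestCase(WithdrawalTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Swap the syntax for FORTRAN and a batch job and the three steps are unchanged.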
Though the culture of testing software may have evolved, much of it for the better, and the technology may have improved (though that is questionable), and though there are undoubtedly more people testing their systems today, when it comes to the business of writing and executing tests, there's really nothing new under the sun.
The same is true of many aspects of contemporary software development. Like it or not, iterative and incremental development is older than C. We just weren't doing it back then, in the main.
Indeed, pick any "new" aspect of development and trace it back to its roots, and we discover that most novelties are actually much older than many of us thought. Objects are an invention from the sixties. Use cases hail from the seventies. Responsibility-driven design was being practiced before Frankie told us to Relax. UML existed in a fragmentary form before the Berlin Wall came down. People were writing code to satisfy tests back when those tests were stored on magnetic tape. Indeed, some of the descriptions of programming that was done for the very first computers ring bells with those of us who practice that black art today.
Younger developers like me, though, seem to readily believe that our time around is the first time around and feel no compunction to educate ourselves about the achievements of "old-timers", preferring instead to invent things anew - with sexier names and shinier tools, admittedly.
Our desire to reinvent goes as far as redefining words that already have a well-established definition. "Agile" no longer means "nimble", "quick" or "spry". Today it apparently means "communication, feedback, simplicity and courage". Or "iterative and incremental". Or "evolutionary". Or "Scrum-Certified". I caught someone the other day proffering their definition of "testable", which apparently now requires us to go through "public interfaces". This is bad news for many scientists, who must now rewrite their peer-reviewed papers to incorporate the appropriate programming language with which to express the "testability" of their theories.
If software development were physics, we might expect newcomers to work through and understand the current body of knowledge before they start adding to it. That way, at the very least, we could avoid a great deal of duplicated effort. We might also avoid the tendency of our industry to throw "old-timers" on the scrapheap just because - even though they are probably just as current in their practical ability to deliver working software - they're not "down with the kids" on all the latest street slang for concepts that have been kicking around the block for decades.
The thinking of our elders and betters is far from irrelevant and outmoded. We can still learn a thing or two from the likes of Jacobson, Knuth and Hoare, should we choose to reject fashion in favour of substance in the approach we take to our work.
November 10, 2009
Scrum or Kanban? Pick One And Get On With Delivering Quality Code!

I'm getting increasingly vexed by this unhealthy obsession with planning and project management, especially among the Agile community.
The likes of Scrum, Kanban and other variations of the put-stuff-into-some-kind-of-prioritised-work-queue-and-pick-new-work-from-the-top theme have become an obsession to the point that one could be forgiven for thinking that this is what software projects are all about.
They are not optional, of course. You need the work queue. It needs to be effectively prioritised. You need to track progress as objectively as possible. It needs to be highly visible and transparent. And you need the customer to drive all of this.
But these are no-brainers. There's an inescapable logic behind them, and they should take mere minutes to learn to a practical level where they can be successfully applied.
Writing reliable and maintainable code, on the other hand, takes years to master. And I see increasing numbers of teams who are so caught up in the whole planning and project management aspect of their work that they lose focus on bettering themselves as programmers. Indeed, many of them fall so in love that they cease to be programmers and instead travel the land as disciples of their chosen methodology, spreading the good word to hapless other teams, who in turn become infected with the Scrum/Kanban meme.
That these practices are so very easy to learn is what makes them so virulent. And, if done right, they do help. They help a lot. There's no questioning that.
But if you are churning out crappy unclean code, they don't. Agile relies on code being easier to change. If it is complicated, riddled with duplication and unmanaged dependencies, lacking regression test assurance and basically cobbled together under the relentless pressure of a Scrum or Kanban drumbeat (Kanban's beat sounding a bit more like free-form jazz, obviously), teams will inevitably hit the barrier of increasing software "viscosity" and all their brilliant planning and tracking will just reveal for all to see how quickly productivity is slowing down.
You cannot deliver a continuous stream of anything if your bad habits keep clogging up the pipes.
So clean code is a prerequisite of Agile project management. Teams must focus 90%+ of their effort on delivering higher quality code, and not waste their time obsessing about whether they should estimate using Fibonacci numbers in their Planning Poker sessions or what colour index cards they should use for reporting bugs.
I'm not saying these things aren't important. But they are practically trivial and easy to master, and they'll mean diddly-squat if you aren't keeping a very tight rein on code quality.
There. I've said it.
May 18, 2009
EU Proposes Consumer Protection Against Buggy Games

If this story is accurate, then this is a very interesting development indeed.
It seems the EU Commission now feel that bugs in game software constitute a faulty product, and as such should be covered by the same kind of consumer protection laws that cover us when we buy a faulty toaster or a faulty lawnmower.
This is a radical step forward in their thinking. Historically, software license agreements have provided developers with a "get out of jail free" card they can play that says that just because the product you paid good money for doesn't necessarily work as advertised that doesn't mean you're entitled to a refund.
If this law came into effect, it would mean games developers could no longer fob us off with the "we'll fix it in the upgrade" excuse - an upgrade which, for many kinds of software, we actually have to pay for (along with a whole bunch of new bugs, of course).
This would require games developers to seriously up their game - if you'll excuse the pun - as far as reliability is concerned.
I'm sure I don't need to tell you that I'd like to see this law come into force, and to see it extended to cover all commercial software - especially bespoke.
Is it enforceable, though? Well, perhaps with a few simple standards regarding product delivery, then yes, it might just be. A software product is essentially just a set of files that are built from the source and other artefacts. Suppose that product is created using an automated build process, and both the source files and the build scripts are strictly managed - in the practical SCM sense, and in the legal sense that a copy is kept as part of the developers' records, just as a civil engineering project has to keep records of plans and engineering calculations and whatnot in case the bridge falls down or something. Then it should be possible in a dispute to trace a shipped product directly back to the source it was built from. Any attempts at shenanigans on the part of the developers could be rebutted simply by rerunning the build and comparing the resulting set of outputs against what was shipped or downloaded.
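The comparison step is mechanically simple. A minimal sketch of the idea, fingerprinting a set of build outputs so a shipped product can be matched against a recorded rebuild (the file name and contents here are invented for illustration):

```python
import hashlib
import tempfile
from pathlib import Path

def fingerprint(paths):
    """One combined SHA-256 digest over a sorted set of build outputs."""
    digest = hashlib.sha256()
    for path in sorted(paths):
        digest.update(Path(path).read_bytes())
    return digest.hexdigest()

# Simulate "rebuild and compare": identical outputs, identical fingerprints.
with tempfile.TemporaryDirectory() as build_dir:
    artefact = Path(build_dir) / "product.bin"
    artefact.write_bytes(b"compiled output")
    rebuilt = fingerprint([artefact])
    shipped = fingerprint([artefact])
    print(rebuilt == shipped)  # True: the product traces back to its build
```

In practice this also requires the build itself to be reproducible - same inputs in, byte-identical outputs out - which is a discipline in its own right.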
Anyway, hurrah for the EU (for once) and let's keep our fingers crossed for the best outcome.
November 10, 2008
SPA 2009 - Scalable .NET Code Reviews

So the results are in and - NEWSFLASH - it looks like my session proposal on scaling up code reviews using automated analysis has been accepted for Software Practice Advancement 2009.
What with that, the Software Craftsmanship conference I'm launching, and my annual electro-shock therapy, the first quarter of 2009 could be pretty busy for one Senor Gormando.
Another scoop is that the SPA venue is changing. Originally they were going to hold it in some remote, isolated hell-hole on the Cambridgeshire-Northamptonshire border. I can't remember the exact name of the place, but I seem to recall Skeletor lived there for a while. Anyway, the new venue is going to be what some would say is the spiritual home of the SPA conference, namely the BCS building in Covent Garden, London.
This is great news for folks who live in or around London, because the registration fee no longer has the considerable burden of accommodation to shoulder, making participation more financially attractive this year. It's not so good news if you live on the Cambridgeshire-Northamptonshire border, but if you do then there's a very good chance that you're Skeletor or one of his minions, in which case you're probably not welcome this year. (Unless, that is, you're presenting a session called "Universal Domination in Ruby", of course...)
If you're coming to SPA (which will be in early April) and are a .NET bod who might want to attend my session, then you'll probably need to bring a laptop and it'll need Visual Studio 2005 or later installed for the practical stuff. We'll be using NDepend (yeah, I know - big surprise) to do the actual code analysis, and if you haven't already installed and fiddled with it before the session, I'll be making a trial version available on disk that installs in the same time that it takes to unzip the files to your hard drive (literally, that's the installation procedure).
My second session proposal, for a panel debate called The Agile Delusion, was cruelly overlooked again by the selection committee (the fools! Dont they knows genius when they sees it?) But the relocation of the conference to central London opens up the tantalising possibility of running it in a nearby hostelry as a sort of "Off Broadway" event one evening during the 3-day conference. Watch this space.
October 28, 2008
Software Craftsmanship 2009 - Conference In Development

First the good news.
I'm in the process of launching a new conference here in sunny old London Town (or "Larndarn Tarn", if you happen to have been born here).
I can't give away too much just yet, because:
a. There's not that much to give away, and
2. There's many a slip twixt cup and lip, and there's always the danger of these things falling through
But I can tell you that the working title for the conference is Software Craftsmanship 2009.
And I can tell you that the focus is going to be on the "hard skills" that take years to master. You know, the actual craft of writing good software. OO design, test-driven development, refactoring, build automation, architecture, patterns, code generation, modeling, concurrent and distributed programming. That sort of thing. Certainly there won't be any sessions about yet more things you can do with coloured bits of card and lego. Well, not unless anyone's discovered a way to generate working code from them.
I can also tell you that we have a provisional date and a provisionally booked venue. The provisional date is February 26th 2009. I'm not going to reveal the venue just yet, though. But it will be in London, rest assured.
Finally, I can tell you that the program selection committee is already starting to shape up very nicely indeed. And the invites are still going out, so we're looking forward to a very healthy pool of world-class expertise to help pick the final schedule.
Keep your eyes peeled for more information posted on this blog, or join my Yahoo! group for announcements.
An informal request for session proposals will be going out in about a week's time. Email me if you'd like to be included in this mailing.
October 26, 2008
Outsourcing The "Build Phase"

Managers often ask me about the so-called "build phase" of the software development lifecycle, often with the intention of outsourcing it to cheaper and possibly less skilled programmers.
It's actually very easy to identify and has very clear milestones.
If you're working in Visual Studio, for example, it starts when you hit CTRL+Shift+B and usually ends with a message telling you that the "build succeeded".
This is easy to outsource, but arguably this would offer very limited savings.
October 24, 2008
Example Agile Quality Assurance Strategy

For the morbidly curious among you, here's a link to a (suitably anonymised) quality assurance strategy for what some might describe as an "Agile project" team.
There's much emphasis on defect prevention and "left-shifting" (the practice of moving testing further upstream in the design and development process), as well as on automation and the economy of scale that can be achieved.
There are metrics. Don't be afraid. They're only baby metrics, and haven't learned how to bite yet.
There's also some process guidance and a bit of innovation for incorporating non-functional quality requirements into a lightweight, iterative and - most importantly - test-driven development approach.