May 17, 2013
Straw Man TDD
A lot of the criticisms of Test-driven Development I hear are really attacks on a mythical version of TDD that no right-minded advocate ever put forward.
Nevertheless, being a TDD trainer and coach, I do still devote time to answering these straw man criticisms and objections. I thought it would be useful to collect some of the most common misconceptions in one place that I can point people to when I'm just too tired and/or drunk to answer them any more.
1. TDD means not doing any up-front thinking about design
Nobody has ever suggested this. It would be madness. Read books like Extreme Programming Explained again. You'll see sketches. You'll see CRC cards. You'll even see UML. (Gasp!)
The question really is about how much up-front design is sufficient. And the somewhat glib answer is "just enough". I tend to qualify that as "just enough to know what tests you need to pass". So, if your approach is focused on roles, responsibilities and interactions, then I'd want to have a high-level idea of what those are before diving into code. If it's more an algorithmic focus, I'd want to have a test list that can act as a roadmap for key examples that - taken together - explain the algorithm. And so on.
I'd stop at the point where I'm asking questions that are best answered in code (e.g., is this an interface? Should this method be exposed? etc.). Code is for details.
2. TDD takes significantly longer because you write twice as much code
Once you've got the hang of TDD - and that can take months of practice - we find it doesn't take significantly longer. Mostly because the bulk of our time isn't spent typing, it's spent thinking and, when we don't take care, fixing problems. Fixing problems, we find, generally takes more time than avoiding them. So much so, in fact, that working in the very short feedback loops of TDD and testing thoroughly as we go can turn out to be a way of saving time.
Most developers and teams who report a loss of productivity when they try TDD are actually reporting the learning curve. Which can be steep. This is why it can make good commercial sense to seek help in those early stages from someone who's been there, done that and got the t-shirt.
3. TDD leads to mountains of test code that make it harder to change your source code
There are three key steps in TDD, but most developers miss out or skimp on the third one - refactoring. So, when they report that they tried TDD for a few months, but found after a while that they couldn't change their source code without breaking loads of unit tests, I'm inclined to believe that this is what's really happened.
Test code is source code. If the test code is difficult to change, your code is difficult to change. So we must apply as much effort to the maintainability of test code as to the code it's testing. It must be easy to read and understand. It must be as simple as we can make it. It must be low in duplication. And, very importantly, it must be loosely coupled to the interfaces of the objects it's testing.
Think of UI testing. Maybe we wrote thousands of lines of scripts that click buttons and populate text boxes and all that sort of thing, binding our UI tests very closely to the implementation of the UI itself. So if we want to change the UI design - and we will - a whole bunch of dependent tests break.
Better to refactor our UI test scripts so that interactions with the concrete UI are encapsulated in one place and invoked through meaningfully-named helper functions, so we can write test scripts in the abstract (e.g., submitMortgageApplication() instead of submitButton.click()).
The same applies to unit tests. If we repeatedly invoke the same methods on an object in our tests, better to encapsulate those interactions behind abstract and meaningful interfaces so it all happens in one place only.
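To make that concrete, here's a minimal sketch of the kind of encapsulation I mean - all the names are invented for illustration:

```java
// Invented interfaces for illustration - any concrete UI driver
// (Selenium, a Swing robot, etc.) could sit behind them.
interface TextBox { void setText(String text); }
interface Button { void click(); }

interface MortgageUi {
    TextBox textBox(String name);
    Button button(String name);
}

// All knowledge of the concrete widgets lives in this one helper, so
// test scripts can simply call submitMortgageApplication(...).
class MortgageApplicationDriver {
    private final MortgageUi ui;

    MortgageApplicationDriver(MortgageUi ui) {
        this.ui = ui;
    }

    void submitMortgageApplication(String applicant, String amount) {
        ui.textBox("applicantName").setText(applicant);
        ui.textBox("loanAmount").setText(amount);
        ui.button("submit").click(); // the only line that knows about the button
    }
}
```

If the UI changes, only this helper changes; the test scripts that call it don't.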
4. TDD does not guarantee bug-free code
This isn't a straw man, per se. But to say that "we don't bother doing X because X is not completely perfect" isn't much of an argument against doing X when no approach guarantees perfection. When people throw this one at me, I'm naturally keen to see their bug-free code.
Let's face it, the vast majority of teams who don't do TDD would benefit from doing something like TDD. They'd benefit from working towards more explicit, testable outcomes. They'd benefit from shorter and less subjective feedback loops. They'd benefit from continuous refactoring. They'd benefit from fast, cheap regression testing. Their software would be more reliable and easier to maintain, and - once they've worked their way up the learning curve - it won't cost them more to achieve those better results. There are, of course, other approaches than TDD that can achieve these things. But, by Jiminy, they don't half feel like TDD when you're doing them (which I have).
5. You are not designing domain abstractions, you are designing tests
This is a new addition to the fold, courtesy of some chap on That Twitter who obviously thinks I don't know one end of a domain model from a horse's backside.
Now, I've spent a fair chunk of my career modeling businesses - back in the good old days of "enterprise architecture", when that was where the big bucks were. So I do know a thing or two about this.
What I know is that those domain abstractions have to come from somewhere. How do we know we need a customer, and that a customer might have both a billing address and a shipping address, which may be the same address, and that a customer may be a person or a company?
We know it because we see examples that require it to be so. If we don't see examples on which these generalisations are based, then our domain model is pure conjecture based on what we think the world our systems are modeling might look like (probably). I design software to be used, and it has been considered a good idea to drive the design from examples of usage for longer than I've been alive. Even when we're not designing software, but simply modeling the domain in order to understand it - perhaps to improve the way our business works - it works best when we explore with examples and generalise as we go. In TDD, we call this "triangulation".
I will very often sketch out the concepts that play a part in a collection of scenarios - or examples - and create a generalised model that satisfies them all as a basis for the tests I'm about to write. (See Straw Man #1, of which this is just another example.)
When we generalise without exploring examples, we tend to find our domain models suffer from a smell we call "Speculative Generality". We can end up with unnecessarily complex models that often turn out not to be what's needed to satisfy the needs of end users.
Good user-centred software design is a process of discovery. We don't magic these abstractions and generalisations out of thin air. We discover the need for them. At its very essence, that's what TDD is. I can't think of a single mainstream software development method of the last few decades that wasn't driven by usage scenarios or examples. There's a very good reason for that. To just go off and "model the domain" is a fool's errand. Model for a purpose, and that purpose comes first.
If you practice TDD, but don't think about the domain and the design up-front, then you're doing TDD wrong. It's highly recommended you think ahead. Just as long as you don't code ahead.
6. TDD doesn't work for the User Interface
Let's backtrack a little. Remember those good old days, about 10 minutes ago, when I told you that you should decouple your test code from the interfaces that it tests?
Those were the days. David Cameron was Prime Minister, and you could buy a pint of beer for under £4.
Anyhoo, it turns out - as if by magic - that it's not such a bad idea to decouple the logic of user interactions from the specific UI implementation in the architecture of your software. That is to say, your knobs and widgets in the UI should do - to use the scientific parlance - "f**k all" as regards the logic of your application.
The workflow of user interactions exists independently of whether that workflow happens through a Java desktop application or an iOS smartphone app.
A tiny sliver of code is needed to glue the logical user experience to the physical user experience. If more than 5% of your code is dependent on the UI framework you're using, you're very probably doing it wrong.
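Sketched in code (with invented names), that separation might look like this - the workflow depends on an abstraction of the user experience, and nothing else:

```java
// The workflow knows nothing about Swing, iOS or HTML - it talks to an
// abstraction of the user experience. (All names here are invented.)
interface CheckoutView {
    void showTotal(double total);
    void showError(String message);
}

class CheckoutWorkflow {
    private final CheckoutView view;

    CheckoutWorkflow(CheckoutView view) {
        this.view = view;
    }

    void purchase(double unitPrice, int quantity) {
        if (quantity < 1) {
            view.showError("Quantity must be at least 1");
        } else {
            view.showTotal(unitPrice * quantity);
        }
    }
}

// The framework-dependent 5% is then a dumb implementation of CheckoutView
// that pushes these values into labels, dialogs or HTML - and nothing else.
```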
And for that last 5%... well, you'd be surprised at how testable it really is. It may take some ingenuity, but it's often more do-able than you think.
Take web apps: all it takes is a fake HTTP context, and we've got ourselves 100% coverage. (Whatever that means.) Java Swing is equally get-at-able. As are .NET desktop GUIs. You just have to know where to stick your wotsit.
If you'd like to see a few other TDD myths debunked, while getting some hands-on practice in an intensive and fun workshop, join us in London on July 13th.
May 10, 2013
Making The Untestable Testable With Mocks - Resist Temptation To Bake In A Bad Design
Just a quick note before my next pairing session about using mock object frameworks to make untestable code testable.
Mocking frameworks have grown in their sophistication, for sure. But I fear they may have mutated into testing tools, rather than the design aids that their originators intended.
Say, for example, you're trying to write unit tests for some legacy code that depends on a static method which accesses the file system. We want unit tests that run quickly, and reading and writing files means slow unit tests. So we need some way of invoking the methods we want to test without them calling that static method.
Enter stage right: UberMock (or whatever you're using). UberMock solves this problem with some metaprogramming jiggery-pokery that makes it possible to specify that a mock version of a static method be invoked at runtime. We write unit tests that set up expectations on that mock static method call. That is to say: we expose an internal detail that the static method - in mock form - should be invoked.
That's a legacy code "gotcha". We now have unit tests. Hoorah! But these unit tests depend on this internal design detail. And make no mistake - it's a design flaw we'll want to get rid of later.
If we decide, after we've got some tests around it, to refactor this horrid code so that we're observing the Open-Closed Principle (the "O" in "SOLID" - meaning that classes should be open for extension but closed for modification, which is not possible when we depend on static methods that can't be substituted with overridden implementations without the aforementioned metaprogramming jiggery-pokery), we cannot do so without re-writing our tests.
The tests we write that depend on internal design details of legacy code effectively bake in that legacy design, making refactoring doubly difficult at the very least.
If our ultimate aim is to invert that dependency on a static method, so that the code now relies on some dependency-injected abstraction, it tends to work out easier in the long run to put that abstraction in place first, and then use mocks to unit test that code.
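As a rough sketch (invented names, Mockito for the mocking), that might look like this:

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.Test;

// The legacy static dependency (stands in for whatever your code calls today).
class FileStore {
    static String read(String path) {
        throw new UnsupportedOperationException("slow file system access");
    }
}

// Step 1: introduce the abstraction and inject it.
interface DocumentStore {
    String read(String path);
}

class FileDocumentStore implements DocumentStore {
    public String read(String path) {
        return FileStore.read(path); // the static call now lives in one adapter
    }
}

class ReportGenerator {
    private final DocumentStore store; // dependency-injected abstraction

    ReportGenerator(DocumentStore store) {
        this.store = store;
    }

    String generate(String templatePath) {
        return "REPORT: " + store.read(templatePath);
    }
}

// Step 2: unit test against the abstraction - no metaprogramming required,
// and the test knows nothing about the file system.
public class ReportGeneratorTest {
    @Test
    public void generatesReportFromTemplate() {
        DocumentStore store = mock(DocumentStore.class);
        when(store.read("template.txt")).thenReturn("hello");

        assertEquals("REPORT: hello",
                new ReportGenerator(store).generate("template.txt"));
    }
}
```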
Don't bake in a design that you'll later need to change
It's a little chicken-and-egg, I grant you. Ideally, we'd want unit tests around that code before we tried to introduce the abstraction - but how do we do that, without baking in the old design, until the abstraction's in place?
It's one of those situations where, I'm afraid, the answer is that you're going to have to be disciplined about it. There's usually no quick fix. You may have to rely on slow and cumbersome system tests for a while. Or even - gulp - manual testing.
But experience has taught me that, in the final reckoning, it can be well worth it to avoid pouring quick-drying cement on an already rigid and brittle design.
Ah, and I hear my next pairing session calling.
March 7, 2013
Intensive Test-driven Development, London April 20th
The world's best-value public TDD course is back!
I'll be running an intensive TDD workshop in central London on Saturday April 20th.
Previous public TDD workshops have sold out, and folk have travelled from as far afield as Russia and Dubai to take advantage of the amazingly low £99 price tag.
You can find out more and book here.
February 13, 2013
TDD & Mocks - Working Backwards From Expectations
On my infamous TDD training workshop, I encourage participants to write the test assertion first and work backwards to the set-up. This is a good way to turn our thinking around, starting with the "what" and working our way back to a "how" that directly supports the "what".
What often throws people is working with mock objects. Because we're not writing explicit assertions using our assert() functions, we may fail to spot that mock tests also make assertions.
In Mockito, for example, we assert that a method should be invoked using verify(). This is the "what" of an interaction test. And it's possible to start there and work our way back, just as we might do with traditional assertions.
In this example, I verify that a reserveSeat() method is called on a mock Performance object, which I intend to inject into a new instance of BoxOffice, the class under test.
Working my way backwards, I write the verify() statement, and then declare Performance and the reserveSeat() method.
Eclipse, which the eagle-eyed among you may have realised is designed for working backwards if we so wish, prompts me to declare a local variable for mockPerformance, which I make of type Performance.
It then prompts me to declare the type Performance. Many mocking tools might require me to make Performance an interface if I intend to mock it in the simplest way. Not so with Mockito. It treats mock classes and interfaces transparently the same (one of the things I like about it - saves me declaring lots of unneeded abstractions.) I declare the class Performance. Eclipse then prompts me to declare the method reserveSeat() on Performance.
Then, Eclipse prompts me to initialise the mockPerformance variable, which I do as a mock object.
As I'm working backwards, the next thing I want to write (having now written my assertion) is the action I want to test.
My test is that, in the execution of boxOffice.reserveSeat(), it should tell my mockPerformance to reserve that seat for the specific credit card number. Again, working back from here, Eclipse prompts me to declare the local variable boxOffice, which in turn leads to me declaring the BoxOffice class and the reserveSeat() method on it.
Then I'm prompted to initialise boxOffice, which I do, injecting a collection of performances from which "Boffoonery" - the name of my mock Performance - should be selected by name.
I'm now prompted to declare the constructor on BoxOffice that accepts a HashMap as the parameter.
Finally, I need to inject my mock Performance into the appropriate slot in the HashMap so it will be found using the name "Boffoonery".
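Pieced together - with signatures that are my best guess from the description - the whole thing might look something like this:

```java
import static org.mockito.Mockito.*;

import java.util.HashMap;
import org.junit.Test;

// Skeleton classes as Eclipse would generate them - just enough to compile.
class Performance {
    void reserveSeat(char row, int number, String cardNumber) {
    }
}

class BoxOffice {
    private final HashMap<String, Performance> performances;

    BoxOffice(HashMap<String, Performance> performances) {
        this.performances = performances;
    }

    void reserveSeat(String performanceName, char row, int number, String cardNumber) {
        // deliberately empty - we haven't written the code to pass the test yet
    }
}

public class BoxOfficeTest {

    @Test
    public void reservesSeatOnThePerformanceWithTheMatchingName() {
        Performance mockPerformance = mock(Performance.class);

        HashMap<String, Performance> performances = new HashMap<String, Performance>();
        performances.put("Boffoonery", mockPerformance);

        BoxOffice boxOffice = new BoxOffice(performances);
        boxOffice.reserveSeat("Boffoonery", 'A', 6, "1234");

        verify(mockPerformance).reserveSeat('A', 6, "1234");
    }
}
```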
And there is my complete failing interaction test. If I run the test now - always a good idea to see the test fail in the way you expect it should, so you know it's a valid test - it fails with the message: Wanted but not invoked: mockPerformance.reserveSeat('A', 6, '1234')
So we're good to go on writing the simplest code to pass that test.
February 10, 2013
Parameterising Unit Tests Without Framework Support
Pairing with my apprentice-to-be, Will, on Friday highlighted a problem with some unit testing frameworks, which is that they don't all offer built-in support for parameterised tests.
This isn't a major stumbling block to parameterising our tests. Back in the days when most frameworks didn't support them, we just used parameterised methods and called them from our tests (e.g., when looping through a list of test case data.)
If we wanted to refactor the test fixture above into a single parameterised test, without using built-in framework support, we have simply to extract the body of one of the tests into its own method and introduce parameters for sequence index and expected result.
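Something like this, for instance (a Java sketch with invented names, using a Fibonacci fixture like the one that crops up below):

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// A stand-in for the class under test (the original fixture isn't shown here).
class Fibonacci {
    static int numberAt(int index) {
        return index < 2 ? index : numberAt(index - 1) + numberAt(index - 2);
    }
}

public class FibonacciTest {

    @Test
    public void generatesTheFibonacciSequence() {
        checkFibonacci(0, 0);
        checkFibonacci(1, 1);
        checkFibonacci(2, 1);
        checkFibonacci(3, 2);
        checkFibonacci(4, 3);
    }

    // The extracted, parameterised test body: sequence index in,
    // expected number out.
    private void checkFibonacci(int index, int expected) {
        assertEquals(expected, Fibonacci.numberAt(index));
    }
}
```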
It's as easy as that, really. Well, almost.
What happens if one of our tests fails? With built-in support for parameterised tests, the framework would report which test case failed. But here, we'd only get a report of the assertion that failed. We would have to work backwards from that to deduce which test case failed. And if multiple test cases may expect the same result (e.g., the second and third Fibonacci numbers should be 1), or if we're asserting that some condition is true instead of comparing expected and actual outcomes, it may be ambiguous as to which test case actually failed.
So we can add a little extra information when an assertion fails to make it clear exactly which test case we're talking about:
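For example, replacing the checkFibonacci() helper from the sketch above (using JUnit's optional assertion message):

```java
// Same check, but a failure now names the test case explicitly.
private void checkFibonacci(int index, int expected) {
    assertEquals("Fibonacci number at index " + index,
            expected, Fibonacci.numberAt(index));
}
```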
Another drawback with hand-rolled parameterised tests is that, with most unit test frameworks, when an assertion fails, the test stops executing. So if we wrap up the execution of multiple test cases in one test method, if the first case fails, we'll get no results for subsequent test cases.
To overcome this, we need to go further. One solution would be, instead of calling assert() functions, to remember the result of each check and keep a rolling score. If all the cases come up green, then we call pass() at the end. If any come up red, we call fail() and report all the test cases that failed when we do. At this point, of course, we'd be edging closer and closer to writing our own parameterised unit testing framework.
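A rough sketch of that rolling-score idea, reusing the hypothetical Fibonacci class from above:

```java
import static org.junit.Assert.fail;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class FibonacciRollingScoreTest {

    @Test
    public void generatesTheFibonacciSequence() {
        int[][] cases = { { 0, 0 }, { 1, 1 }, { 2, 1 }, { 3, 2 }, { 4, 3 } };

        // Check every case, remembering failures rather than stopping
        // at the first one.
        List<String> failures = new ArrayList<String>();
        for (int[] testCase : cases) {
            int actual = Fibonacci.numberAt(testCase[0]);
            if (actual != testCase[1]) {
                failures.add("index " + testCase[0] + ": expected "
                        + testCase[1] + " but was " + actual);
            }
        }

        // Report all failing cases at once.
        if (!failures.isEmpty()) {
            fail("Failing cases: " + failures);
        }
    }
}
```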
Fortunately, in JUnit, this isn't necessary. But programmers working with xUnit implementations that don't support parameterised tests may have to do it. My advice is, if you do, then consider adding it to the framework, too.
Lastly, Will has raised a good question: at what point would we consider parameterising our tests?
When we paired, we did it the old-fashioned way (working in Python) and then - as an exercise - I asked him to parameterise the tests after we'd completed the TDD exercise.
In the real world, I might have done it much earlier, when I could see the duplication that was emerging, and knowing that there'd be several more similar tests coming up.
You need to make a judgement call on whether to expend that effort or live with a bit of duplication. The more duplication there is, the easier that judgement call gets, but also the harder the refactoring gets. My tendency is to refactor early - often when I have just 2-3 examples. Look ahead and ask "how many more examples might there be that follow this pattern?"
January 20, 2013
The Mysterious Art Of Triangulation
I've started the New Year by introducing my apprentice-to-be, Will, to Test-driven Development.
I think the mental exercise will be good for learning programming languages (our first session uncovered some interesting Python quirks, for example), and I think when he starts his degree studies - which will likely involve writing at least some simple software - he'll benefit from the basic discipline of it.
I recall as an undergraduate - and as a newbie professional - trying to implement relatively simple algorithms, and all those late nights staring at the screen wondering why it wasn't working. If someone had shown me TDD before I started all that, I could have saved myself a lot of frustrating debugging time. Which, as we all know, is the antithesis of valuable drinking time.
Will has hit a pitfall that I see a lot of beginners - let's be honest, most - fall into when they start learning TDD.
He's doing FizzBuzz (and why not?), and he started something like this:
Test #1 - first element is 1.
Solution - return array [1]
Test #2 - second element is 2.
Solution - return array [1, 2]
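In code - a Java sketch, though Will was actually working in Python - solution #2 amounts to something like this:

```java
import java.util.Arrays;
import java.util.List;

public class FizzBuzzer {
    // The simplest thing that passes tests #1 and #2: a hardcoded sequence.
    // (Strings, because "Fizz" and "Buzz" will be along shortly.)
    public List<String> fizzBuzz() {
        return Arrays.asList("1", "2");
    }
}
```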
Now, this is indeed the simplest thing he could do to pass both tests. But, is solution #2 a step forward towards a more general solution? Or is it a step sideways to a longer hardcoded solution?
Fast forward a few unit tests, and we find that the solution [1,2,Fizz,4,Buzz,Fizz,7,8,Fizz,Buzz] is still simpler than a general algorithm that might produce the same sequence.
Do we keep hardcoding the sequence? When do we generalise it to an algorithm that generates the same sequence? When the sequence is made up of one more character than the algorithm?
That would be silly. If we hardcoded up to, say, 40 and then decided that an algorithm would be shorter than hardcoding up to 41, we'd have to write the entire algorithm in one go.
In TDD, our goal is to discover the design in baby steps, verifying that it works and cleaning up our code as we go.
I like to use the analogy of trying to cross a river using stepping stones. To get to the other side, you want each step to take you closer, whilst at the same time avoiding risky leaps that might see you fall into treacherous water.
To me, "forward" in TDD means closer to a general solution. If we keep adding array elements, or branches, or appending to the same string (and so on) for each new test case then that's like we're moving downstream from one stepping stone to the next, getting no closer to the opposite shore. Then we end up having to make one giant, potentially very risky leap at the end to reach it. Ideally, we'd want to end up on a stepping stone that's right next to our goal, so that final step is just another baby step.
Of course, the other danger is overconfidence. We're not so hot at judging distances, it seems. And some developers - particularly inexperienced ones - are tempted to think they can leap the river in a single bound.
The discipline of triangulation is to balance taking the smallest steps you can with making good progress. You want to get there safely, but you do want to get there.
July 8, 2012
Testing The Testers - A Vague Hiring Process
Over Sunday lunch with a tester friend today, I got to thinking about testing interviews.
There's been quite a lot of good ideas floating around recently about interviewing developers (e.g., Hibri Marzook's Pair Programming Interviews workshop at SC2012), and I've seen testers put through their paces with what are essentially developer interviews, too.
But testing is not programming - though programming may well be involved. On the principle that if you want to see if a juggler can juggle, ask to see them juggle, what kind of practical techniques could we use to put a tester through his or her paces?
What occurred to me over lunch is that there'd be three distinct areas I'd look into.
The most obvious is the tester's ability to find bugs. Bring them in (after some basic vetting to weed out the testers who, let's face it, just aren't - still too many of those about, sadly) and sit them down with a copy of some software in which there are known bugs. Then give them a fixed amount of time to find those bugs, and document them in a useful way (i.e., how to reproduce them.)
This is sort of a human variant of mutation testing. We test the tester by introducing known defects into the code and then see if they can find them.
We could make it more meaningful by introducing the bugs in places where bugs would be more likely to lurk (long/complex methods, multithreaded code accessing global variables etc) so that they could use their understanding of the relationship between code and quality to make educated guesses. You could also include an incomplete automated test suite so they could look for parts of the software that aren't being tested, where bugs are more likely to lurk. You could even be really cheeky and leave a test failing, to see if they even bother to check. You might also like to leave them a pile of user stories with points assigned to them by the customer for relative value, or feature usage statistics, to test their ability to not only find bugs, but find the most important ones first.
There's more to being a tester than finding bugs, of course. So the second thing I'd want to look into is the tester's ability to drive out the details of what a customer wants and "bridge the communication gap", as Gojko Adzic puts it.
One way I thought of might be to get a "customer" - a non-technical domain/application expert - to describe features of an existing piece of software to our candidate. The candidate can ask questions and use examples and test cases to firm up their understanding of what it should actually be like, eventually agreeing a set of acceptance test scripts for each feature with the "customer". Because this software actually exists, we can execute these tests against a running version of it, and test the tests, effectively.
Finally, these days, a tester often needs to be a programmer - and a pretty handy one at that. So my third focus would be on programming skills, probably with an emphasis on automating tests. I might ask them to write Selenium scripts for the acceptance tests they agreed for this existing piece of software, looking not only for test automation abilities, but also clean code and generally good dev instincts.
Realistically, you might be looking at a whole day to put a tester through their paces, but this could be a progression. If they can't find bugs, probably not much point moving on to the next stage, so it might only be a whole day if you're actually any good.
And then there's the whole question of team fit. Sure, they may have the technical chops, but can this person actually work well with us? Maybe round the day off, if they get through all the previous stages, with a Team Dojo with the candidate fulfilling tester duties.
So, in practice, how might I do it? I think I might run it as elimination rounds. Invite sixteen of the best candidates in to do the bug-finding exercise; select the best eight of those to do the "customer" exercise, and the best four from that to do a pair programming interview to check their dev skills; the two remaining after that participate in a Team Dojo to determine which one will be a better fit. (Those numbers are pretty arbitrary - you may be looking for several testers, for example - but that's the general idea. Whittle them down over the course of a day.)
Of course, I'm just thinking out loud. Again.
June 18, 2012
Summer Madness! Intensive, Budget-friendly TDD Master Class for £99
I'll be running an intensive and budget-friendly TDD workshop in central London on August 18th.
It's essentially key elements from the more relaxed 2-day course crammed into 8 hours, and at just £99, it's unbeatable value. Being an ex-freelancer myself, I know how hard it can be to get time off, so I've scheduled it on a Saturday so you can have your cake and eat it.
Your £99 includes refreshments, a bit of lunch and wi-fi. The training venue's a stone's throw from Waterloo Station, and easily reachable from London Bridge and other central London transport hubs.
June 10, 2012
Late Night Thoughts On "It Works For Me"
Before I retire up the Apples & Pears to Bedfordshire, I just wanted to share some thoughts on an ongoing discussion I've been having on That Twitter with Dan North (@tastapod).
Now, I'm aware that I can be overly dismissive of people making claims that aren't supported by evidence, and I feel it's important to go beyond the limitations of 140 characters to try - and probably fail, as usual - to express what I'm really thinking about all this.
To cut a long story short, Dan's been writing and speaking a lot recently about a discovery he's made that involves writing software that is not - GASP! - test-driven. Indeed, there may be no automated tests at all. And he's finding that in the context he and his colleagues are working in, not TDD-ing is sometimes better and faster at delivering value, and, presumably, at sustaining that pace of innovation for business advantage.
Dan, if you're not aware, comes very highly recommended by programmers who also come very highly recommended. If programmer kudos was PageRank, and recommendations were web links, Dan's home page would be bbc.co.uk. So, at a personal level, I'm inclined to just shrug and say "fair enough, what he said".
But I've been at this game a while (and I've even won the odd round), and my two decades programming for shiny objects and sexual favours has taught me that our industry is rife with claims.
Some are out-and-out lies. The people making them know full well it's not true, and what they're saying is designed purely to appeal to the people who are holding the purse strings - a highly suggestible bunch at the best of times.
I think it's very doubtful that Dan doesn't believe what he's telling us, from what I've heard of him. But some very genuine people, with all the best intentions, also make claims that turn out not to be true. Software is a very complicated business, and mirages are not uncommon.
I know how prone I am to succumbing to that feeling of "productivity" I get when I cut corners. It's very seductive.
My Mum used to drive a Citroen 2CV (mint green with stripes on the roof - it looked like a boiled sweet on wheels), and I remember the sheer thrill of us coasting down hills, feeling like the car could take off at any moment. We must have been doing all of 45 miles per hour.
I've discovered, from my own experiments with quality-centred practices, that when I actually look back at what's been achieved objectively, what felt fast while I was doing it can turn out to be slower in real terms.
So, my issue is this: it's not that I think Dan's misleading us, or that he's necessarily misleading himself, either. What he's discovered may well be real, and may even be reproducible.
But, right now, he's that lone parent who didn't vaccinate their children and found that their children got better. Or that they think their children got better, and that it had something to do with not vaccinating them. Maybe they did, and maybe it was.
However, before I start advising teams to not bother with the vaccinations - vaccinations whose efficacy is supported by a growing body of evidence in a wide range of situations (everything from embedded software in vending machines to labyrinthine distributed "enterprise" systems via the BBC iPlayer) - I need to see a similar body of evidence to persuade me that in some situations, skipping the jabs will be better for them.
I'd also like to understand why. I'm fairly convinced now of the causal mechanisms that link defect prevention to higher productivity, having seen so many wide studies published by the SEI, IBM, NASA and other august bodies. Taking steps to prevent issues saves more time later than it costs now. Simples.
I'm also aware of the limits of defect prevention on saving us time and money, and why those limits exist (e.g., in safety-critical software).
The same goes for the relationship between productivity and our ability to re-test our software and systems quickly, frequently and cheaply. I'm not aware of any way to achieve that other than by automating our tests, and I'm especially aware of the economic value of automated unit tests (or some automated equivalent - e.g., model checkers), having spent very little time in a debugger personally since about 2002.
It's not inconceivable that somewhere in the spectrum of quality vs. cost vs. test automation etc etc, there is an oasis that Dan's discovered of which we're all currently unaware. But if there is, then it's a tropical island in the middle of the Arctic ocean. It runs contrary to the picture that surrounds it - a picture that's still being corroborated as more and more data comes in, and for which no credible data currently exists to contradict it.
Right now, Dan's telling us he's been to this undiscovered island, and is describing it to us in vivid detail - thrilling tales of strange and exotic animals, weird and wonderful plant life and azure-blue waters lapping at golden sands. But he's yet to give us the photos, videos, or any samples of unique flora and fauna that might convince me that he wasn't actually in Fiji (that's the danger of flying without instruments). Most important of all, he needs to give us the grid reference so we can all go and find this island for ourselves.
He tells me he's in the process of doing this now, so we can try his approach on our own projects and see what we think. This is very encouraging.
My hope is that we'll finally see this mysterious Lost World for ourselves and know that he was right.
Either that or we'll confirm that he is indeed lost in Fiji.
April 25, 2012
Entrepreneurial Programming - The Sixty Four Challenge
All this talk about "lean start-ups" and "bacon entrepreneurs" (or whatever... TBH, I wasn't really paying attention) has got me thinking...
It seems that a little experiment, in the form of a challenge, might be in order. Many people - including people who should know better - continue to assert that quality and getting something to market quickly are a trade-off. It's the old "quick and dirty" school of thought.
If quick-and-dirty is the best short-term solution, then it stands to reason that in a short-term endeavour, quick-and-dirty would give you an advantage over Clean Code.
I'm not at all convinced that it would. All the evidence I've seen suggests that the opposite is true.
But I'm not here to tell you ghost stories. How could we put it to the test? Asking a sample of people to start a real tech business and run it in a certain way just for an experiment doesn't seem reasonable. We've all got better things to do with our time. Well, maybe.
But, for a big enough sample, it might be worth investing a chunk of time to answer this question - along with potentially lots of other questions about the least we can do to start a successful tech business.
Here's a rough outline of an experiment in entrepreneurial programming I've been kicking around. I'll be interested to know what folk think.
This experiment would be called THE SIXTY FOUR CHALLENGE:
We would create an artificial tech business economy. 64,000 people will be given 64 tokens to spend on tech products and services created by one or more of 64 "tech businesses".
Each tech business is a team of people who get together to create a product or service out of software (e.g., a web or smartphone app).
Each team has no more than 64 person-days (64 x 8 hours) to design, build, sell and support their product or service.
The challenge lasts 64 days from a standing start to the final reckoning. At the end of those 64 days, we would tot up how much money (tokens) each startup has made from our artificial market.
Each start-up has a seed fund of 64 tokens, which they can use to buy things like hosting and professional services from other start-ups (at a negotiated value in tokens/hour or day - so a team made up entirely of web designers could potentially win just by doing web design for other teams, which many would argue is what the web is anyway). Hours worked for other teams would not count against the maximum 64 person-days allotted to your team.
We would create special payment gateways and other tools for processing token payments and exchanging tokens between teams, sitting behind which would be an artificial bank that holds all of these accounts and provides transparency to the whole endeavour.
You can change - and even completely re-write - the code as many times as you like over the 64 days.
At the end, the final accounts would be totted up and also the source code would be evaluated, and we'd see whether cleaner code = slower start-up. My guess is we would see no clear correlation, and that taking care over code quality would not be a significant disadvantage.
What do you reckon? Answers on a postcard, please.