May 15, 2013
Legacy Code Without Automated Tests Is Not An Excuse For Less Rigour
While I'm on the subject of bad ideas when refactoring legacy code, I feel I should draw attention to what appears to be a common misunderstanding - even among us experts.
I watch a lot of screencasts where folk demonstrate how they would refactor legacy code. Typically, they start by stating bluntly that you shouldn't refactor code without automated tests.
Then they go on to do exactly that: they refactor without tests in order to make the code testable enough to write their first automated unit test - usually by introducing some kind of dependency injection.
Some excuse themselves from the need to re-test while they do these initial refactorings because they're using automated refactoring tools. I'm not quite sure how this urban myth got started, but let me burst that bubble right here.
At the time of writing, no refactoring tool is that reliable. Even if you're expert at using the tool and select all the right options for the more complex refactorings - which most of us aren't and don't - every once in a while the tool screws up our code. And when I say "once in a while", I mean regularly.
I've learned from the school of Hard Knocks to re-test my code even after using the simplest automated refactorings. Even if those tests run slowly. Even if I have to follow manual test scripts and click the buttons myself.
The reason we fear legacy code is because it is difficult to change. It's difficult to change because it's easy to break. The time to give ourselves a hall pass on regression testing is not right at the start, when the code is probably at its most brittle.
In these early stages, when our priority is probably getting fast-running automated tests (i.e., unit tests) in place to enable the kind of architectural refactoring we want to do, we must approach the code with utmost care. That means we need to apply the greatest rigour.
May 10, 2013
Making The Untestable Testable With Mocks - Resist Temptation To Bake In A Bad Design
Just a quick note before my next pairing session about using mock object frameworks to make untestable code testable.
Mocking frameworks have grown in their sophistication, for sure. But I fear they may have mutated into testing tools, rather than the design aids that their originators intended.
Say, for example, you're trying to write unit tests for some legacy code that depends on a static method which accesses the file system. We want unit tests that run quickly, and reading and writing files means slow unit tests. So we want some way to invoke the methods we want to test without them calling that static method.
Enter stage right: UberMock (or whatever you're using). UberMock solves this problem with some metaprogramming jiggery-pokery that makes it possible to specify that a mock version of a static method be invoked at runtime. We write unit tests that set up expectations on that mock static method call. That is to say: our test exposes an internal design detail - that the static method, in mock form, should be invoked.
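To make that concrete, here's a sketch of the kind of test that results. UberMock and its entire API are made up for illustration (as are OrderImporter and FileSystem) - but tools that intercept static calls like this do exist:

    [TestFixture]
    public class OrderImporterTests
    {
        [Test]
        public void ImportsOrdersFromFile()
        {
            // Set up an expectation on the static method - the test now exposes
            // the internal detail that FileSystem.ReadAllText gets called
            var files = UberMock.MockStatic(typeof(FileSystem));
            files.Expect(() => FileSystem.ReadAllText("orders.csv"))
                 .Returns("1001,Widget,2");

            var importer = new OrderImporter();
            var orders = importer.Import("orders.csv");

            Assert.AreEqual(1, orders.Count);
            files.VerifyAll(); // fails if the static call ever goes away
        }
    }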
That's a legacy code "gotcha". We now have unit tests. Hoorah! But these unit tests depend on this internal design detail. And make no mistake - it's a design flaw we'll want to get rid of later.
If we decide, after we've got some tests around it, to refactor this horrid code so that we're observing the Open-Closed Principle (the "O" in "SOLID" - meaning that classes should be open for extension but closed for modification, which isn't possible when we depend on static methods that can't be substituted with overriding implementations without the aforementioned metaprogramming jiggery-pokery), we cannot do so without rewriting our tests.
The tests we write that depend on internal design details of legacy code effectively bake in that legacy design, making refactoring doubly difficult at the very least.
If our ultimate aim is to invert that dependency on a static method, so that the code now relies on some dependency-injected abstraction, it tends to work out easier in the long run to put that abstraction in place first, and then use mocks to unit test that code.
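Here's a minimal sketch of that abstraction-first approach, with hypothetical names (IFileStore, OrderImporter) standing in for whatever your legacy code actually deals with:

    // The abstraction we put in place first
    public interface IFileStore
    {
        string ReadAllText(string path);
    }

    public class OrderImporter
    {
        private readonly IFileStore files;

        // The collaborator is dependency-injected, so tests can substitute it
        public OrderImporter(IFileStore files)
        {
            this.files = files;
        }

        public int CountOrders(string path)
        {
            // Previously a static call that hit the file system
            return files.ReadAllText(path).Split('\n').Length;
        }
    }

    // In unit tests, a hand-rolled stub - or a mock created by any ordinary
    // mocking framework - stands in for the real file system
    public class StubFileStore : IFileStore
    {
        public string ReadAllText(string path) { return "1001,Widget,2"; }
    }

Our tests now depend on the abstraction, not on the old static call, so the design can keep evolving without breaking them.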
Don't bake in a design that you'll later need to change
It's a little chicken-and-egg, I grant you. Ideally, we'd want unit tests around that code before we tried to introduce the abstraction. But how do we do that - without baking in the old design - until the abstraction's in place?
It's one of those situations where, I'm afraid, the answer is that you're going to have to be disciplined about it. There's usually no quick fix. You may have to rely on slow and cumbersome system tests for a while. Or even - gulp - manual testing.
But experience has taught me that, in the final reckoning, it can be well worth it to avoid pouring quick-drying cement on an already rigid and brittle design.
Ah, and I hear my next pairing session calling...
February 10, 2013
Parameterising Unit Tests Without Framework Support
Pairing with my apprentice-to-be, Will, on Friday highlighted a problem with some unit testing frameworks, which is that they don't all offer built-in support for parameterised tests.
This isn't a major stumbling block to parameterising our tests. Back in the days when most frameworks didn't support them, we just used parameterised methods and called them from our tests (e.g., when looping through a list of test case data.)
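Take, for illustration, a Fibonacci fixture like this - a hypothetical reconstruction in C#/NUnit style (the pairing described below was actually in Python), imagining a framework with no parameterised test support:

    // using NUnit.Framework;
    [TestFixture]
    public class FibonacciTests
    {
        [Test]
        public void FirstNumberIsZero()
        {
            Assert.AreEqual(0, Fibonacci.Number(0));
        }

        [Test]
        public void SecondNumberIsOne()
        {
            Assert.AreEqual(1, Fibonacci.Number(1));
        }

        [Test]
        public void ThirdNumberIsOne()
        {
            Assert.AreEqual(1, Fibonacci.Number(2));
        }

        [Test]
        public void FourthNumberIsTwo()
        {
            Assert.AreEqual(2, Fibonacci.Number(3));
        }
    }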
If we wanted to refactor the test fixture above into a single parameterised test without using built-in framework support, we simply have to extract the body of one of the tests into its own method and introduce parameters for the sequence index and the expected result.
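Continuing the sketch:

    [TestFixture]
    public class FibonacciTests
    {
        [Test]
        public void FibonacciSequenceStartsZeroOneOneTwo()
        {
            TestFibonacciNumber(0, 0);
            TestFibonacciNumber(1, 1);
            TestFibonacciNumber(2, 1);
            TestFibonacciNumber(3, 2);
        }

        // The extracted test body, parameterised by sequence index and expected result
        private void TestFibonacciNumber(int index, int expectedResult)
        {
            Assert.AreEqual(expectedResult, Fibonacci.Number(index));
        }
    }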
It's as easy as that, really. Well, almost.
What happens if one of our tests fails? With built-in support for parameterised tests, the framework would report which test case failed. But here, we'd only get a report of the assertion that failed, and we'd have to work backwards from that to deduce which test case it was. If multiple test cases expect the same result (e.g., the second and third Fibonacci numbers should both be 1), or if we're asserting that some condition is true instead of comparing expected and actual outcomes, it may be ambiguous which test case actually failed.
So we can add a little extra information when an assertion fails to make it clear exactly which test case we're talking about.
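Continuing the hypothetical sketch, an assertion message does the trick:

    private void TestFibonacciNumber(int index, int expectedResult)
    {
        // The assertion message now identifies the failing test case
        Assert.AreEqual(expectedResult, Fibonacci.Number(index),
            "Fibonacci number at index " + index);
    }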
Another drawback with hand-rolled parameterised tests is that, with most unit test frameworks, when an assertion fails, the test stops executing. So if we wrap up the execution of multiple test cases in one test method and the first case fails, we'll get no results for the subsequent test cases.
To overcome this, we need to go further. One solution would be, instead of calling assert() functions, to remember the result of each check and keep a rolling score. If all the cases come up green, then we call pass() at the end. If any come up red, we call fail() and report all the test cases that failed when we do. At this point, of course, we'd be edging closer and closer to writing our own parameterised unit testing framework.
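A sketch of that rolling-score idea, still with the hypothetical Fibonacci example:

    // using NUnit.Framework; using System; using System.Collections.Generic;
    [TestFixture]
    public class FibonacciTests
    {
        [Test]
        public void FibonacciSequenceStartsZeroOneOneTwo()
        {
            int[] indexes  = { 0, 1, 2, 3 };
            int[] expected = { 0, 1, 1, 2 };
            var failures = new List<string>();

            // Check every case, remembering failures instead of stopping at the first
            for (int i = 0; i < indexes.Length; i++)
            {
                int actual = Fibonacci.Number(indexes[i]);
                if (actual != expected[i])
                {
                    failures.Add(string.Format("index {0}: expected {1} but was {2}",
                        indexes[i], expected[i], actual));
                }
            }

            // Report every failing test case in one go
            if (failures.Count > 0)
            {
                Assert.Fail(string.Join("; ", failures.ToArray()));
            }
        }
    }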
Fortunately, in JUnit, this isn't necessary. But programmers working with xUnit implementations that don't support parameterised tests may have to do it. My advice is, if you do, then consider adding it to the framework, too.
Lastly, Will has raised a good question: at what point would we consider parameterising our tests?
When we paired, we did it the old-fashioned way (working in Python) and then - as an exercise - I asked him to parameterise the tests after we'd completed the TDD exercise.
In the real world, I might have done it much earlier, when I could see the duplication that was emerging, and knowing that there'd be several more similar tests coming up.
You need to make a judgement call on whether to expend that effort or live with a bit of duplication. The more duplication there is, the easier that judgement call gets, but also the harder the refactoring gets. My tendency is to refactor early - often when I have just 2-3 examples. Look ahead and ask "how many more examples might there be that follow this pattern?"
January 12, 2013
The World-Famous Legacy Code Singleton Fudge
When refactoring legacy code that relies on static methods to access external systems - for example, for data access - our first goal is usually to make the code unit-testable. Therefore, we seek to invert that dependency on a static method to make it substitutable.
I demonstrated in previous blog posts how we do this in fairly simple situations. The dance is pretty straightforward: you turn the static method into an instance method, and then do a "Find and Replace" to swap references to the class the method is declared on with "new ClassName().", so the target of the invocation is now an instance. We then give client code a way to do the old polymorphic switcheroo by injecting that instance into, say, the constructor of the class where it's being used.
As easy as cake.
What often comes up, though, is a more complex and less than ideal situation. What if our static method is being accessed by many, many classes? We'd have to introduce dependency injection into every class that uses it, and then - if those classes are some way down the call stack - into every link in the chain, to pass references from the top down to where they're used. This could be a lot of work, and I've not found a quick automated way of doing it. And what if - horror of horrors - the methods that access our static data access method are also static?
Take this example. Here's a pretty nasty data access class that offers static methods for updating and retrieving object state in an external database.
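Something along these lines, perhaps - a hypothetical reconstruction, since only the Update and Fetch method names are given:

    public class DataAccess
    {
        public static void Update(object entity)
        {
            // ...opens a connection and writes the entity's state to the database
        }

        public static object Fetch(string id)
        {
            // ...opens a connection and reads the entity's state back from the database
            return null; // placeholder for the real data access code
        }
    }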
Imagine that - for reasons best known to themselves - the developers use these methods inside every single business object in their middle tier. Like this, for instance (a made-up sketch):
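    // A hypothetical business object - note that even the business methods are static
    public class Customer
    {
        public string Name;

        // A static business method calling the static data access method -
        // the "horror of horrors" case
        public static Customer FindById(string id)
        {
            return (Customer) DataAccess.Fetch(id);
        }

        public void Save()
        {
            DataAccess.Update(this);
        }
    }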
Our goal is to get decent automated unit test assurance in place quickly, so we can then safely set about refactoring this whole can of worms properly.
In this situation, I've often used a fudge to get my tests in place. The fudge being to deliberately introduce a Singleton. (Gasp!)
I first extract a new class that contains the implementations of the Update and Fetch methods, and leave delegate methods in our original DataAccess class. Then I extract an interface from this new DataAccessImpl class (IDataAccess) so we can make those methods polymorphic.
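After those two steps we might have something like this:

    // The extracted interface, so the methods can be made polymorphic
    public interface IDataAccess
    {
        void Update(object entity);
        object Fetch(string id);
    }

    // The extracted class containing the real implementations
    public class DataAccessImpl : IDataAccess
    {
        public void Update(object entity)
        {
            // ...the real database code, moved here
        }

        public object Fetch(string id)
        {
            // ...the real database code, moved here
            return null; // placeholder
        }
    }

    // The original class keeps delegate methods, so its many callers don't change
    public class DataAccess
    {
        public static void Update(object entity)
        {
            new DataAccessImpl().Update(entity);
        }

        public static object Fetch(string id)
        {
            return new DataAccessImpl().Fetch(id);
        }
    }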
Here comes the fudge: I then introduce a static method for setting the data access implementation at runtime, so our unit tests can set it to a mock or a stub, and our production code can set it once to the real McCoy.
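A sketch of the fudge (the setter's name is my invention):

    public class DataAccess
    {
        // The fudge: a settable singleton instance hiding behind the static methods
        private static IDataAccess implementation = new DataAccessImpl();

        // Unit tests set this to a mock or a stub; production code sets it
        // once to the real McCoy (or relies on the default above)
        public static void SetImplementation(IDataAccess impl)
        {
            implementation = impl;
        }

        public static void Update(object entity)
        {
            implementation.Update(entity);
        }

        public static object Fetch(string id)
        {
            return implementation.Fetch(id);
        }
    }

The many static callers throughout the middle tier don't have to change at all, which is the whole point of the fudge.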
Yes, pretty rank. But we've made our application logic unit-testable at a relatively low expense, and can now start writing those tests that will be the safety net for unpicking this whole mess once and for all. To unpick it first and then write the tests would be too risky, in my experience.
November 21, 2012
Who's Afraid Of Legacy Code?
British society has a bit of a problem with age. We're culturally obsessed with youth, and prefer to hide away our ageing population - presumably so they don't keep reminding us that we, too, will one day get old.
This paradox leads to severe consequences for society, as we choose to ignore the facts of life and live as if old age will never come. We don't look after ourselves as well as we should, we don't plan for our futures as well as we could, and we structure our society to favour youth in many respects. That our society is gradually getting older on average, with over 65's now making up a big chunk of the population, confounds our desire to live in a youthful world. Ironically, this growing generation of senior citizens has led to successive governments pandering to the older vote at the expense of the young. Young people are now paying net for an older generation who have not adequately provided for themselves (chiefly because nobody thought people would be living as long as they are), and the "grey vote" has become such a powerful block that young adults of tomorrow can expect to be shouldering even more of that burden.
Software development has grown a similar paradox. We're obsessed with the shiny and the new, despite the fact that we're surrounded by a growing legacy of old code.
Nobody thought that the software they were writing back in the 90's, or the 80's, or 70's or 60's, or even the 50's, would still be in use today. And so, they didn't plan for the future we now find ourselves in.
When you open up a book or a magazine or read a blog post about software development, chances are it will be about writing new code. Aside from some noteworthy exceptions like Michael Feathers' Working Effectively With Legacy Code, most coverage of software development is about programming on a blank sheet.
This has a two-fold effect: firstly, most developers lack the skills and the disciplines needed to maintain or add value to existing software. And secondly, most software is not written with a potentially long life in mind.
Not only do developers lack the skills for legacy code, they have a marked tendency to run a mile in the opposite direction from acquiring those skills. I run a training company, so I know how low the demand is for learning them.
Employers, too, fail to recognise the need for and the value of legacy code skills. They rarely ask for them when hiring developers, and tend not to support developers seeking to improve things in those areas. When did you last see a job advertisement asking for experience of restoring or rehabilitating old, knackered code?
This is despite the fact that most developers are working on legacy code, and that their inability to add value to it and respond to the changing needs of the business and the end users is often cited as a major barrier to business competitiveness by managers.
As Ivan Moore recently put it, legacy code is "the elephant in the room". It comprises the bulk of the work and the overall cost of software development, but occupies a minimal slice of our thoughts and our care.
In the last decade, Test-driven Development has become de rigueur. And a jolly good thing, too.
But, as I see time after time, it's entirely possible to produce legacy code doing TDD. And even if you master writing clean, maintainable code, what about all the code you've already written that's out there serving users right now? Do we just write those huge investments off to experience?
No. That would be silly, and immensely wasteful.
If only for the learning experience of dealing with the consequences of design decisions - I've yet to meet a genuinely great developer who hasn't devoted significant time to cleaning up old code - it's high time for us to really get to grips with legacy code.
October 28, 2012
Refactoring Legacy Code #2 - Making Web Apps More Unit-Testable
Following on from that last post about refactoring legacy classes that depend on external systems (like a database) - which has been read by literally dozens of people, and that's no idle boast - I also get asked a lot about making web applications unit-testable.
Taking classic ASP.NET as a case in point - and again using a toy but typical example - the problem is also external dependencies. When we reference ASP.NET objects like Session and Request, we tie our code to the ASP.NET process and the lifecycle of our web forms. We can't just create an instance of a web form's class and start invoking methods on the controls on our page, because outside of ASP.NET, those objects won't be there.
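Imagine a web form along these lines - a hypothetical reconstruction, though the post does name the DisplayCustomerWithOrders logic, the Session dependency and the DataRepository; the control names are made up:

    public partial class CustomerOrdersPage : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Business logic tangled up with ASP.NET dependencies: Session,
            // and controls that only exist inside the web form's lifecycle
            string customerId = (string) Session["customerId"];
            Customer customer = DataRepository.GetCustomer(customerId);

            customerName.Text = customer.Name;
            foreach (Order order in customer.Orders)
            {
                ordersList.Items.Add(order.Description);
            }
        }
    }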
Our goal in making our legacy code unit-testable is to be able to test as much of the logic of our app as possible quickly and effectively, and to do this we need to isolate as much code as we can from external dependencies like these.
I'm a big believer that server pages and web forms should do as little as possible. Really, they should just be a very thin film of glue that binds the logic of user interactions to the display - which, if we think about it, is only marginally about Session and Request and HTML controls - with the meat and potatoes separated away from knowledge of those details.
I might start to refactor this by extracting the meat and potatoes, complete with ASP.NET dependencies, into its own method.
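Like so, continuing the hypothetical sketch:

    protected void Page_Load(object sender, EventArgs e)
    {
        DisplayCustomerWithOrders();
    }

    // The meat and potatoes, extracted - ASP.NET dependencies and all
    private void DisplayCustomerWithOrders()
    {
        string customerId = (string) Session["customerId"];
        Customer customer = DataRepository.GetCustomer(customerId);

        customerName.Text = customer.Name;
        foreach (Order order in customer.Orders)
        {
            ordersList.Items.Add(order.Description);
        }
    }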
Next, if I'm looking for some way to write the order data to the page without directly referencing any of its controls, I need to extract methods through which I can delegate this work.
Now, for the magic. You'll like this. Not a lot, but you'll like it. If I make these helper methods for writing customer data to the web form public, I can extract an interface on the form's class, and have our controlling method speak to the form through that interface.
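Roughly like this - the view interface's name is my invention:

    // The extracted view abstraction
    public interface ICustomerOrdersView
    {
        void SetCustomerName(string name);
        void AddOrder(string description);
    }

    public partial class CustomerOrdersPage : System.Web.UI.Page, ICustomerOrdersView
    {
        // The public helper methods that write to the page
        public void SetCustomerName(string name)
        {
            customerName.Text = name;
        }

        public void AddOrder(string description)
        {
            ordersList.Items.Add(description);
        }

        private void DisplayCustomerWithOrders()
        {
            ICustomerOrdersView view = this; // speak to the form through the interface
            string customerId = (string) Session["customerId"];
            Customer customer = DataRepository.GetCustomer(customerId);

            view.SetCustomerName(customer.Name);
            foreach (Order order in customer.Orders)
            {
                view.AddOrder(order.Description);
            }
        }
    }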
Next, we need to tackle that reference to Session. There are many different ways of breaking this dependency, but the simplest here might be to hide it behind another extracted helper method as a stepping stone to where I want to go next.
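The stepping stone might look like this:

    private object GetSessionVariable(string name)
    {
        return Session[name]; // the only line that still touches ASP.NET's Session
    }

    private void DisplayCustomerWithOrders()
    {
        ICustomerOrdersView view = this;
        string customerId = (string) GetSessionVariable("customerId");
        Customer customer = DataRepository.GetCustomer(customerId);

        view.SetCustomerName(customer.Name);
        foreach (Order order in customer.Orders)
        {
            view.AddOrder(order.Description);
        }
    }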
Now, I could just extract another interface on our form's class and pass that in. But I'm guessing we may want a shared abstraction we can reuse in a wider set of situations. Basically, imagine we don't want to implement SetSessionVariable (and, presumably, GetSessionVariable) on every web form. So, I'm going to extract a new class, and then extract an interface on that class.
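Perhaps along these lines (IUserSession and AspNetUserSession are hypothetical names):

    // The shared abstraction for session state
    public interface IUserSession
    {
        object GetSessionVariable(string name);
        void SetSessionVariable(string name, object value);
    }

    // The extracted class wrapping the real ASP.NET session (requires System.Web)
    public class AspNetUserSession : IUserSession
    {
        public object GetSessionVariable(string name)
        {
            return HttpContext.Current.Session[name];
        }

        public void SetSessionVariable(string name, object value)
        {
            HttpContext.Current.Session[name] = value;
        }
    }

    // ...and the form's helper now delegates to the abstraction:
    private IUserSession userSession = new AspNetUserSession();

    private object GetSessionVariable(string name)
    {
        return userSession.GetSessionVariable(name);
    }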
Now we have a DisplayCustomerWithOrders method that depends only on abstractions for Session and for the web form - importantly, abstractions we control.
Next, I would extract this method into its own class. If you like, we can call it a "controller". (Let's make that one sacrifice to appease the gods of enterprise architecture.)
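Giving us, roughly:

    public class CustomerController
    {
        public void DisplayCustomerWithOrders(IUserSession userSession, ICustomerOrdersView view)
        {
            string customerId = (string) userSession.GetSessionVariable("customerId");
            Customer customer = DataRepository.GetCustomer(customerId);

            view.SetCustomerName(customer.Name);
            foreach (Order order in customer.Orders)
            {
                view.AddOrder(order.Description);
            }
        }
    }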
Now we're really getting somewhere. As it stands we could move CustomerController into a new .NET library, along with the interfaces it depends on, and this would all be unit-testable without the need to be running in the ASP.NET process.
We've got a bit of tweaking to do first, though. For starters, if we follow the rule (not blindly, but with sound reason) that objects should be born with their collaborators, then let's refactor CustomerController along those lines, so any other controller methods we add can access the userSession and the view.
And while we're about it, we should make it possible for us to inject our DataRepository, so we can write unit tests that won't hit a real database.
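After both tweaks, the controller might look like this - assuming an IDataRepository abstraction along the lines of the previous post, here with a hypothetical GetCustomer method:

    public class CustomerController
    {
        private readonly IUserSession userSession;
        private readonly ICustomerOrdersView view;
        private readonly IDataRepository dataRepository;

        // Born with its collaborators
        public CustomerController(IUserSession userSession, ICustomerOrdersView view,
                                  IDataRepository dataRepository)
        {
            this.userSession = userSession;
            this.view = view;
            this.dataRepository = dataRepository;
        }

        public void DisplayCustomerWithOrders()
        {
            string customerId = (string) userSession.GetSessionVariable("customerId");
            Customer customer = dataRepository.GetCustomer(customerId);

            view.SetCustomerName(customer.Name);
            foreach (Order order in customer.Orders)
            {
                view.AddOrder(order.Description);
            }
        }
    }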
We now have a controller that's isolated from the front and the back end of this application, and can be unit-tested using, for example, mock objects to check that it calls for the right customer and tells the view to set the right customer field and order values on the GUI.
A little bit of clean-up in our web form's class, just to tie up the loose ends...
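Something like this, perhaps:

    public partial class CustomerOrdersPage : System.Web.UI.Page, ICustomerOrdersView
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // The form is now just a thin film of glue wiring up the controller
            var controller = new CustomerController(
                new AspNetUserSession(), this, new DataRepository());
            controller.DisplayCustomerWithOrders();
        }

        public void SetCustomerName(string name)
        {
            customerName.Text = name;
        }

        public void AddOrder(string description)
        {
            ordersList.Items.Add(description);
        }
    }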
The observant among you will have noticed that our refactored ASP.NET web form class is not smaller than it was. This is because my example is very simple in terms of business and control logic, and also because we only had one event to deal with. If this web form had multiple event handlers, and our business logic was more sophisticated, like in a real application, then the ratio of unit-testable code to web form code would normally start to tip in our favour.
It's often feasible for 90% or more of our code to end up in unit-testable classes when we abstract away the external stuff like GUIs and databases, and make them substitutable for testing and other purposes.
Again, while all this refactoring was going on, I was disciplined enough to run a basic Selenium test script after each individual step to make sure the app was still working. But at the earliest opportunity, I would start writing unit tests to check the logic. Selenium's dandy and all, but when you have 10,000 business rules to check, testing them through a web browser requires a lot of down-time.
October 27, 2012
Refactoring Legacy Code #1 - Making Classes Unit-Testable
A question that often comes up is "how can we get unit tests around our legacy code?"
The problem is usually one of dependencies that make it impossible to separate your code from external software, such as database servers and web/app servers, so your code won't work unless those external systems are there.
Here's a toy - but typical - example, inspired by some refactoring I worked with a client on recently:
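    // A hypothetical reconstruction - the post names CustomerServices and
    // DataRepository; the method names and signatures here are guesses.
    public class CustomerServices
    {
        public Customer GetCustomer(string id)
        {
            // A static call that drags in the database
            return (Customer) DataRepository.Fetch(id);
        }

        public void SaveCustomer(Customer customer)
        {
            DataRepository.Update(customer);
        }
    }

    public class DataRepository
    {
        public static object Fetch(string id)
        {
            // ...connects to the database and reads the object's state
            return null; // placeholder for the real data access code
        }

        public static void Update(object entity)
        {
            // ...connects to the database and writes the object's state
        }
    }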
We want to write unit tests for our CustomerServices class, but the calls to the static data access methods on DataRepository - which, let's presume, involve a visit to a database of some kind - mean that we can't test the code unless that database is there. We could write tests for the code as it is, but those tests will run much slower and we may have to run database scripts and wotnot to set up test data.
To write unit tests around CustomerServices that will run entirely in this application's process and won't involve external databases, we need to refactor the code so that we can substitute some kind of test double - e.g., a stub - where we're currently invoking static methods.
While we're about it, can you see why I tend to favour instance methods by default?
We can achieve this in several steps. First, let's turn these static methods into instance methods. (They're stateless, so it's pretty straightforward. A find-and-replace in both the CustomerServices module and the DataRepository will do the trick. Replace "static " with "", and in CustomerServices, replace "DataRepository" with "new DataRepository()".)
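After the find-and-replace, the sketch looks like this:

    public class CustomerServices
    {
        public Customer GetCustomer(string id)
        {
            // The target of the invocation is now an instance
            return (Customer) new DataRepository().Fetch(id);
        }

        public void SaveCustomer(Customer customer)
        {
            new DataRepository().Update(customer);
        }
    }

    public class DataRepository
    {
        public object Fetch(string id)
        {
            // ...as before, minus the "static"
            return null; // placeholder
        }

        public void Update(object entity)
        {
            // ...as before, minus the "static"
        }
    }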
Next, because my end goal is to stub DataRepository in my tests, I want to abstract it for those purposes. So I'm going to extract an interface.
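Giving us:

    public interface IDataRepository
    {
        object Fetch(string id);
        void Update(object entity);
    }

    public class DataRepository : IDataRepository
    {
        public object Fetch(string id) { return null; /* real data access code */ }
        public void Update(object entity) { /* real data access code */ }
    }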
Next, I want to inject instances of IDataRepository into CustomerServices, rather than having them created internally. I could introduce them as parameters to both methods, but that seems like a kind of duplication. Better, I think, that CustomerServices should be born with its collaborators, so I'm going to inject it in through the constructor.
I'll do this in two steps. First, introduce a field of type IDataRepository that's initialised in the constructor. Then introduce it as a constructor parameter. A doddle in Resharper.
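The end result, sketched out:

    public class CustomerServices
    {
        private readonly IDataRepository dataRepository;

        // Step 1 gave us the field; step 2 makes it a constructor parameter
        public CustomerServices(IDataRepository dataRepository)
        {
            this.dataRepository = dataRepository;
        }

        public Customer GetCustomer(string id)
        {
            return (Customer) dataRepository.Fetch(id);
        }

        public void SaveCustomer(Customer customer)
        {
            dataRepository.Update(customer);
        }
    }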
This is what we call a dependency inversion. Our high-level module CustomerServices depended on low-level details. Now it depends only on an abstraction, IDataRepository.
So I can write unit tests that pass in a stub implementing IDataRepository, allowing me to inject test data meaningfully, and the web service that uses CustomerServices can pass in a real DataRepository that will connect it to the database.
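For example (NUnit-style, with a hand-rolled stub - the test and stub names are mine):

    public class StubDataRepository : IDataRepository
    {
        public object ObjectToFetch;

        public object Fetch(string id) { return ObjectToFetch; }
        public void Update(object entity) { /* no-op */ }
    }

    [TestFixture]
    public class CustomerServicesTests
    {
        [Test]
        public void GetCustomerReturnsTheCustomerFetchedFromTheRepository()
        {
            var expected = new Customer();
            var stub = new StubDataRepository { ObjectToFetch = expected };
            var services = new CustomerServices(stub);

            Assert.AreSame(expected, services.GetCustomer("12345"));
        }
    }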
Once I've got some tests around what Michael Feathers calls an "inflection point" in his excellent book Working Effectively With Legacy Code, it's safer to do more fine-grained refactorings on the internal design.
Was it safe to do the refactorings I did to get those tests in place in the first place? This is where the chicken meets its own egg, so to speak.
Our code couldn't be unit tested because we needed to refactor it to make that possible. But it's not safe to refactor without testing throughout to make sure we haven't borked the software.
What I would do at this delicate stage is choose from a number of potential options to assure myself my code was still working.
I could, for example, have written those unit tests first, and just taken the hit that it would involve trips to a database until I could remove that dependency.
Or I could have chosen a higher-level inflection point and written some automated tests at that level. Often, this means system tests (e.g., GUI), or unit tests that call remote web services.
Or, if you don't have the skills or the tools necessary to automate system tests, the last resort might be - gasp! - manual testing. Yes, sometimes we just have to run the app and click some buttons.
My advice is, if you are going to do manual testing, you need to be extra disciplined about it. Write proper scripts, choose real and meaningful test data, and be vigilant as to the results, down to the smallest detail.
Yes, it's a pain in the behind. But that's why we wanted to get some unit tests in there, right?
In the next post, I'll touch on a common problem in web apps - dependencies on HTTP session and application objects.
August 22, 2012
Intensive Refactoring Workshop, London, Oct 20th
After the success of Saturday's intensive and budget-friendly TDD training workshop, I've decided to organise a follow-up to help folk get to grips with the extremely important discipline of refactoring.
Codemanship's intensive refactoring workshop will be on Saturday Oct 20th in central London, with an unbeatable early bird price of just £99.
Now, I know refactoring isn't considered as sexy as TDD or SOLID, and you may be thinking "do I need this?" Trust me. You do. TDD isn't sustainable without it, and what's the point in knowing your OO design needs fixing if you don't have the skills to fix it?
Refactoring is the "secret sauce" of Agile Software Development. Without strong refactoring muscles, your code quickly becomes complicated, hard to read and difficult to change.
I can immediately spot a good developer by their nose for design problems and the maturity of their refactoring skills. Most don't have any worth speaking of.
If you're only looking to impress potential employers, then this might not be the course for you. But if you want to be a better software developer, it's the most important course we do.
You can find out more and register here.
April 25, 2012
Entrepreneurial Programming - The Sixty Four Challenge
All this talk about "lean start-ups" and "bacon entrepreneurs" (or whatever... TBH, I wasn't really paying attention) has got me thinking...
It seems that a little experiment, in the form of a challenge, might be in order. Many people - including people who should know better - continue to assert that quality and getting something to market quickly are a trade-off. It's the old "quick and dirty" school of thought.
If quick-and-dirty is the best short-term solution, then it stands to reason that in a short-term endeavour, quick-and-dirty would give you an advantage over Clean Code.
I'm not at all convinced that it would. All the evidence I've seen suggests that the opposite is true.
But I'm not here to tell you ghost stories. How could we put it to the test? Asking a sample of people to start a real tech business and run it in a certain way just for an experiment doesn't seem reasonable. We've all got better things to do with our time. Well, maybe.
But, for a big enough sample, it might be worth investing a chunk of time to answer this question - along with potentially lots of other questions about the least we can do to start a successful tech business.
Here's a rough outline of an experiment in entrepreneurial programming I've been kicking around. I'll be interested to know what folk think.
This experiment would be called THE SIXTY FOUR CHALLENGE:
We would create an artificial tech business economy. 64,000 people would be given 64 tokens to spend on tech products and services created by one or more of 64 "tech businesses".
Each tech business is a team of people who get together to create a product or service out of software (e.g., a web or smartphone app).
Each team has no more than 64 person-days (64 x 8 hours) to design, build, sell and support their product or service.
The challenge lasts 64 days from a standing start to the final reckoning. At the end of those 64 days, we would tot up how much money (tokens) each startup has made from our artificial market.
Each start-up has a seed fund of 64 tokens, which they can use to buy things like hosting and professional services from other start-ups (at a negotiated value in tokens per hour or day - so a team made up entirely of web designers could potentially win just by doing web design for other teams, which many would argue is what the web is anyway). Hours worked for other teams would not count against the maximum 64 person-days allotted to your team.
We would create special payment gateways and other tools for processing token payments and exchanging tokens between teams, sitting behind which would be an artificial bank that holds all of these accounts and provides transparency to the whole endeavour.
You can change - and even completely re-write - the code as many times as you like over the 64 days.
At the end, the final accounts would be totted up and also the source code would be evaluated, and we'd see whether cleaner code = slower start-up. My guess is we would see no clear correlation, and that taking care over code quality would not be a significant disadvantage.
What do you reckon? Answers on a postcard, please.
New Screencast - TDD As If You Meant It
In this short cast I illustrate what I believe is meant by "TDD as if you meant it" (props to Keith Braithwaite, who invented this workshop format for SC2009).
DISCLAIMER: I am not saying "don't do any up-front design at all". This is an exercise to stretch your refactoring and feedback-driven design muscles.