January 20, 2016

Learn TDD with Codemanship

Developer Archetypes #29: The Serial Monogamist

"I'm a great husband. I must be: I've been married ten times." Or so the joke goes.

And so it is with some software developers, who hop and skip in blissful ignorance from "greenfield project" to "greenfield project", blind to the consequences of their decisions because they never stick around long enough to witness them.

But the fact is that most developers spend most of their time living with those consequences, working on legacy code. It's estimated that we spend anywhere between 50% and 80% of our time just reading code. Those developers who have managed to engineer careers that sidestep legacy code can often be oblivious to the profound importance of things like readability, the benefits of fast-running automated tests, and other factors that can make changing legacy code much, much easier.

Read all you like about design patterns and OOA/D and "enterprise architecture" - if only to impress and intimidate in meetings - but I posit that, until you've lived with the consequences of your design decisions, you know nothing about design. Well, nothing that makes a difference.

If you are one of those developers who never have to look back, then the next time an employer or recruiter approaches you with a "shitty legacy code project", stop to consider whether it might actually be a great learning opportunity in disguise.

March 1, 2015

Continuous Inspection at NorDevCon

On Friday, I spent a very enjoyable day at the Norfolk developers' conference NorDevCon (do you see what they did there?) It was my second time at the conference, having given the opening keynote last year, and it's great to see it going from strength to strength (attendance up 50% on 2014), and to see Norwich and Norfolk being recognised as an emerging tech hub that's worthy of inward investment.

I was there to run a workshop on Continuous Inspection, and it was a good lark. You can check out the slides, which probably won't make a lot of sense without me there to explain them - but come along to CraftConf in Budapest this April or SwanseaCon 2015 in September and I'll answer your questions.

You can also take a squint at (or have a play with) some code I knocked up in C# to illustrate a custom FxCop code rule (Feature Envy) to see how I implemented the example from the slides in a test-driven way.

I'm new to automating FxCop (and an infrequent visitor to .NET Land), so please forgive any naivety. Hopefully you get the idea. The key things to take away are: you need a model of the code (thanks Microsoft.Cci.dll), you need a language to express rules against that model (thanks C#), and you need a way to drive the implementation of rules by writing executable tests that fail (thanks NUnit). The fun part is turning the rule implementation on its own code - eating your own dog food, so to speak. It throws up all sorts of test cases you didn't think of. It's a work in progress!
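
To make those three ingredients concrete, here's a toy sketch in Java - purely illustrative, not the FxCop/Microsoft.Cci implementation from the workshop, and all the names (Method, FeatureAccess, FeatureEnvyRule) are invented for the example. The model is a method plus the features it accesses; the rule flags a method that uses another class's features more than its own.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A tiny "model of the code": which class's feature is being accessed.
class FeatureAccess {
    final String declaringClass;
    FeatureAccess(String declaringClass) { this.declaringClass = declaringClass; }
}

class Method {
    final String ownClass;
    final List<FeatureAccess> accesses;
    Method(String ownClass, List<FeatureAccess> accesses) {
        this.ownClass = ownClass;
        this.accesses = accesses;
    }
}

// The rule, expressed against the model: a method is "envious" if it
// uses the features of some other class more than those of its own.
class FeatureEnvyRule {
    boolean check(Method method) {
        Map<String, Integer> counts = new HashMap<>();
        for (FeatureAccess access : method.accesses)
            counts.merge(access.declaringClass, 1, Integer::sum);
        int own = counts.getOrDefault(method.ownClass, 0);
        for (Map.Entry<String, Integer> e : counts.entrySet())
            if (!e.getKey().equals(method.ownClass) && e.getValue() > own)
                return true;
        return false;
    }
}
```

In the test-driven spirit of the workshop, the rule itself would be grown from failing tests asserting which model instances it should and shouldn't flag.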

I now plan, before CraftConf, to flesh the project out a bit with 2-3 more example custom rules.

Having enjoyed a catch-up with someone who just happens to be managing the group at Microsoft who are working on code analysis tools, I think 2015-2016 is going to see some considerable ramp-up in interest as the tools improve and integration across the dev lifecycle gets tighter. If Continuous Inspection isn't on your radar today, you may want to put it on your radar for tomorrow. It's going to be a thing.

Right now, though, Continuous Inspection is very much a niche pastime. An unscientific straw poll on social media, plus a trawl of a couple of UK job sites, suggests that fewer than 1% of teams are doing automated code analysis at all.

I predicted a few years ago that, as computers get faster and code gets more complex, frequent testing of code quality using automated tools is likely to become more desirable and more do-able. I think we're just on the cusp of that new era today. Today, code quality is an ad hoc concern relying on hit-and-miss practices like pair programming, where many code quality issues often get overlooked by a pair who have 101 other things to think about, and code reviews, where issues - if they get spotted at all in the to-and-fro - are flagged up long after anybody is likely to do anything about them.

In related news, after much discussion and braincell-wrangling, I've chosen the name for the conference that will be superseding Software Craftsmanship 20xx later this year (because craftsmanship is kind of done now as a meme). Watch this space.

February 9, 2015

Mock Abuse: How Powerful Mocking Tools Can Make Code Even Harder To Change

Conversation turned today to that perennial question about mock abuse; namely, that there are some things mocking frameworks enable us to do that we probably shouldn't.

In particular, as frameworks have become more powerful, they've made it possible for us to substitute the un-substitutable in our tests.

Check out this example:

Because Orders invokes the static database access method getAllOrders(), it's not possible for us to use dependency injection to make it so we can unit test Orders without hitting the database. Boo! Hiss!
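The original post showed the code as a screenshot; a minimal reconstruction in Java might look like this (the Order fields, the totalValue() method and the canned data standing in for the database call are my own inventions for illustration):

```java
import java.util.Arrays;
import java.util.List;

class Order {
    final double amount;
    Order(double amount) { this.amount = amount; }
}

class CustomerData {
    // Static method: in the real code, this would hit the database.
    static List<Order> getAllOrders() {
        // Imagine a JDBC call here; canned data stands in for it.
        return Arrays.asList(new Order(10.0), new Order(25.0));
    }
}

class Orders {
    // The call to the static method is hard-wired: there's no way to
    // inject a substitute, so every test of totalValue() goes through
    // CustomerData - and, in the real thing, through the database.
    double totalValue() {
        double total = 0;
        for (Order o : CustomerData.getAllOrders()) total += o.amount;
        return total;
    }
}
```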

Along comes our mocking knight in shining armour, enabling me to stub out that static method to give a test-specific response:

Problem solved. Right?

Well, for now, maybe yes. But the mocking tool has not solved the problem that I still couldn't substitute CustomerData.getAllOrders() in the actual design if I wanted to (say, to use a different kind of back-end data store or a web service). So it's solved the "how do I unit test this?" problem, but not in a way that buys me any flexibility or solves the underlying design problem.

If anything, it's made things a bit worse. Now, if I want to refactor Orders to make the database back end swappable, I've got a bunch of test code that also depends on that static method - and arguably in a bigger way, because even more code now depends on that internal detail. If you catch my drift.

I warn very strongly against using tools and techniques like these to get around inherent internal dependency problems. When it comes to refactoring - and what's the point in having fast-running unit tests if we can't refactor? - all that extra test code can actually bake in the design problems.

Multiply this one toy example by 1,000 to get the scale at which I sometimes see this in real code bases. This approach can make rigid and brittle designs even more rigid and brittle. In the long term, it's better to make the code unit-testable by fixing the dependency problem, even if this means living with slow-running (or even - gasp! - manual) tests for a while.
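
Fixing the dependency problem might look like this sketch (the OrderSource name is my own invention): Orders asks an injected collaborator for its data, so a test can hand it a stub while production code hands it a database-backed - or web service-backed - implementation.

```java
import java.util.List;

class Order {
    final double amount;
    Order(double amount) { this.amount = amount; }
}

// The swappable abstraction: a database-backed implementation, a web
// service client, or a test stub can all live behind it.
interface OrderSource {
    List<Order> getAllOrders();
}

class Orders {
    private final OrderSource source;

    // Dependency injection: the caller decides where orders come from.
    Orders(OrderSource source) { this.source = source; }

    double totalValue() {
        double total = 0;
        for (Order o : source.getAllOrders()) total += o.amount;
        return total;
    }
}
```

A unit test can now pass in a stub (even a lambda, since OrderSource has a single method) with no mocking framework gymnastics required.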

February 2, 2015

What's The Problem With Static Methods?

Quick brain dump before bed.

Static methods.

When I'm pairing with people, quite often they'll declare a method as static. And I'll ask "Why did we make that static?" And they'll say "Well, because it doesn't have any state" or "It'll be faster" or "So I can invoke it without having to write new WhateverClassName()".

And then, sensing my disapproval as I subtly bang my head repeatedly against the desk and scream "Why? Why? Why?!", they ask me "What's the problem with static methods?"

I've learned, over the years, not to answer this question using words (or flags or interpretive dance). It's not that it's difficult to explain. I can do it in one made-up word: composability.

If a method's static, it's not possible to replace it with a different implementation without affecting the client invoking it. Because the client knows exactly which implementation it's invoking.

Instance methods open up the possibility of substituting a different implementation, and it turns out that flexibility is the whole key to writing software with great composability.

Composability is very important when we want our code to accommodate change more easily. It's the essence of Inversion of Control: the ability to dynamically rewire the collaborators in a software system so that overall behaviour can be composed from the outside, and even changed at runtime.

A classic example of this is the need to compose objects under test with fake collaborators (e.g., mock objects or stubs) so that we can test them in isolation.

You may well have come across code that uses static methods to access external data, or web session state. If we want to test the logic of our application without hitting the file system or a database or having to run that code inside a web server, then those static methods create a real headache for us. They'd be the first priority to refactor into instance methods on objects that can be dependency injected into the objects we want to unit test.
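
A minimal Java sketch of that refactoring - all the names here (UserSession, Greeting, FakeSession) are invented for illustration:

```java
// Before: a static method ties the logic to real web session state.
//     class Greeting {
//         String message() { return "Hello, " + WebSession.currentUserName(); }
//     }
// After: the session becomes an instance behind an interface that can
// be dependency-injected - and therefore faked in a unit test.
interface UserSession {
    String currentUserName();
}

class Greeting {
    private final UserSession session;
    Greeting(UserSession session) { this.session = session; }

    String message() { return "Hello, " + session.currentUserName(); }
}

// In a unit test, a fake session replaces the real one - no web server,
// no session store, just fast-running logic.
class FakeSession implements UserSession {
    public String currentUserName() { return "Alice"; }
}
```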

The alternatives are high maintenance solutions: fake in-memory file systems, in-memory databases, fake web application servers, and so on. The dependency injection solution is much simpler and much cleaner, and in the long run, much, much cheaper.

But unit testability is just the tip of the iceberg. The "flex points" - to use a term coined in Steve Freeman and Nat Pryce's excellent book Growing Object-Oriented Software, Guided By Tests - we introduce to make our objects more testable also tend to make our software more composable in other ways, and therefore more open to accommodating change.

And if the static method doesn't have external dependencies, the unit testing problem is a red herring. It's the impact on composability that really hurts us in the long run.

And so, my policy is that methods should be instance methods by default, and that I would need a very good reason to sacrifice that composability for a static method.

But when I explain this to people whose policy is to make methods static whenever they don't need access to instance features, they usually don't believe me.

What I do instead is pair with them on some existing code that uses static methods. The way static methods can impede change and complicate automated testing usually reveals itself quite quickly.

So, the better answer to "What's the problem with static methods?" is "Write some unit tests for this legacy code and then ask me again."

* And, yes, I know that in many languages these days, pointers to static functions can be dependency-injected into methods just like objects, but that's not what I'm talking about here.

November 19, 2014

In 2015, I Are Be Mostly Talking About... Continuous Inspection

Just a quick FYI, for event organisers: after focusing this year on software apprenticeships, in 2015 I'll be focusing on Continuous Inspection.

A critically overlooked aspect of Continuous Delivery is the need to maintain the internal quality of our software to enable us to sustain the pace of innovation. Experience teaches us that Continuous Delivery is not sustainable without Clean Code.

Traditional and Agile approaches to maintaining code quality, like code reviews and Pair Programming, have shown themselves to fall short of the level of rigour teams need to apply. While we place great emphasis on automated testing to ensure functional quality, we fall back on ad hoc and highly subjective approaches for non-functional quality, with predictable results.

Just as with functional bugs, code quality "bugs" are best caught early, and for this we find we need some kind of Continuous Testing approach to raise the alarm as soon after code smells are introduced as possible.

Continuous Inspection is the missing discipline in Continuous Delivery. It is essentially continuous non-functional testing of our code to ensure that we will be able to change it later.

In my conference tutorials, participants will learn how to implement Continuous Inspection using readily available off-the-shelf tools like Checkstyle, Simian, Emma, Java/NDepend and Sonar, as well as rigging up our own bespoke code quality tests using more advanced techniques with reflection and parser generators like ANTLR.

They will also learn about key Continuous Inspection practices that can be used to more effectively manage the process and deliver more valuable results, like Non-functional Stories, Clean Code Check-ins, Build Inspections and Rising Tides (a practice that can be applied to incrementally improving the maintainability of legacy code.)

If you think your audience might find this interesting, drop me a line. I think this is an important and undervalued practice, and want to reach as many developers as possible in 2015.

May 15, 2013

Legacy Code Without Automated Tests Is Not An Excuse For Less Rigour

While I'm on the subject of bad ideas when refactoring legacy code, I feel I should draw attention to what appears to be a common misunderstanding - even among us experts.

I watch a lot of screencasts where folk demonstrate how they would refactor legacy code. Typically, they start by stating bluntly that you shouldn't refactor code without automated tests.

Then they go on to do exactly that so that they can write their first automated unit test in order to make the code initially testable - usually to introduce some kind of dependency injection.

Some excuse themselves from the need to re-test while they do these initial refactorings because they're using automated refactoring tools. I'm not quite sure how this urban myth got started, but let me burst that bubble right here.

At the time of writing, no refactoring tool is that reliable. Even if you're expert at using the tool, and at selecting all the right options for more complex refactorings - which most of us aren't - every once in a while the tool screws up our code. And when I say "once in a while", I mean regularly.

I've learned from the school of Hard Knocks to re-test my code even after using the simplest automated refactorings. Even if those tests run slowly. Even if I have to follow manual test scripts and click the buttons myself.

The reason we fear legacy code is because it is difficult to change. It's difficult to change because it's easy to break. The time to be giving ourselves a hall pass to excuse ourselves from regression testing is not right at the start, when the code is probably at its most brittle.

In these early stages, when our priority is probably getting fast-running automated tests (i.e., unit tests) in place to enable the kind of architectural refactoring we want to do, we must approach the code with utmost care. That means we need to apply the greatest rigour.

May 10, 2013

Making The Untestable Testable With Mocks - Resist Temptation To Bake In A Bad Design

Just a quick note before my next pairing session about using mock object frameworks to make untestable code testable.

Mocking frameworks have grown in their sophistication, for sure. But I fear they may have mutated into testing tools, rather than the design aids that their originators intended.

Say, for example, you're trying to write unit tests for some legacy code that depends on a static method which accesses the file system. We want unit tests that run quickly, and reading and writing files means slow unit tests. So we need some way to invoke the methods we want to test without them calling that static method.

Enter stage right: UberMock (or whatever you're using). UberMock solves this problem with some metaprogramming jiggery-pokery that makes it possible to specify that a mock version of a static method be invoked at runtime. We write unit tests that set up expectations on that mock static method call. That is to say, our tests assert an internal detail: that the static method - in mock form - gets invoked.

That's a legacy code "gotcha". We now have unit tests. Hoorah! But these unit tests depend on this internal design detail. And make no mistake - it's a design flaw we'll want to get rid of later.

If we decide, after we've got some tests around it, to refactor this horrid code so that we're observing the Open-Closed Principle (the "O" in "SOLID" - meaning that classes should be open for extension but closed for modification, which is not possible when we depend on static methods that can't be substituted with overridden implementations without the aforementioned metaprogramming jiggery-pokery), we cannot do so without rewriting our tests.

The tests we write that depend on internal design details of legacy code effectively bake in that legacy design, making refactoring doubly difficult at the very least.

If our ultimate aim is to invert that dependency on a static method, so that the code now relies on some dependency-injected abstraction, it tends to work out easier in the long run to put that abstraction in place first, and then use mocks to unit test that code.
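
As a sketch - with invented names (ReportStore, ReportFormatter), and a hand-rolled stub standing in for a mock - abstraction-first looks like this:

```java
// Instead of mocking a static FileUtil-style method with metaprogramming
// tricks, we introduce the seam the design actually wants first...
interface ReportStore {
    String read(String name);
}

class ReportFormatter {
    private final ReportStore store;
    ReportFormatter(ReportStore store) { this.store = store; }

    String asUpperCase(String name) {
        return store.read(name).toUpperCase();
    }
}

// ...and then the unit test substitutes the abstraction - no file
// system, no jiggery-pokery, and no internal details baked in. The
// production implementation of ReportStore is the only code that
// touches real files.
class StubStore implements ReportStore {
    public String read(String name) { return "quarterly sales"; }
}
```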

Don't bake in a design that you'll later need to change

It's a little chicken-and-egg, I grant you. Ideally, we'd want unit tests around that code before we tried to introduce the abstraction, but how do we do that - without baking in the old design - until the abstraction's in place?

It's one of those situations where, I'm afraid, the answer is that you're going to have to be disciplined about it. There's usually no quick fix. You may have to rely on slow and cumbersome system tests for a while. Or even - gulp - manual testing.

But experience has taught me that, in the final reckoning, it can be well worth it to avoid pouring quick-drying cement on an already rigid and brittle design.

Ah, and I hear my next victim customer a-calling. I'm free!

January 12, 2013

The World-Famous Legacy Code Singleton Fudge

When refactoring legacy code that relies on static methods to access external systems - for example, for data access - our first goal is usually to make the code unit-testable. Therefore, we seek to invert that dependency on a static method to make it substitutable.

I demonstrated in previous blog posts how we do this in fairly simple situations. The dance is pretty straightforward: you turn the static method into an instance method, and then do a "Find and Replace" to swap references to the class the method is declared on with "new ClassName().", so the target of invocation is now an instance. We then give client code a way to do the old polymorphic switcheroo by injecting that instance into, say, the constructor of the class where it's being used.

As easy as cake.

What often comes up, though, is a more complex and less than ideal situation. What if our static method is being accessed by many, many classes? We'd have to introduce dependency injection into every class that uses it, and then - if those classes are some way down the call stack - into every link in the chain to pass references from the top down to where they're used. This could be a lot of work, and I've not found a quick automated way of doing it. And what if - horror of horrors - the methods that access our static data access method are also static?

Take this example. Here's a pretty nasty data access class that offers static methods for updating and retrieving object state in an external database.

Imagine that - for reasons best known to themselves - the developers use these methods inside every single business object in their middle tier. Like:
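
The original listings were screenshots; a hypothetical Java reconstruction might look like this (the names DataAccess and Movie, and the in-memory map standing in for the real database, are my own):

```java
import java.util.HashMap;
import java.util.Map;

class DataAccess {
    private static final Map<String, Object> database = new HashMap<>();

    // In the real thing, these static methods would hit an external database.
    static void update(String id, Object state) { database.put(id, state); }
    static Object fetch(String id) { return database.get(id); }
}

// ...and every business object in the middle tier calls them directly:
class Movie {
    private final String id;
    private String title;

    Movie(String id, String title) { this.id = id; this.title = title; }

    void rename(String newTitle) {
        title = newTitle;
        DataAccess.update(id, title);  // hard-wired static call
    }

    static String titleOf(String id) {
        return (String) DataAccess.fetch(id);  // a static calling a static
    }
}
```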

Our goal is to get decent automated unit test assurance in place quickly, so we can then safely set about refactoring this whole can of worms properly.

In this situation, I've often used a fudge to get my tests in place. The fudge being to deliberately introduce a Singleton. (Gasp!)

I first extract a new class that contains the implementations of the Update and Fetch methods, and leave delegate methods in our original DataAccess class. Then I extract an interface from this new DataAccessImpl class (IDataAccess) so we can make those methods polymorphic.

Here comes the fudge - I then introduce a static method for setting the data access implementation at runtime, so our unit tests can set it as a mock or a stub, and our production code can set it once as the real McCoy.
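
The shape of the whole fudge, sketched in Java (the method bodies and the in-memory map are invented stand-ins for the real database code):

```java
import java.util.HashMap;
import java.util.Map;

// The interface extracted from the new implementation class, making the
// data access methods polymorphic.
interface IDataAccess {
    void update(String id, Object state);
    Object fetch(String id);
}

// The extracted class holding the original method bodies (an in-memory
// map stands in for the real database here).
class DataAccessImpl implements IDataAccess {
    private final Map<String, Object> database = new HashMap<>();
    public void update(String id, Object state) { database.put(id, state); }
    public Object fetch(String id) { return database.get(id); }
}

class DataAccess {
    // The deliberately-introduced Singleton: production code sets the
    // real McCoy once at start-up; unit tests set a stub or a mock.
    private static IDataAccess implementation = new DataAccessImpl();

    static void setImplementation(IDataAccess impl) { implementation = impl; }

    // The original static methods become delegates, so none of the many
    // calling business objects need to change - yet.
    static void update(String id, Object state) { implementation.update(id, state); }
    static Object fetch(String id) { return implementation.fetch(id); }
}
```

A unit test can now call DataAccess.setImplementation() with a stub before exercising any business object, and no business logic touches the real database.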

Yes, pretty rank. But we've made our application logic unit-testable at a relatively low expense, and can now start writing those tests that will be the safety net for unpicking this whole mess once and for all. To unpick it first and then write the tests would be too risky, in my experience.

November 21, 2012

Who's Afraid Of Legacy Code?

British society has a bit of a problem with age. We're culturally obsessed with youth, and prefer to hide away our ageing population - presumably so they don't keep reminding us that we, too, will one day get old.

This paradox leads to severe consequences for society, as we choose to ignore the facts of life and live as if old age will never come. We don't look after ourselves as well as we should, we don't plan for our futures as well as we could, and we structure our society to favour youth in many respects. That our society is gradually getting older on average, with over 65's now making up a big chunk of the population, confounds our desire to live in a youthful world. Ironically, this growing generation of senior citizens has led to successive governments pandering to the older vote at the expense of the young. Young people are now paying net for an older generation who have not adequately provided for themselves (chiefly because nobody thought people would be living as long as they are), and the "grey vote" has become such a powerful block that young adults of tomorrow can expect to be shouldering even more of that burden.

Software development has grown a similar paradox. We're obsessed with the shiny and the new, despite the fact that we're surrounded by a growing legacy of old code.

Nobody thought that the software they were writing back in the 90's, or the 80's, or 70's or 60's, or even the 50's, would still be in use today. And so, they didn't plan for the future we now find ourselves in.

When you open up a book or a magazine or read a blog post about software development, chances are it will be about writing new code. Aside from some noteworthy exceptions like Michael Feathers' Working Effectively With Legacy Code, most coverage of software development is about programming on a blank sheet.

This has a two-fold effect: firstly, most developers lack the skills and the disciplines needed to maintain or add value to existing software. And secondly, most software is not written with a potentially long life in mind.

Not only do developers lack the skills for legacy code, they have a marked tendency to run a mile in the opposite direction from acquiring those skills. I run a training company, so I know how low the demand is for learning them.

Employers, too, fail to recognise the need for and the value of legacy code skills. They rarely ask for them when hiring developers, and tend not to support developers seeking to improve things in those areas. When did you last see a job advertisement asking for experience of restoring or rehabilitating old, knackered code?

This is despite the fact that most developers are working on legacy code, and that their inability to add value to it and respond to the changing needs of the business and the end users is often cited as a major barrier to business competitiveness by managers.

As Ivan Moore recently put it, legacy code is "the elephant in the room". It comprises the bulk of the work and the overall cost of software development, but occupies a minimal slice of our thoughts and our care.

In the last decade, Test-driven Development has become de rigueur. And a jolly good thing, too.

But, as I see time after time, it's entirely possible to produce legacy code doing TDD. And even if you master writing clean, maintainable code, what about all the code that's already out there serving users right now? Do we just write those huge investments off to experience?

No. That would be silly, and immensely wasteful.

If only for the learning experience of dealing with the consequences of design decisions - I've yet to meet a genuinely great developer who hasn't devoted significant time to cleaning up old code - it's high time for us to really get to grips with legacy code.