February 4, 2018

Learn TDD with Codemanship

Don't Bake In Yesterday's Business Model With Unmaintainable Code

I'm running a little poll on the Codemanship Twitter account asking whether code craft skills are something every professional developer should have.




I've always seen these skills as foundational for a career as a developer. Once we've learned to write code that kind of works, the next step in our learning should be to develop the skills needed to write reliable and maintainable code. The responses so far suggest that about 95% of us agree (more than 70% of us strongly).

Some enlightened employers recognise the need for these skills, and address the lack of them when taking on new graduates. Those new hires are the lucky ones, though. Most employers offer no training in unit testing, TDD, refactoring, Continuous Integration or design principles at all. They also often have nobody more experienced who could mentor developers in those things. It's still sadly very much the case that many software developers go through their careers without ever being exposed to code craft.

This translates into a majority of code being less reliable and less maintainable, which has a knock-on effect in the wider economy caused by the dramatically higher cost of changing that code. It's not the actual £ cost that has the impact, of course. It's the "drag factor" that hard-to-change code has on the pace of innovation. Bosses routinely cite IT as a major factor impeding progress. I'm sure we can all think of businesses that were held back by their inability to change their software and their systems.

For all our talk of "business agility", only a small percentage of organisations come anywhere close. It's not because they haven't bought into the idea of being agile. The management magazines are now full of chatter about agility. No shortage of companies that aspire to be more responsive to change. They just can't respond fast enough when things change. The code that helped them scale up their operations simultaneously bakes in a status quo, making it much harder to evolve the way they do business. Software giveth, and software taketh away. I see many businesses now achieving ever greater efficiencies at doing things the way they needed to be done 5, 10 or 20 years ago, but unable to adapt to the way things are today and might be tomorrow.

I see this in finance, in retail, in media, in telecoms, in law, in all manner of private sector organisations. And I see it in the public sector, too. "IT delays" is increasingly the reason why government policies are massively delayed or fail to be rolled out altogether. It's a pincer movement: we can't do X at the scale we need to without code, and we can't change the code to do X+1 for a rapidly changing business landscape.

I've always maintained that code craft is a business imperative. I might even go as far as to say a societal imperative, as software seeps into every nook and cranny of our lives. If we don't address issues like how easy our code is to change, we risk baking in the past, relying on inflexible and unreliable systems that are as anachronistic to the way things need to be in the future as our tired old and no-longer-fit-for-purpose systems of governance. An even bigger risk is that other countries will steal a march on us, in much the same way that more agile tech start-ups can steam ahead of established market players simply because they're not dragging millions of lines of legacy code behind them.

While the fashion today is for "digital transformations", encoding all our core operations in software, we must be mindful that legacy code = legacy business model.

So what is your company doing to improve its code craft?






January 26, 2018

Learn TDD with Codemanship

Good Code Speaks The Customer's Language

Something we devote time to on the Codemanship TDD training course is the importance of choosing good names for the stuff in our code.

Names are the best way to convey the meaning - the intent - of our code. A good method name clearly and concisely describes what that method does. A good class name clearly describes what that class represents. A good interface name clearly describes what role an object's playing when it implements that interface. And so on.

I strongly encourage developers to write code using the language of the customer. Not only should other developers be able to understand your code, your customers should be able to follow the gist of it, too.

Take this piece of mystery code:
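
Something along these lines (a C# sketch - the names are illustrative, but they capture the problem):

public class PlaceAllocation
{
    private readonly PlaceRepository placeRepository = new PlaceRepository();

    public void AllocatePlace(string row, int number, Item item, Party party)
    {
        Place place = placeRepository.Fetch(row, number, item);
        place.AllocateTo(party);
    }
}

public class PlaceRepository
{
    public Place Fetch(string row, int number, Item item)
    {
        // ...look the place up in some kind of storage...
        return new Place();
    }
}

public class Place
{
    public void AllocateTo(Party party)
    {
        // ...mark this place as allocated to the party...
    }
}

public class Item { }
public class Party { }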



What is this for? What the heck is a "Place Repository" when it's at home? For whom or for what are we "allocating" places?

Perhaps a look at the original user story will shed some light.

The passenger selects the flight they want to reserve a seat on.
They choose the seat by row and seat number (e.g., row A, seat 1) and reserve it.
We create a reservation for that passenger in that seat.


Now the mist clears. Let's refactor the code so that it speaks the customer's language.
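
Something like this, with the same structure but names drawn straight from the user story (again, a sketch):

public class SeatReservation
{
    private readonly SeatingPlan seatingPlan = new SeatingPlan();

    public Reservation ReserveSeat(string row, int number, Flight flight, Passenger passenger)
    {
        Seat seat = seatingPlan.SeatAt(row, number, flight);
        return seat.ReserveFor(passenger);
    }
}

public class SeatingPlan
{
    public Seat SeatAt(string row, int number, Flight flight)
    {
        // ...find the seat in that row, with that number, on that flight...
        return new Seat();
    }
}

public class Seat
{
    public Reservation ReserveFor(Passenger passenger)
    {
        // ...create a reservation for this passenger in this seat...
        return new Reservation();
    }
}

public class Flight { }
public class Passenger { }
public class Reservation { }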



This code does exactly what it did before, but makes a lot more sense now. The impact of choosing better names can be profound, in terms of making the code easier to understand and therefore easier to change. And it's something we all need to work much harder at.


January 20, 2018

Learn TDD with Codemanship

10 Classic TDD Mistakes

20 years of practicing Test-Driven Development, and training and coaching a few thousand developers in it, has taught me this is not a trivial skillset to learn. There are many potential pitfalls, and I've seen many teams dashed on the rocks by some classic mistakes.

You can learn from their misfortunes, and hopefully steer a path through these treacherous waters. Here are ten classic mistakes I've seen folk make with TDD.

1. Underestimating The Learning Curve

Often, when developers try to adopt TDD, they have unrealistic expectations about the results they'll be getting in the short term. "Red-Green-Refactor" sounds simple enough, but it hides a whole world of ideas, skills and habits that need to be built to be effective at it. If I had a pound for every team that said "we tried TDD, and it didn't work"... Plan for a journey that will take months and years, not days and weeks.

2. Confusing TDD with Testing

The primary aim of TDD is to come up with a good design that will satisfy our customer's needs. It's a design discipline that just happens to use tests as specifications. A lot of people still approach TDD as a testing discipline, and focus too much on making sure everything is tested when they should be thinking about the design. If you're rigorous about applying the Golden Rule (only write solution code when a failing test requires it), your coverage will be high. But that isn't the goal. It's a side benefit.

3. Thinking TDD Is All The Testing They'll Ever Need

If you practice TDD fairly rigorously, the resulting automated tests will probably be sufficient much of the time. But not all of the time. Too many teams pay no heed to whether high risk code needs more testing. (Indeed, too many teams pay no heed to high risk code at all. Do you know where your load-bearing code is?) And what about all those scenarios you didn't think of? It's rare to see a test suite that covers every possible combination of user inputs. More work has to be done to explore the edges of what was specified.

4. Not Starting With Failing Customer Tests

In all approaches to writing software, how we collaborate with our customers is critically important. Designs should be driven directly from testable specifications that we've explicitly agreed with them. In TDD, unsurprisingly, these testable specifications come in the form of... erm... tests. The design process starts by working closely with the customer to flesh out executable acceptance tests that fail. We do not start writing code until we have those failing customer tests. We do not stop writing code until those tests are passing. But a lot of teams still set out on their journey with only the vaguest sense of the destination. Write all the unit tests you want, but without failing executable customer tests, you're just being super-precise about your own assumptions of what the customer wants.

5. Confusing Tools With Practices

Just because tests are written using a customer test specification tool like Cucumber or FitNesse doesn't mean they're customer tests. They could be automated using JUnit, and be customer tests. What makes them customer tests is that you wrote them with the customer, codifying their examples of how the software will be used. Similarly, just because you used a mock objects framework, that doesn't mean you are mocking. Mocking is a technique for discovering the design of interfaces by writing failing interaction tests. Just because you're writing JUnit tests doesn't mean you're doing TDD. Just because you use ReSharper doesn't mean you're refactoring. Just because you're running Jenkins doesn't mean you're doing Continuous Integration. Kubernetes != Continuous Delivery. And the list goes on (and on and on). Far too many developers think that using certain tools will automatically produce certain results. The tools will not do your thinking for you. As far as I'm aware, RSpec doesn't discuss the requirements with the customer and write the tests itself. You have to talk to the customer.

6. Not Actually Doing TDD. At All.

When I run the Codemanship TDD training workshop, I often start the first day by asking for a show of hands from people who think they've done TDD. At the end of the first day I ask them to raise their hands if they still think they've done TDD. The number is always considerably lower. Here's the thing: I know from experience that 9 out of 10 developers who put "TDD" on their CV really mean "unit testing". Many don't even know what TDD is. I know this sounds basic, but if you're going to try doing TDD, try doing TDD. Google it. Read an introduction. Watch a tutorial or three. Buy a book. Come on a course.

7. Skimping On Refactoring

To produce code that's as clean as I feel it needs to be, I find I tend to spend about 50% of my time refactoring. Most dev teams do a lot less. Many do none at all. Now, I know many will say "enough refactoring" is subjective, and the debate rages on social media about whether anyone is doing too much refactoring, but let's be frank: the vast majority of us are simply not doing anywhere near enough. The effects of this are felt soon enough, as the going gets harder and harder. Refactoring's a very undervalued skill; I know from my training orders. For every ten TDD courses I run, I might be asked to run one refactoring course. Too little refactoring makes TDD unsustainable. Typical outcome: "We did TDD for 6 months, but our tests got so hard to change that we threw them away."

8. Making The Tests Too Big

The granularity of tests is key to making TDD work as a design discipline, as well as determining how effective your test suites will be at pinpointing broken code. When our tests ask too many questions (e.g., "What are the first 10 Fibonacci numbers?"), we find ourselves having to make a bunch of design decisions before we get feedback. When we work in bigger batches, we make more mistakes. I like to think of it like crossing a stream using stepping stones; if the stones are too far apart, we have to make big, risky leaps, increasing the risk of falling in. Start by asking "What's the first Fibonacci number?".
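
To illustrate with a sketch (NUnit-flavoured C#; the Fibonacci class and its design are hypothetical), the first stepping stone needs to be no bigger than this:

using NUnit.Framework;

[TestFixture]
public class FibonacciTests
{
    // One small question per test - a test like "what are the first 10 Fibonacci
    // numbers?" would force a whole sequence-generating design before any feedback
    [Test]
    public void FirstFibonacciNumberIsOne()
    {
        Assert.AreEqual(1, Fibonacci.Number(1));
    }
}

// Just enough implementation to pass that first test
public static class Fibonacci
{
    public static int Number(int index)
    {
        return 1;
    }
}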

9. Making The Tests Too Small

Conversely, I also often see people writing tests that focus on minute details that would naturally fall out of passing a more interesting higher-level test. For example, I see people writing tests for getters and setters that really only need to exist because they're used in some interesting behaviour that the customer wants. I've even seen tests that create an object and then assert that it isn't null. Those kinds of tests are redundant. I can kind of see where the thinking comes from, though. "I want to declare a BankAccount class, but the Golden Rule of TDD is I can't until I have a failing test that requires it. So I'll write one." But this is coming at it from the wrong direction. In TDD, we don't write tests to force the design we want. We write tests for behaviour that the customer wants, and discover the design by passing it (and by refactoring afterwards if necessary). We'll need a BankAccount class to test crediting an account, for example. We'll need a getter for the balance to check the result. Focus on behaviour and let the details follow. There's a balance to be struck on test granularity that comes with experience.

10. Going Into "Design Autopilot"

Despite what you may have heard, TDD doesn't take care of the design for you. You can follow the discipline to the letter, and end up with a crappy design.

TDD helps by providing frequent "beats" in development to remind us to think about the design. We're thinking about what the code should do when we write our failing test. We're thinking about how it should do it when we're passing the test. And we're thinking about how maintainable our solution is after we've passed the test as we refactor the code. It's all design, really. But it's not magic.

YOU STILL HAVE TO THINK ABOUT THE DESIGN. A LOT.


So, there you have it: 10 classic TDD mistakes. But all completely avoidable, with some thought, some practice, and maybe a bit of help from an old hand.


January 15, 2018

Learn TDD with Codemanship

Refactoring to the xUnit Pattern

16 days left to get my spiffy on-site Unit Testing training workshop at half-price. It's jam-packed with unit testy goodness. Here's a little taste of the kind of stuff we cover.

In the introductory part of the workshop, we look at the anatomy of unit test suites and see how - from the most basic designs - we eventually arrive by refactoring at the xUnit design pattern for unit testing frameworks.

If you've been programming for a while, there's a good chance you've written test code in a Main() method, like this:
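
Something along these lines (a C# sketch, using a hypothetical BankAccount class):

using System.Diagnostics;

public class Program
{
    public static void Main()
    {
        // Arrange - put the test object into the initial state this test needs
        var account = new BankAccount();

        // Act - invoke the method we want to test
        account.Credit(100);

        // Assert - check the action had the desired effect
        Debug.Assert(account.Balance == 100, "Crediting 100 should leave a balance of 100");
    }
}

public class BankAccount
{
    public decimal Balance { get; private set; }

    public void Credit(decimal amount)
    {
        Balance += amount;
    }
}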



This saves us the bother of having to run an entire application to get quick feedback while we're adding or changing code in, say, a library.

Notice that there are three components to this test:

Arrange - we set up the object(s) we're going to use to be in the initial state we need for this particular test

Act - we invoke the method we want to test

Assert - We ask questions about the final state of our test object(s) to see if the action has had the desired effect

Simples!

Of course, a real-world application might need hundreds or even thousands of such tests. Our Main() method is going to get pretty big and unwieldy if we keep adding more and more test cases.

So we can break it down into multiple test methods, one for each test case. The name of each test method can clearly describe what the test is.
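
Continuing the sketch (the BankAccount now has a Debit method, too):

using System.Diagnostics;

public class Program
{
    public static void Main()
    {
        CreditingAnAccountIncreasesTheBalance();
        DebitingAnAccountDecreasesTheBalance();
    }

    public static void CreditingAnAccountIncreasesTheBalance()
    {
        var account = new BankAccount();
        account.Credit(100);
        Debug.Assert(account.Balance == 100);
    }

    public static void DebitingAnAccountDecreasesTheBalance()
    {
        var account = new BankAccount();
        account.Credit(100);
        account.Debit(30);
        Debug.Assert(account.Balance == 70);
    }
}

public class BankAccount
{
    public decimal Balance { get; private set; }

    public void Credit(decimal amount) { Balance += amount; }

    public void Debit(decimal amount) { Balance -= amount; }
}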



Our original Main() method just calls all of our test methods.

But still, when there are hundreds or thousands of test methods, we can end up with one ginormous class. That too can be broken down, grouping related test methods (e.g., all the tests for a bank account) into smaller test fixtures.
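
For example (continuing the same sketch, with BankAccount as before):

using System.Diagnostics;

public class BankAccountTests
{
    // Each fixture has a method that runs all of its test methods
    public static void RunAll()
    {
        CreditingAnAccountIncreasesTheBalance();
        DebitingAnAccountDecreasesTheBalance();
    }

    public static void CreditingAnAccountIncreasesTheBalance()
    {
        var account = new BankAccount();
        account.Credit(100);
        Debug.Assert(account.Balance == 100);
    }

    public static void DebitingAnAccountDecreasesTheBalance()
    {
        var account = new BankAccount();
        account.Credit(100);
        account.Debit(30);
        Debug.Assert(account.Balance == 70);
    }
}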



Note that each test fixture has a method that invokes all of its test methods, so our original main method doesn't need to invoke them all itself.

The final piece of the unit testing jigsaw is the class that tells all of our test fixtures to run their tests. We call this a test suite.
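
For example (BankAccountTests as above; PaymentTests stands in for any other fixture we might have):

public class AllTests
{
    public static void Main()
    {
        // The suite tells every fixture to run its tests
        BankAccountTests.RunAll();
        PaymentTests.RunAll();
    }
}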



At the most basic level, this simple design gives us the ability to write, organise and run large numbers of tests quickly.

As time goes on, we may add a few bells and whistles to streamline the process and make it more useful and usable.

For example, in our current design, when an assertion fails (using .NET's built-in Debug.Assert() method), it will halt execution. If the first test fails in a suite of 1,000 tests, it won't run the other 999. So we might write our own assertion methods to check and report test failures without halting execution.
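
A home-grown assertion helper might look something like this C# sketch (not part of any framework):

using System;

public static class Check
{
    public static int Failures { get; private set; }

    // Unlike Debug.Assert, a failure is counted and reported,
    // but execution carries on to the next test
    public static void That(bool condition, string message)
    {
        if (!condition)
        {
            Failures++;
            Console.WriteLine("FAILED: " + message);
        }
    }
}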

And we might want to make the output more user friendly and display more helpful results, so we may add a custom formatter/reporter to write out test results.

And - I can attest from personal experience - it can be a real pain in the you-know-what to have to remember to write code to invoke every test method on every test fixture. So we might create a custom test runner - not just a Main() method - that automates the process of test discovery and execution.

We could, for example, invert the test suite's dependencies on individual test fixtures by extracting a common interface that all fixtures must implement for running their tests. Then we could use reflection, or search through the source code, to find all classes that implement that interface and build the suite automatically.

Likewise, we could specify that test methods must have a specific signature (e.g., start with "Test", have a void return type, and take no parameters) and search for all methods that match.
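
A sketch of that kind of discovery using reflection in C# (the "Test" prefix convention here is just for illustration):

using System;
using System.Reflection;

public static class DiscoveringRunner
{
    public static void Main()
    {
        // Find and run every public static, void, parameterless method
        // whose name starts with "Test" in this assembly
        foreach (var type in Assembly.GetExecutingAssembly().GetTypes())
        {
            foreach (var method in type.GetMethods(BindingFlags.Public | BindingFlags.Static))
            {
                if (method.Name.StartsWith("Test")
                    && method.ReturnType == typeof(void)
                    && method.GetParameters().Length == 0)
                {
                    Console.WriteLine("Running " + type.Name + "." + method.Name);
                    method.Invoke(null, null);
                }
            }
        }
    }
}

public class ExampleTests
{
    public static void TestNothingInParticular()
    {
        // ...assertions would go here...
    }
}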

In my early career, I wrote several unit testing frameworks, and they tended to end up with a similar design. Thousands more had the same experience, and that commonality of experience is captured in the xUnit design pattern for unit testing frameworks.



The original implementation of this pattern was done in Smalltalk ("SUnit") by Kent Beck, and many more have followed in pretty much every programming language you can think of.

In the years since, some useful advanced features have been added, which we'll explore later in the workshop. But, under the hood, they're all pretty much along these lines.







December 30, 2017

Learn TDD with Codemanship

TDD & "Professionalism"

Much talk (and gnashing of teeth) about the link between Test-Driven Development and "professionalism". It probably won't surprise you to learn that I've given this a bit of thought.

To be clear, I'm not in the business of selling TDD to developers and teams. If you don't want to do TDD, don't do it. (If you do want to do TDD, then maybe I can help.)

But let's talk about "professionalism"...

I believe it's "unprofessional" to ship untested code. Let me qualify that: it's not a good thing to ship code that has been added or changed that hasn't been tested since you added or changed it. At the very least, it's a courtesy to your customers. And, at times, their businesses or even their lives may depend on it.

So, maybe my definition of "professionalism" would include the need to test (and re-test) the software every time I want to ship it. That's a start.

Another courtesy we can do for our customers is to not make them wait a long time for important changes to the software. I've seen many, many businesses brought to their knees by long delivery cycle times caused by Big Bang release processes. So, perhaps it's "unprofessional" to have long release cycles.

When I draw my imaginary Venn diagram of "Doesn't ship untested code" and "Doesn't make the customer wait for changes", I see that the intersection of those two sets implies "Doesn't take long to test the software". If sufficiently good testing takes weeks, then we're going to have to make the customer wait. If we skimp on the testing, we're going to have to ship untrustworthy code.

There's no magic bullet for rapidly testing (and re-testing) code. The only technique we've found after 70-odd years of writing software is to write programs that automate test execution. And for those tests - of which there could be tens of thousands - to run genuinely fast enough to ensure customers aren't left waiting for too long, they need to be written to run fast. That typically means our tests should mostly have no external dependencies that would slow them down. Sometimes referred to as "unit tests".

So, to avoid shipping broken code, we test it every time. To avoid making the customer wait too long, we test it automatically. And to avoid our automated tests being slow, we write mostly "unit tests" (tests without external dependencies).

None of this mandates TDD. There are other ways. But my line in the sand is that these outcomes are mandated. I will not ship untested code. I will not make my customer wait too long. Therefore I will write many fast-running automated "unit tests".

And this is not a complete picture, of course. Time taken to test (and re-test) the code is one factor in how long my customer might have to wait. And it's a big factor. But there are other factors.

For example, how difficult it becomes to make the changes the customer wants. As the code grows, complexity and entropy can overwhelm us. It's basic physics. As it expands, code can become complicated, difficult to understand, highly interconnected and easy to break.

So I add a third set to my imaginary Venn diagram, "Minimises entropy in the code". In the intersection of all three sets, we have a sweet spot that I might still call "professionalism"; never shipping untested code, not making our customers wait too long, and sustaining that pace of delivery for as long as our customer needs changes by keeping the code "clean".

I achieve those goals by writing fast-running automated "unit tests", and continually refactoring my code to minimise entropy.

Lastly - but by no means leastly - I believe it's "unprofessional" to ship code the customer didn't ask for. Software is expensive to produce. Even very simple features can rack up a total cost of thousands of dollars to deliver in a working end product. I don't make my customers pay for stuff they didn't ask for.

So, a "professional" developer clearly, unambiguously establishes what the customer requires from the code before they write it.

Now my Venn diagram is complete.



ASIDE: In reality, these are fuzzy sets. Some teams ship better-tested code than others. Some teams release more frequently than others, and so have shorter lead times. Some teams write cleaner code than others. Some teams waste less time on unwanted features than others.

So there are degrees of "professionalism" in these respects. And this is before I add the other sets relating to things like ethics and environmental responsibility. It's not a simple binary choice of "professional" or "unprofessional". It's complicated. Personally, I don't think discussions about "professionalism" are very helpful.


Like I said at the start, TDD isn't mandatory. But I do have to wonder, when teams aren't doing TDD, what are they doing to keep themselves in that sweet spot?



November 20, 2017

Learn TDD with Codemanship

10 Days Left to Book Half-Price TDD Training

A quick reminder about the special offer I'm running this month to help teams whose training budgets have been squeezed by Brexit uncertainty.



If you confirm your booking for a 1, 2 or 3-day TDD training workshop this month (for delivery before end of Feb 2018), you'll get a whopping 50% off.

This is our flagship course - refined through years delivering TDD training to thousands of developers - and is probably the most hands-on and comprehensive TDD and code craft training workshop you can get... well, pretty much anywhere. There are no PowerPoint presentations, just live demonstrations and practical exercises to get your teeth into.

As well as the basics, we cover BDD and Specification by Example, refactoring, software design principles, Continuous Integration and Continuous Delivery, end-to-end test-driven design, mocking, stubbing, data-driven and property-based unit testing, mutation testing and a heap more besides. It's so much more than a TDD course!

And every attendee gets a copy of our exclusive 200-page TDD course book, rated 5 stars on goodreads.com, which goes into even more detail, with oodles of extra practical exercises to continue your journey with.

If you want to know more about the course, visit http://www.codemanship.com/tdd.html, or drop me a line.


October 17, 2017

Learn TDD with Codemanship

Manual Refactoring : Convert Static Method To Instance Method

In the previous post, I demonstrated how to introduce dependency injection to make a hard-coded dependency swappable.

This relies on the method(s) our client wants to invoke being instance methods. But what if they're not? Before we can introduce dependency injection, we may need to convert a static method (or a function) to an instance method.

Consider this Ruby example. What's stopping us from stubbing video ratings is that we're getting them via a static fetchRating() method.
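
Something along these lines (a sketch - the pricing rule is illustrative):

class ImdbRatings
  # A static (class) method, declared with the self. prefix
  def self.fetchRating(title)
    # ...would call the IMDB API; a canned value here so the sketch stands alone
    8.5
  end
end

class VideoPricer
  def price(title)
    # The static call is what stops us swapping in a stub
    rating = ImdbRatings.fetchRating(title)
    rating >= 8.0 ? 4.99 : 2.99
  end
end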



Converting it to an instance method - from where we can refactor to dependency inject - is straightforward, and requires two steps.

1. Find and replace ImdbRatings.fetchRating( with ImdbRatings.new().fetchRating( wherever the static method is called.

2. Change the declaration of fetchRating() to make it an instance method. (In Ruby, a static method's declaration is prefixed with self. - which strikes me as rather counterintuitive, but there you go.)
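
After both steps, the sketch looks like this:

class ImdbRatings
  # Step 2: the self. prefix has gone, so fetchRating is now an instance method
  def fetchRating(title)
    8.5
  end
end

class VideoPricer
  def price(title)
    # Step 1: every call site now reads ImdbRatings.new().fetchRating(...)
    rating = ImdbRatings.new().fetchRating(title)
    rating >= 8.0 ? 4.99 : 2.99
  end
end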



NOW RUN THE TESTS!

If fetchRating() was just a function (for those of us working in languages that support them), we'd have to do a little more.

1. Find and replace fetchRating( with ImdbRatings.new().fetchRating( wherever that function is called.

2. Surround the declaration of fetchRating() with a declaring class ImdbRatings, making it an instance method.

(AND RUN THE TESTS!)

Now, for completeness, it would make sense to demonstrate how to convert an instance method back into a static method or function. But, you know what? I'm not going to.

When I think about refactoring, I'm thinking about solving code maintainability issues, and I can't think of a single maintainability issue that's solved by introducing non-swappable dependencies.

When folk protest "Oh, but if the method's stateless, shouldn't we make it static by default?" I'm afraid I disagree. That's kind of missing the whole point. Swappability is the key to managing dependencies, so I preserve that by default.

And anyway, I'm sure you can figure out how to do it, if you absolutely insist ;)



October 16, 2017

Learn TDD with Codemanship

Manual Refactoring : Dependency Injection



One of the most foundational object oriented design patterns is dependency injection. Yes, dependency injection is a design pattern. (Not a framework or an architectural philosophy.)

DI is how we can make dependencies easily swappable, so that a client doesn't know what specific type of object it's collaborating with.

When a dependency isn't swappable, we lose flexibility. Consider this Ruby example where we have some code that prices video rentals based on their IMDB rating, charging a premium for highly-rated titles and knocking a quid off for poorly-rated ones.
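
Something along these lines (a sketch - the base price and rating thresholds are illustrative):

class ImdbRatings
  def fetchRating(title)
    # ...would connect to the IMDB API; a canned value so the sketch stands alone
    8.5
  end
end

class VideoPricer
  BASE_PRICE = 3.99

  def price(title)
    # Hard-coded dependency: we always go to IMDB for the rating
    rating = ImdbRatings.new().fetchRating(title)
    return BASE_PRICE + 1.00 if rating >= 8.0  # premium for highly-rated titles
    return BASE_PRICE - 1.00 if rating <= 4.0  # knock a quid off poorly-rated ones
    BASE_PRICE
  end
end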



What if we wanted to write a fast-running unit test for VideoPricer? The code as it is doesn't enable this, because we can't swap the imdbRatings dependency - which always connects to the IMDB API - with a stub that pretends to.

What if we wanted to get video ratings from another source, like Rotten Tomatoes? Again, we'd have to rewrite VideoPricer every time we wanted to change the source. Allowing a choice of ratings source at runtime would be impossible.

This dependency needs to be injected so the calling code can decide what kind of ratings source to use.

This refactoring's pretty straightforward. First of all, let's introduce a field for imdbRatings and initialise it in a constructor.
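
Continuing the sketch:

class VideoPricer
  BASE_PRICE = 3.99

  def initialize()
    @imdbRatings = ImdbRatings.new()
  end

  def price(title)
    rating = @imdbRatings.fetchRating(title)
    return BASE_PRICE + 1.00 if rating >= 8.0
    return BASE_PRICE - 1.00 if rating <= 4.0
    BASE_PRICE
  end
end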



NOW RUN THE TESTS!

Next, introduce a parameter for the expression ImdbRatings.new().
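
In the sketch, the constructor now looks like this:

class VideoPricer
  BASE_PRICE = 3.99

  # The expression ImdbRatings.new() has moved out to become a constructor parameter
  def initialize(imdbRatings)
    @imdbRatings = imdbRatings
  end

  def price(title)
    rating = @imdbRatings.fetchRating(title)
    return BASE_PRICE + 1.00 if rating >= 8.0
    return BASE_PRICE - 1.00 if rating <= 4.0
    BASE_PRICE
  end
end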



So the calling code decides which kind of ratings source to instantiate.
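
For example (the stub here is just for illustration):

pricer = VideoPricer.new(ImdbRatings.new())
puts pricer.price("Jaws")

# A fast-running unit test could now swap in a stub that never touches the IMDB API
class StubRatings
  def fetchRating(title)
    9.0
  end
end

stubbed_pricer = VideoPricer.new(StubRatings.new())
puts stubbed_pricer.price("Jaws")  # => 4.99, without going anywhere near IMDB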



AND RUN THE TESTS!

Now, technically, this is all we need to do in a language with duck typing like Ruby to make it swappable. In a language like, say, C# or C++ we'd have to go a bit further and introduce an abstraction for the ratings source that VideoPricer would bind to.

Some, myself included, favour introducing such abstractions even in duck-typed languages to make it absolutely clear what methods a ratings source requires, and help the readability of the code.

Let's extract a superclass from ImdbRatings and make the superclass and the fetchRating() method abstract. (Okay, so in C# or Java, this would be an interface. Same thing; an abstract class with only abstract methods.)
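
Something like this (the superclass name VideoRatings is an assumption):

# An "abstract" superclass - Ruby has no abstract keyword, so the base
# method raises if a subclass forgets to override it
class VideoRatings
  def fetchRating(title)
    raise NotImplementedError
  end
end

class ImdbRatings < VideoRatings
  def fetchRating(title)
    # ...fetch the real rating from the IMDB API...
    8.5
  end
end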



DON'T FORGET TO RUN THE TESTS!


One variation on this is when the dependency is on a method that isn't an instance method (e.g., a static method). In the next post, we'll talk about converting between instance and static methods (and functions).



October 12, 2017

Learn TDD with Codemanship

Manual Refactoring : Extract Class



A core principle of software design is that modules (I'll leave you to interpret that word for your own tech stack) should have a single distinct responsibility.

There are two good reasons for this: firstly, we need to separate code that's likely to change at different times for different reasons, so we can make one change without touching the other code. Secondly, it gives us much greater flexibility in how we compose systems to do new things by reusing existing modules.

Consider this simple example.
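
Something along these lines (a sketch - the details are illustrative):

class Movie:
    def __init__(self, title, director):
        self.title = title
        self.director = director
        self.__ratings = []

    # Summary responsibility
    def summary(self):
        return f"{self.title}, directed by {self.director}"

    # Ratings responsibility
    def rate(self, rating):
        self.__ratings.append(rating)

    def average_rating(self):
        return sum(self.__ratings) / len(self.__ratings)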



Arguably, this Python class is doing two jobs. I can easily imagine needing to change how movie ratings work independently of how movie summaries work.

To refactor this code into classes that each have a distinct single responsibility, I can apply the Extract Class refactoring.

First, we need a new class Ratings to move the ratings fields and methods to.
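
It starts out empty:

class Ratings:
    pass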



NOW RUN THE TESTS!

Next, paste in the features of Movie we want to move to Ratings.
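
Continuing the sketch:

class Ratings:
    def __init__(self):
        self.__ratings = []

    def rate(self, rating):
        self.__ratings.append(rating)

    def average_rating(self):
        return sum(self.__ratings) / len(self.__ratings)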



AND RUN THE TESTS!

Now we substitute inside Movie, turning the ratings methods into delegates that forward to a field instance of the new class Ratings.
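
In the sketch, Movie now looks like this:

class Movie:
    def __init__(self, title, director):
        self.title = title
        self.director = director
        self.__ratings = Ratings()

    def summary(self):
        return f"{self.title}, directed by {self.director}"

    # Delegate methods - Movie's interface is unchanged, so its clients still work
    def rate(self, rating):
        self.__ratings.rate(rating)

    def average_rating(self):
        return self.__ratings.average_rating()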



RUN THE TESTS!

Okay, so - technically - that's Extract Class completed. We now have two classes, each with a distinct responsibility. But I think we can clean this up a bit more.

First of all, another core principle of software design is that dependencies should be swappable. Let's introduce a parameter for ratings in Movie's constructor.
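
Continuing the sketch:

class Movie:
    def __init__(self, title, director, ratings):
        self.title = title
        self.director = director
        self.__ratings = ratings

    def summary(self):
        return f"{self.title}, directed by {self.director}"

    def rate(self, rating):
        self.__ratings.rate(rating)

    def average_rating(self):
        return self.__ratings.average_rating()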



We can now vary the implementation of ratings - e.g., to mock or stub it for testing - without changing any code in Movie.

If Movie was part of a public API, we'd leave those delegate methods rate() and average_rating() on its interface. But let's imagine that it's not. Could we cut out this middle man and have clients interact directly with Ratings?

Let's refactor the test code to speak to Ratings directly.
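
Something like this (the tests themselves are illustrative, written with unittest):

import unittest

class MovieTests(unittest.TestCase):
    # The first two tests now talk to Ratings directly
    def test_average_of_one_rating_is_that_rating(self):
        ratings = Ratings()
        ratings.rate(4)
        self.assertEqual(4, ratings.average_rating())

    def test_average_of_two_ratings_is_their_mean(self):
        ratings = Ratings()
        ratings.rate(2)
        ratings.rate(4)
        self.assertEqual(3, ratings.average_rating())

    def test_summary_includes_title_and_director(self):
        movie = Movie("Jaws", "Steven Spielberg", Ratings())
        self.assertEqual("Jaws, directed by Steven Spielberg", movie.summary())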



AND RUN THE TESTS!

Now, arguably, the first two tests belong in their own test fixture. Let's extract a new test class.
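
Continuing the sketch:

class RatingsTests(unittest.TestCase):
    def test_average_of_one_rating_is_that_rating(self):
        ratings = Ratings()
        ratings.rate(4)
        self.assertEqual(4, ratings.average_rating())

    def test_average_of_two_ratings_is_their_mean(self):
        ratings = Ratings()
        ratings.rate(2)
        ratings.rate(4)
        self.assertEqual(3, ratings.average_rating())


class MovieTests(unittest.TestCase):
    def test_summary_includes_title_and_director(self):
        movie = Movie("Jaws", "Steven Spielberg", Ratings())
        self.assertEqual("Jaws, directed by Steven Spielberg", movie.summary())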



RUN THE TESTS!

Then we can remove the now unused delegate methods from Movie.
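
In the sketch, Movie is left with just its summary responsibility:

class Movie:
    def __init__(self, title, director, ratings):
        self.title = title
        self.director = director
        self.__ratings = ratings  # still composed here, but no longer exposed via delegates

    def summary(self):
        return f"{self.title}, directed by {self.director}"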



DON'T FORGET TO RUN THE TESTS!

And, to finish off, put each class (and test fixture) in its own .py file.

AND RUN THE TESTS!

This was a fairly straightforward refactoring, because the methods we wanted to move to the new class accessed a different field to the remaining methods. Sometimes, though, we need to split methods that access the same fields.
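
For example (another illustrative sketch):

class Movie:
    def __init__(self, title, director):
        self.title = title
        self.director = director

    def summary(self):
        return f"{self.title}, directed by {self.director}"

    # Rendering reads exactly the same fields as summary()
    def to_html(self):
        return f"<h1>{self.title}</h1><p>Directed by {self.director}</p>"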



If I extract a new class for generating HTML, it will need to access the data of the Movie object it's rendering. One choice is to pass the Movie in as a parameter of to_html().
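
Like this:

class HtmlFormatter:
    # The movie to render is passed in as a parameter
    def to_html(self, movie):
        return f"<h1>{movie.title}</h1><p>Directed by {movie.director}</p>"


class Movie:
    def __init__(self, title, director):
        self.title = title
        self.director = director

    def summary(self):
        return f"{self.title}, directed by {self.director}"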



This has necessarily introduced Feature Envy in HtmlFormatter for Movie, but this may be a justifiable trade-off so that we can render movies in other kinds of formats (e.g., JSON, XML) without changing Movie. Here we trade higher coupling for greater flexibility.

In this refactored design, Movie doesn't need to know anything about HtmlFormatter.

Whether or not that's the right solution will depend on the specific context of your code, of course.



October 10, 2017

Learn TDD with Codemanship

Manual Refactoring - Summary

Due to the increasing popularity of dynamically-typed languages like Python and Ruby, as well as a growing trend for programming in stripped-down editors like Atom, Vim and VS Code that lack support for automated refactoring, I'm putting together a series of How-To blog posts for script kiddies that need to refactor their code the old-fashioned way - i.e., by hand.

The most important message is that manual refactoring requires extra discipline. (I often find when I'm refactoring by hand that things can get a bit sloppy, and I'm sure if I watched it back, the code would be broken for much longer periods of time.)

So far, I've done 12 posts covering some key refactoring basics:

Rename

Introduce Local Variable

Introduce Field

Inline Variable & Simple Method

Inline Complex Method

Introduce Parameter

Extract Method

Move Instance Method

Extract Class

Extract Superclass

Dependency Injection

Convert Static Method to Instance Method

In coming days, I'll be adding to this list, as well as putting together my definitive guide for manual refactoring, which may become an e-book, or a commemorative plate, or something like that.