August 3, 2018

Learn TDD with Codemanship

Keyhole APIs - Good for Microservices, But Not for Unit Testing

I've been thinking a lot lately about what I call keyhole APIs.

A keyhole API is the simplest API possible - one that presents the smallest "surface area" to clients for its complete use. This means exposing a single function, which takes the smallest number of primitive input parameters - ideally one - and returns a single, simple output.

To illustrate, I had a crack at TDD-ing a solution to the Mars Rover kata, writing tests that only called a single method on a single public class to manipulate the rover and query the results.

You can read the code on my Github account.

This produces test code that's very loosely coupled to the rover implementation. I could have written test code that invokes multiple methods on multiple implementation classes. This would have made it easier to debug, for sure, because tests would pinpoint the source of errors more closely.
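To give a flavour of the shape - this is a hypothetical sketch, not the actual code on GitHub - a keyhole API for the rover might boil down to a single public class with a single public method, taking one primitive input and returning one simple output:

// Hypothetical sketch of a keyhole API (not the actual GitHub code):
// one public class, one public method, one primitive input, one simple output.
public class MarsRover
{
    private static readonly char[] Headings = { 'N', 'E', 'S', 'W' };

    // Commands: 'L' = turn left, 'R' = turn right, 'M' = move one square.
    // The rover starts at 0,0 facing North; the result is "x y heading".
    public string Go(string commands)
    {
        int x = 0, y = 0, heading = 0;
        foreach (var command in commands)
        {
            switch (command)
            {
                case 'L': heading = (heading + 3) % 4; break;
                case 'R': heading = (heading + 1) % 4; break;
                case 'M':
                    if (heading == 0) y++;
                    else if (heading == 1) x++;
                    else if (heading == 2) y--;
                    else x--;
                    break;
            }
        }
        return x + " " + y + " " + Headings[heading];
    }
}

Every test then looks the same: pass a string of commands to Go() and assert on the string that comes back - e.g., new MarsRover().Go("RMM") should return "2 0 E" - with no reference to how the rover is implemented internally.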

If we're writing microservices, keyhole APIs are - I believe - essential. We have to hide as much of the implementation as possible. Clients need to be as loosely coupled to the microservices they use as possible, including microservices that use other microservices.

I encourage developers to create these keyhole APIs for their components and services more and more these days. Even if they're not going to go down the microservice route, it's helpful to partition our code into components that could be turned into microservices easily, should the need arise.

Having said all that, I don't recommend unit testing entirely through such an API. I draw a distinction there: unit tests are an internal thing, a sort of grey-box testing. Especially important is the ability to isolate units under test from their external dependencies - e.g., by using mocks or stubs - and this requires the test code to know a little about those dependencies. I deliberately avoided that in my Mars Rover tests, and so ended up with a design where dependencies weren't easily swappable in this way.

So, in summary: keyhole APIs can be a good thing for our architectures, but keyhole developer tests... not so much.


June 29, 2018

Learn TDD with Codemanship

.NET Code Analysis using NDepend

It's been a while since I used it in anger, but I've been having fun this week reacquainting myself with NDepend, the .NET code analysis tool.

Those of us who are interested in automating code reviews for Continuous Inspection have a range of options for .NET, from tools built on the .NET Cecil decompiler - e.g., FxCop - to compiler-integrated tools on the Roslyn platform.

Out of all of them, I find NDepend to be by far the most mature. Its code model is much more expressive and intuitive (oh, the hours I've spent trying to map IL op codes on to source code!), and it integrates out of the box with a range of popular build and reporting tools like VSTS, TeamCity, Excel and SonarQube. And in general, I find I'm up and running with a suite of usable quality gates much, much faster.



Under the covers, I believe we're still in Cecil/IL territory, but all the hard work's been done for us.

Creating analysis projects in NDepend is pretty straightforward. You can either select a set of .NET assemblies to be analysed, or a Visual Studio project or solution. It's very backwards-compatible, working with solutions as far back as VS 2005 (which, for training purposes, I still use occasionally).



I let it have a crack at the files for the Codemanship refactoring workshop, which are deliberately riddled with tasty code smells. My goal was to see how easy it would be to use NDepend to automatically detect the smells.



It found all the solution's assemblies, and crunched through them - building a code model and generating a report - in about a minute. When it's done, it opens a dashboard view which summarises the results of the analysis.



There's a lot going on in NDepend's UI, and this would be a very long blog post if I explored it all. But my goal was to use NDepend to detect the code smells in these projects, so I've focused on the features I'd use to do that.

First of all, out of the box with the code rules that come with NDepend, it didn't detect any of the smells I'm interested in. This is typical of any code analysis tool: the rules are not your rules. They're someone else's interpretation of code quality. FxCop's developers, for example, evidently have a way higher tolerance for complexity than I do.

The value in these tools is not in what they do out of the box, but in what you can make them do with a bit of thought. And for .NET, NDepend excels at this.

In the dialog at the bottom of the NDepend window, we can explore the code rules that it comes with and see how they've been implemented using NDepend's code model and some LINQ.



I'm interested in methods with too many parameters, so I clicked on that rule to bring up its implementation.



I happen to think that 5 parameters is too many, so I could easily change the threshold at which this rule is triggered in the LINQ. When I did, the results list immediately updated, showing the methods in my solution that have too many parameters.
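From memory, the edited rule looked something like this - treat it as a sketch rather than a copy of NDepend's built-in rule, and check the property names against the rule as it ships:

// Sketch of a CQLinq rule for long parameter lists, with the threshold
// lowered so that methods with 5 or more parameters are flagged.
warnif count > 0
from m in JustMyCode.Methods
where m.NbParameters > 4
orderby m.NbParameters descending
select new { m, m.NbParameters }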



This matches my expectation, and the instant feedback is very useful when creating custom quality gates - really speeds up the learning process.

To view the offending code, I just had to double click on that method in the results list, and NDepend opened it in Visual Studio. (You can use NDepend from within Visual Studio, too, if you want a more seamless experience.)



The interactive and integrated nature of NDepend makes it a useful tool to have in code reviews. I've always found going through source files by eye looking for issues to be hard work, and really rather time-consuming. Being able to search for them interactively like this can help a lot.

Of course, we don't just want to look for code smells in code reviews - that's closing the stable door after the horse has bolted a lot of the time. It's quite fashionable now for dev teams to include code reviews as part of their check-in process - the dreaded Pull Request. It makes sense, as a last line of defence, to try to prevent issues being checked into the code repository. What I'm seeing more and more, though, is that pull requests can become a bottleneck for the team. Like any manual testing, it slows us down and hampers Continuous Delivery.

The command-line version of NDepend can easily be integrated into your build pipeline, allowing for some pretty comprehensive code reviews that can be performed automatically (and therefore quickly, alleviating the bottleneck).

I decided to turn this code rule into a quality gate that could be used in a build, and set a policy that it should fail the build if more than 5 examples of long parameter lists are found.
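The gate itself ended up as a small variation on the same query - roughly this shape (the exact quality gate syntax is easiest to crib from one of NDepend's built-in gates):

// <QualityGate Name="Long parameter lists" Unit="methods" />
// Fail the build if more than 5 methods have 5 or more parameters.
failif count > 5 methods
from m in JustMyCode.Methods
where m.NbParameters > 4
select new { m, m.NbParameters }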



So, up and running with a simple quality gate in no time. But what about more complex code smells, like message chains and feature envy? In the next blog post I'll go deeper into NDepend's Code Query Language and explore the kinds of queries we can create with more thought.






June 20, 2018

Learn TDD with Codemanship

Design Principles Are The Key To A Testing Pyramid

On the 3-day Codemanship TDD workshop, we discuss the testing pyramid and why optimising your test suites for fast execution is critical to achieving continuous delivery.



The goal with the pyramid is to be able to test as much of our software as possible as quickly as possible, so we can re-test and reassure ourselves that our code is shippable very frequently (i.e., continuously).

If our tests take hours to run, then we can only run them every few hours. Those are hours during which we don't know if the software's shippable.

So the bulk of our automated tests - the base of the testing pyramid - should be fast-running "unit" tests. This typically means tests that have no external dependencies. (That's my working definition of "unit" test, for the purposes of making the argument for excluding file systems, databases, web services and the like from the majority of our tests.)

The purpose of our automated tests is to detect when code is broken. Every time we change a line of code, it can break the software. Therefore we need a test to catch every potential broken LOC.

The key to a good testing pyramid is to minimise the tests that have external dependencies, and the key to that is minimising the amount of code that has external dependencies.

I explain in the workshop how our design principles help us achieve this - and three in particular:

* Single Responsibility
* Don't Repeat Yourself
* Dependency Inversion

Take the example of a module that has a method which:

1. Formats a SQL string using data from a business object
2. Connects to a database to execute that query
3. Unpacks the response (recordset or array) into a business object

To test any part of this logic, we must include a trip to the database. If we break it up into 3 methods, each with a distinct responsibility, then it becomes possible to test 1. and 3. without including 2. That's a third as many "integration" tests.
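A hypothetical sketch of that split (the names are illustrative) might look like this, with only the middle method needing a database in its tests:

using System.Data;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomerDao
{
    // 1. Formats the SQL - pure logic, unit-testable with no database
    //    (parameterised queries left out for brevity)
    public string FormatSelectSql(Customer customer)
    {
        return "SELECT Id, Name FROM Customer WHERE Id = " + customer.Id;
    }

    // 2. The only method that genuinely needs an "integration" test
    public object[] ExecuteQuery(IDbConnection connection, string sql)
    {
        using (var command = connection.CreateCommand())
        {
            command.CommandText = sql;
            using (var reader = command.ExecuteReader())
            {
                reader.Read();
                var row = new object[reader.FieldCount];
                reader.GetValues(row);
                return row;
            }
        }
    }

    // 3. Unpacks the response - pure logic again, unit-testable with a canned row
    public Customer Unpack(object[] row)
    {
        return new Customer { Id = (int)row[0], Name = (string)row[1] };
    }
}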

In a similar vein, imagine we have data access objects, each like our module above. Each can format a SQL string using an object's data - e.g., CustomerDAO, InvoiceDAO, OrderDAO. Each connects to the database to fetch and save that object type's data. Each knows how to unpack the database response into the corresponding object type.

There's repetition in this design: connecting to the database. If we consolidate that code into a single module, we again reduce the number of integration tests we need.
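Again as a hypothetical sketch, the shared connect-and-execute code might be pulled into a single gateway that each DAO delegates to:

using System.Data;

// The one class that needs "integration" tests: it owns the database trip.
public class DatabaseGateway
{
    private readonly IDbConnection connection;

    public DatabaseGateway(IDbConnection connection)
    {
        this.connection = connection;
    }

    public object[] FetchOne(string sql)
    {
        using (var command = connection.CreateCommand())
        {
            command.CommandText = sql;
            using (var reader = command.ExecuteReader())
            {
                reader.Read();
                var row = new object[reader.FieldCount];
                reader.GetValues(row);
                return row;
            }
        }
    }
}

// Each DAO keeps its formatting and unpacking logic and delegates the database trip.
public class InvoiceDao
{
    private readonly DatabaseGateway database;

    public InvoiceDao(DatabaseGateway database)
    {
        this.database = database;
    }

    public object[] FetchRawInvoice(int invoiceId)
    {
        return database.FetchOne("SELECT * FROM Invoice WHERE Id = " + invoiceId);
    }
}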

Finally, we have to consider the call stack in which database connections are being made. Consider this poor design for a video rental system:



When we examine the code, we see that the methods that have direct external dependencies are not swappable within the overall call stack.



We cannot test pricing a video rental without paying a visit to the external video ratings service. We cannot test rentals without trips to the database, either.

To exclude these external dependencies from a set of tests for Rental, we have to turn those dependencies upside-down (make them swappable by dependency injection, basically).
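A minimal sketch of what that inversion might look like, using hypothetical names rather than the actual workshop code:

// Rental now depends on abstractions; the real ratings service and database
// code are injected from outside, so tests can pass in stubs or mocks instead.
public interface IVideoRatingsService
{
    decimal GetRating(string videoTitle);
}

public interface IRentalsRepository
{
    void Save(Rental rental);
}

public class Rental
{
    private readonly IVideoRatingsService ratings;
    private readonly IRentalsRepository repository;

    public Rental(IVideoRatingsService ratings, IRentalsRepository repository)
    {
        this.ratings = ratings;
        this.repository = repository;
    }

    public decimal Price(string videoTitle)
    {
        // pricing logic is now unit-testable with a stubbed ratings service
        var rating = ratings.GetRating(videoTitle);
        return rating > 8.0m ? 4.99m : 2.99m;
    }

    public void Confirm()
    {
        // persistence can be mocked, so no database trip in the unit tests
        repository.Save(this);
    }
}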



This is often what people mean when they talk about "testable" code. In effect, it means there's enough "swappability" in our design to allow us to test the bulk of the logic by mocking or stubbing external dependencies. The win-win here is that we not only get a better-proportioned testing pyramid, we also get a more flexible design that can more readily accommodate change (e.g., getting video ratings from Rotten Tomatoes instead).


March 9, 2018

Learn TDD with Codemanship

S.O.L.I.D. C# - Online Training Event, Sat Apr 14th

Details of another upcoming 1-day live interactive training workshop for C# developers looking to take their design skills to the next level.

I'll be helping you get to grips with S.O.L.I.D. and much more besides with practical hands-on tutorials and exercises.

Places are limited. You can find out more and grab your place at https://www.eventbrite.co.uk/e/solid-c-tickets-44018827498


December 1, 2017

Learn TDD with Codemanship

Don't Succumb To "Facebook Envy". Solve The Problem In Front Of You

A trend that's been growing for some time now is what I call "Facebook envy". Dev teams working on bread-and-butter problems seem almost embarrassed not to be solving problems on the scale Facebook have to.

99.9% of developers are not working at this scale, and are never likely to. And yet I see a strange obsession with scale that too often distracts teams from more pressing problems.

I use the analogy of a rock band obsessing over how their songs can be arranged for a 90-piece orchestra for a performance at the massive O2 Arena in London, and failing to prepare for their upcoming gig in the bowling alley at the back of the local pub.

Of course, we hear the stories about tech start-ups who didn't prepare for greatness and discovered that their architecture didn't scale. We hear those stories precisely because of the pervasiveness of those businesses in our lives after the fact. Just like all the stories we read about how bands became mega-successful, because who wants to read about bands that didn't? History is written by the winners.

What we don't hear about is the other 999/1000 tech start-ups who did prepare for greatness and wasted their time and their money doing it.

Before a tech start-up needs to scale, it needs to survive. Surviving means solving the problems that are in front of you. By all means, keep one eye on the future - a sense of direction's important. But not both eyes.

It's a similar thing to cash flow. Sure, your product may be super-profitable eventually. But if you can't pay your staff and keep the lights on in the meantime, you won't be there to collect.

The best way to scale-proof your start-up is to solve today's problems in a way that doesn't block you from adapting to tomorrow's. This is why I work so hard to persuade teams to focus on the factors that make software and systems hard to change, instead of on trying to anticipate those changes at the start. Speculative design tends to make the end product far more complicated than it needed to be, and things rarely turn out the way you planned anyway.

Some common technologies are inherently scalable, too. Indeed, these days, most technology stacks are, even if it takes a bit more imagination to achieve it. Facebook are the best example of this. Who'd have thought, 12 years ago, that PHP and MySQL would scale to a billion users? Facebook solved the problems that were in front of them. They didn't adapt those technologies speculatively... just in case they ended up with a billion users.

If you use scalable technologies, design your architectures in such a way that they would be easy to partition if needed (i.e., separate concerns cleanly from the get-go), and - most importantly - deliver code that can be changed when new times require it, then you'll be able to solve today's tangible problems and keep the door open to tomorrow's intangible possibilities.



July 22, 2017

Learn TDD with Codemanship

Code Analysis for Dependency Inversion


As work continues on the next book and training course, I'm thinking about how we could analyse our code for adherence to the Dependency Inversion Principle (the "D" in S.O.L.I.D.)

The DIP states that "High-level modules should not depend upon low-level modules. Both should depend upon abstractions. Abstractions should not depend upon details, details should depend upon abstractions."

This is a roundabout way of saying dependencies should be swappable. The means by which we make them swappable is dependency injection (which is often confused with Dependency Inversion - understandably, as the two are very closely related).

Dependency injection is simply passing an object's collaborators in (e.g., through a constructor) instead of that object instantiating them itself. When we directly instantiate an object, we bind ourselves to its exact type. This makes it impossible to swap that collaborator with a different implementation without modifying the client code, making our design inflexible and difficult to adapt or extend.

In practice, what this means is that most of our objects are composed from the outside.



For example, in my Reading Ease calculator, the Program class - the entry point for this console app - creates all of the objects involved in doing the calculation and "plugs" them together via constructors.

I've used the analogy of Russian dolls to describe how we compose simpler collaborations into more complex collaborations (collaborations within collaborations). This means that the lowest-level objects in the call stack typically get created first.
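A hypothetical sketch of that kind of composition (illustrative names - not the actual Reading Ease code):

using System;

public class SyllableCounter
{
    // crude stand-in for a real syllable-counting algorithm
    public int Count(string word) { return Math.Max(1, word.Length / 3); }
}

public class ReadingEaseCalculator
{
    private readonly SyllableCounter syllables;

    public ReadingEaseCalculator(SyllableCounter syllables)
    {
        this.syllables = syllables;
    }

    // Flesch Reading Ease, assuming the text is a single sentence for brevity
    public double Calculate(string sentence)
    {
        var words = sentence.Split(' ');
        double totalSyllables = 0;
        foreach (var word in words) totalSyllables += syllables.Count(word);
        return 206.835 - (1.015 * words.Length) - (84.6 * totalSyllables / words.Length);
    }
}

public class Program
{
    public static void Main()
    {
        // composed from the outside: the innermost "doll" is created first,
        // then plugged in via the constructor of the next one out
        var counter = new SyllableCounter();
        var calculator = new ReadingEaseCalculator(counter);
        Console.WriteLine(calculator.Calculate("the cat sat on the mat"));
    }
}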

Inside those lower-level classes, there's no direct instantiation of collaborators.



So, when we analyse the dependencies, we should find that classes that have clients in our code - classes that are further down the call stack - don't directly instantiate their collaborators.

More simply, if things depend on you, then don't use new.

There are, of course, exceptions. Factories and Builders are designed to instantiate and hide the details. Integration code - e.g., opening database connections - is also designed to hide details. We can't very well pass our database connections into those, or we'd be spreading that knowledge. Typically what we're talking about here is dependencies on our own classes. And what a kerfuffle it would be to try to apply DIP to strings and ints and collections and other core library types all the time. Though, again, there are situations where that may be called for.

If I were measuring adherence to the Dependency Inversion Principle, then, I'd look at a class and ask "Do any other of my classes depend on this?" If the answer is "yes", then I'd check to see if it creates instances of any of my other classes. I might also check - and this would be language-dependent - whether its dependencies are on abstract types (abstract classes, interfaces).
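As a very rough sketch of how that check might be automated - using Mono.Cecil here purely as an illustration (this isn't NDepend's implementation, and the dependency heuristics are deliberately crude):

using System;
using System.Collections.Generic;
using System.Linq;
using Mono.Cecil;
using Mono.Cecil.Cil;

public class DipCheck
{
    public static void Main(string[] args)
    {
        var assembly = AssemblyDefinition.ReadAssembly(args[0]);
        var myTypes = assembly.MainModule.Types.Where(t => t.IsClass).ToList();

        foreach (var type in myTypes)
        {
            // "Do any other of my classes depend on this?"
            var hasClients = myTypes.Any(client => client != type && DependsOn(client, type));
            if (!hasClients) continue;

            // "...then check to see if it creates instances of any of my other classes."
            var newedUp = InstantiatedTypeNames(type)
                .Where(name => myTypes.Any(mine => mine != type && mine.FullName == name))
                .ToList();

            if (newedUp.Any())
                Console.WriteLine(type.Name + " has clients, but instantiates: "
                    + string.Join(", ", newedUp));
        }
    }

    // Crude check: does the client's IL reference the supplier type or its members?
    private static bool DependsOn(TypeDefinition client, TypeDefinition supplier)
    {
        return client.Methods.Where(m => m.HasBody)
            .SelectMany(m => m.Body.Instructions)
            .Select(i => i.Operand)
            .OfType<MemberReference>()
            .Any(r => r.FullName == supplier.FullName
                || (r.DeclaringType != null && r.DeclaringType.FullName == supplier.FullName));
    }

    // Types directly instantiated (newobj) inside the given class's methods
    private static IEnumerable<string> InstantiatedTypeNames(TypeDefinition type)
    {
        return type.Methods.Where(m => m.HasBody)
            .SelectMany(m => m.Body.Instructions)
            .Where(i => i.OpCode == OpCodes.Newobj)
            .Select(i => ((MethodReference)i.Operand).DeclaringType.FullName)
            .Distinct();
    }
}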


July 10, 2017

Learn TDD with Codemanship

Codemanship Bite-Sized - 2-Hour Training Workshops for Busy Teams



One thing that clients mention often is just how difficult it is to make time for team training. A 2 or 3-day course takes your team out of action for a big chunk of time, during which nothing's getting delivered.

For those teams that struggle to find time for training, I've created a spiffing menu of action-packed 2-hour code craft workshops that can be delivered any time from 8am to 8pm.

Choose from:

  • Test-Driven Development workshops

    • Introduction to TDD

    • Specification By Example/BDD

    • Stubs, Mocks & Dummies

    • Outside-In TDD


  • Refactoring workshops

    • Refactoring 101

    • Refactoring To Patterns


  • Design Principles workshops

    • Simple Design & Tell, Don’t Ask

    • S.O.L.I.D.

    • Clean Code Metrics




To find out more, visit http://www.codemanship.co.uk/bitesized.html



May 19, 2017

Learn TDD with Codemanship

20 Dev Metrics - 18. External Dependencies

18th in my series 20 Dev Metrics is External Dependencies.



If our code relies too much on other people's APIs, we can end up wasting a lot of time fixing things that are broken when the contracts change. (Anyone who's written code that consumes the Facebook API will probably know exactly what I mean.)

In an ideal world, APIs would remain backwards-compatible. But in the real world, where 3rd-party developers aren't as disciplined as we are, they change all the time. So our code has to keep changing to continue to work.

I would argue that, with the way our tools have evolved, it's too easy these days to add external dependencies to our software.

It helps to be aware of the burden we're creating as we suck in each new library or web service, lest we fall prey to the error of buying the whole Mercedes just for the cigarette lighter.

The simplest metric is just to count the number of dependencies. The more there are, the more unstable our code will become.
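As a starting point, the count itself is easy enough to automate. Here's a small sketch for .NET assemblies, counting referenced assemblies only - it says nothing yet about how much of our code touches them:

using System;
using System.Linq;
using System.Reflection;

public class ExternalDependencyCount
{
    public static void Main(string[] args)
    {
        // args[0] is the path to the assembly we want to measure
        var assembly = Assembly.LoadFrom(args[0]);
        var references = assembly.GetReferencedAssemblies();

        Console.WriteLine(assembly.GetName().Name + " references "
            + references.Length + " assemblies:");

        foreach (var reference in references.OrderBy(r => r.Name))
            Console.WriteLine("  " + reference.Name);
    }
}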

It's also worth knowing how much of our code has direct dependencies on external APIs. Maybe we only depend on JDBC, but if 50% of our code directly references JDBC interfaces, we still have a problem.

You should aim to have as little of your code as possible directly depend on 3rd-party APIs, and to rely on as few different APIs as you can to build the software you need.


(And, yes, I'm including GUI frameworks etc in my definition of "external dependencies")



May 5, 2017

Learn TDD with Codemanship

20 Dev Metrics - 15. Backwards Compatibility

Metric No. 15 in my 20 Dev Metrics series is short and sweet - Backwards Compatibility.

If you've heard of the Liskov Substitution Principle (the "L" in "SOLID"), which states that an instance of any class can be replaced with an instance of any of its subclasses... Well, let me introduce you to the Gorman Substitution Principle:

"A version of any API can be replaced with a later version"

Or, to put it more bluntly: thou shalt not break client shit that was working.

For a published component or service (reusable code with an API), run new releases against the tests for previous releases. How many releases back can you go before tests start to break?



This is a particular bugbear of mine; we're just a bit too change-happy with our APIs. So much so that I wonder how many billions of dollars are wasted every year fixing client code that didn't need to be broken.





May 4, 2017

Learn TDD with Codemanship

20 Dev Metrics - 14. Interface Specificity

The 14th in my series of 20 Dev Metrics is Interface Specificity, which measures the extent to which interfaces are made to be client or usage-specific. That is to say, the extent to which interfaces only include methods that specific clients need to use.

This helps us to observe the interface segregation principle (the "I" in "SOLID"), and reminds us that interfaces are for collaborating through, and therefore should be designed from the client's perspective.

Imagine we have a class Book, which has methods for getting a publication's ISBN and its rating. A class Library uses the ISBN to search for books, and a different class BookStats uses the rating to calculate statistics about the book.



The Library doesn't need to know about a book's rating, and BookStats doesn't need to know its ISBN. Generally speaking, we should seek to limit the knowledge classes have about other classes in the system, so we can limit the chances of them being broken by changes. So instead of binding both Library and BookStats to the same general Book class, we can split Book's interface and expose to each client only the methods it needs to use.
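As a hypothetical sketch of that split:

// Each client sees only the methods it actually uses.
public interface ISearchableBook
{
    string Isbn { get; }
}

public interface IRateableBook
{
    double Rating { get; }
}

public class Book : ISearchableBook, IRateableBook
{
    public string Isbn { get; private set; }
    public double Rating { get; private set; }

    public Book(string isbn, double rating)
    {
        Isbn = isbn;
        Rating = rating;
    }
}

public class Library
{
    // Library binds only to ISearchableBook, so its Interface Specificity is 100%
    public ISearchableBook FindByIsbn(ISearchableBook[] books, string isbn)
    {
        foreach (var book in books)
            if (book.Isbn == isbn) return book;
        return null;
    }
}

public class BookStats
{
    // BookStats binds only to IRateableBook
    public double AverageRating(IRateableBook[] books)
    {
        double total = 0;
        foreach (var book in books) total += book.Rating;
        return total / books.Length;
    }
}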



Interface Specificity is calculated thus: divide the number of methods used by a client class by the total number of methods exposed by the supplier type. If the supplier only exposes methods used by that client, then Interface Specificity is 100%. If the supplier has 4 methods, and the client only uses 2, then it's 50%. And so on.

An average of Interface Specificity across the software could serve as an indicator of how we're doing generally on this front. It would rarely reach 100%, but 80% or above would suggest we're probably doing okay.