July 22, 2017

Learn TDD with Codemanship

Code Analysis for Dependency Inversion


As work continues on the next book and training course, I'm thinking about how we could analyse our code for adherence to the Dependency Inversion Principle (the "D" in S.O.L.I.D.).

The DIP states that "High-level modules should not depend upon low-level modules. Both should depend upon abstractions. Abstractions should not depend upon details, details should depend upon abstractions."

This is a roundabout way of saying that dependencies should be swappable. The means by which we make them swappable is dependency injection (often confused with Dependency Inversion; the two are closely related, but not the same thing).

Dependency injection is simply passing an object's collaborators in (e.g., through a constructor) instead of having that object instantiate them itself. When we directly instantiate an object, we bind ourselves to its exact type. This makes it impossible to swap that collaborator for a different implementation without modifying the client code, making our design inflexible and difficult to adapt or extend.
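
To illustrate, here's a minimal sketch in Java (the class and interface names are my own, invented for the example):

    interface Exporter {
        void export(String report);
    }

    class CsvExporter implements Exporter {
        public void export(String report) { /* write the report as CSV */ }
    }

    // Without dependency injection: the class news up its own
    // collaborator, binding itself to the concrete CsvExporter type.
    class CoupledReportGenerator {
        private final Exporter exporter = new CsvExporter();
    }

    // With dependency injection: the collaborator is passed in through
    // the constructor, so whoever composes the object decides which
    // Exporter implementation gets used.
    class ReportGenerator {
        private final Exporter exporter;

        ReportGenerator(Exporter exporter) {
            this.exporter = exporter;
        }

        void generate() {
            exporter.export("...");
        }
    }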

In practice, what this means is that most of our objects are composed from the outside.



For example, in my Reading Ease calculator, the Program class - the entry point for this console app - creates all of the objects involved in doing the calculation and "plugs" them together via constructors.

I've used the analogy of Russian dolls to describe how we compose simpler collaborations into more complex collaborations (collaborations within collaborations). This means that the lowest-level objects in the call stack typically get created first.
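
Not the actual Reading Ease code, but a sketch of the shape of such a composition root (all names hypothetical):

    // The "Russian dolls": the innermost object is created first, then
    // plugged into the next one out via its constructor, and so on up
    // to the entry point.
    class SyllableCounter {
        int count(String text) { return text.split("[^aeiouy]+").length; } // crude stand-in
    }

    class ReadingEaseCalculator {
        private final SyllableCounter syllables;

        ReadingEaseCalculator(SyllableCounter syllables) {
            this.syllables = syllables;
        }

        double calculate(String text) {
            return 100.0 - syllables.count(text); // placeholder formula, for illustration only
        }
    }

    public class Program {
        public static void main(String[] args) {
            SyllableCounter syllables = new SyllableCounter(); // lowest level, created first
            ReadingEaseCalculator calculator = new ReadingEaseCalculator(syllables);
            System.out.println(calculator.calculate("The cat sat on the mat."));
        }
    }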

Inside those lower-level classes, there's no direct instantiation of collaborators.



So, when we analyse the dependencies, we should find that classes that have clients in our code - classes that are further down the call stack - don't directly instantiate their collaborators.

More simply, if things depend on you, then don't use new.

There are, of course, exceptions. Factories and Builders are designed to instantiate and hide the details. Integration code - e.g., opening database connections - is also designed to hide details. We can't very well pass our database connections into those, or we'd be spreading that knowledge. Typically what we're talking about here is dependencies on our own classes. And what a kerfuffle it would be to try to apply DIP to strings and ints and collections and other core library types all the time. Though, again, there are situations where that may be called for.

If I was measuring adherence to the Dependency Inversion Principle, then, I'd look at a class and ask "Do any of my other classes depend on this?" If the answer is "yes", then I'd check to see if it creates instances of any of my other classes. I might also check - and this would be language-dependent - whether those dependencies are on abstract types (abstract classes, interfaces).
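
As a sketch of the kind of check I mean - and this is one assumed way you might rig it up, using Java reflection - we can at least ask whether a class's constructor dependencies are abstractions. (Detecting direct instantiation would need source or bytecode analysis, which is beyond a quick sketch like this.)

    import java.lang.reflect.Constructor;
    import java.lang.reflect.Modifier;

    public class DipCheck {
        // "com.example" stands in for your own root package.
        static final String OUR_PACKAGE = "com.example";

        // Returns false if any constructor parameter is a concrete
        // dependency on one of our own classes.
        static boolean dependsOnAbstractions(Class<?> type) {
            for (Constructor<?> ctor : type.getDeclaredConstructors()) {
                for (Class<?> param : ctor.getParameterTypes()) {
                    boolean isAbstraction = param.isInterface()
                            || Modifier.isAbstract(param.getModifiers());
                    if (param.getName().startsWith(OUR_PACKAGE) && !isAbstraction) {
                        return false;
                    }
                }
            }
            return true;
        }
    }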


July 10, 2017

Learn TDD with Codemanship

Codemanship Bite-Sized - 2-Hour Training Workshops for Busy Teams



One thing that clients mention often is just how difficult it is to make time for team training. A 2 or 3-day course takes your team out of action for a big chunk of time, during which nothing's getting delivered.

For those teams that struggle to find time for training, I've created a spiffing menu of action-packed 2-hour code craft workshops that can be delivered any time from 8am to 8pm.

Choose from:

  • Test-Driven Development workshops

    • Introduction to TDD

    • Specification By Example/BDD

    • Stubs, Mocks & Dummies

    • Outside-In TDD


  • Refactoring workshops

    • Refactoring 101

    • Refactoring To Patterns


  • Design Principles workshops

    • Simple Design & Tell, Don’t Ask

    • S.O.L.I.D.

    • Clean Code Metrics




To find out more, visit http://www.codemanship.co.uk/bitesized.html



May 19, 2017

Learn TDD with Codemanship

20 Dev Metrics - 18. External Dependencies

The 18th in my series 20 Dev Metrics is External Dependencies.



If our code relies too much on other people's APIs, we can end up wasting a lot of time fixing things that are broken when the contracts change. (Anyone who's written code that consumes the Facebook API will probably know exactly what I mean.)

In an ideal world, APIs would remain backwards-compatible. But in the real world, where 3rd-party developers aren't as disciplined as we are, they change all the time. So our code has to keep changing to continue to work.

I would argue that, with the way our tools have evolved, it's too easy these days to add external dependencies to our software.

It helps to be aware of the burden we're creating as we suck in each new library or web service, lest we fall prey to the error of buying the whole Mercedes just for the cigarette lighter.

The simplest metric is just to count the number of dependencies. The more there are, the more unstable our code will become.

It's also worth knowing how much of our code has direct dependencies on external APIs. Maybe we only depend on JDBC, but if 50% of our code directly references JDBC interfaces, we still have a problem.

You should aim to have as little of your code as possible directly depend on 3rd-party APIs, and to use as few different APIs as you can get away with to build the software you need.
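
A sketch of the sort of tooling this implies - scanning Java source for import statements that are neither JDK nor our own packages (the source path and the "com.mycompany" prefix are placeholders; substitute your own):

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.*;
    import java.util.stream.*;

    // Counts distinct external imports, and how many files touch an
    // external API directly. Rough and ready, but enough to track trends.
    public class ExternalDependencyCount {
        public static void main(String[] args) throws IOException {
            List<Path> sources;
            try (Stream<Path> walk = Files.walk(Paths.get("src/main/java"))) {
                sources = walk.filter(p -> p.toString().endsWith(".java"))
                              .collect(Collectors.toList());
            }
            Set<String> externalImports = new TreeSet<>();
            int filesWithExternalDeps = 0;
            for (Path file : sources) {
                boolean external = false;
                for (String line : Files.readAllLines(file)) {
                    String stmt = line.trim();
                    if (stmt.startsWith("import ")
                            && !stmt.startsWith("import java.")
                            && !stmt.startsWith("import com.mycompany.")) {
                        externalImports.add(stmt.substring(7).replace(";", "").trim());
                        external = true;
                    }
                }
                if (external) filesWithExternalDeps++;
            }
            System.out.println("Distinct external imports: " + externalImports.size());
            System.out.println("Files touching them directly: "
                    + filesWithExternalDeps + "/" + sources.size());
        }
    }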


(And, yes, I'm including GUI frameworks etc in my definition of "external dependencies")



May 5, 2017

Learn TDD with Codemanship

20 Dev Metrics - 15. Backwards Compatibility

Metric No. 15 in my 20 Dev Metrics series is short and sweet - Backwards Compatibility.

If you've heard of the Liskov Substitution Principle (the "L" in "SOLID"), which states that an instance of any class can be replaced with an instance of any of its subclasses... Well, let me introduce you to the Gorman Substitution Principle:

"A version of any API can be replaced with a later version"

Or, to put it more bluntly: thou shalt not break client shit that was working.

For a published component or service (reusable code with an API), run new releases against the tests for previous releases. How many releases back can you go before tests start to break?
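
One way to set this up is to keep the old releases' tests in the suite as contract tests. A minimal sketch, assuming a hypothetical RentalPricer API whose 1.x behaviour we want to pin down, using JUnit 5:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Hypothetical API under test (stubbed here so the sketch stands alone).
    class RentalPricer {
        double price(String category, int days) {
            return "STANDARD".equals(category) ? 2.0 * days : 3.0 * days;
        }
    }

    // Written against release 1.x and kept in the suite: if release 2.0
    // fails this test, release 2.0 broke a working client.
    public class PricingContractV1Test {
        @Test
        public void standardRentalCostsTwoPoundsPerDay() {
            assertEquals(6.0, new RentalPricer().price("STANDARD", 3), 0.001);
        }
    }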



This is a particular bugbear of mine; we're just a bit too change-happy with our APIs. So much so that I wonder how many billions of dollars are wasted every year fixing client code that didn't need to be broken.





May 4, 2017

Learn TDD with Codemanship

20 Dev Metrics - 14. Interface Specificity

The 14th in my series of 20 Dev Metrics is Interface Specificity, which measures the extent to which interfaces are made to be client or usage-specific. That is to say, the extent to which interfaces only include methods that specific clients need to use.

This helps us to observe the interface segregation principle (the "I" in "SOLID"), and reminds us that interfaces are for collaborating through, and therefore should be designed from the client's perspective.

Imagine we have a class Book, which has methods for getting the ISBN of a publication, and the rating. A class Library uses the ISBN to search for books, and a different class BookStats uses the rating to calculate statistics about the book.



The Library doesn't need to know about a book's rating, and BookStats doesn't need to know its ISBN. Generally speaking, we should seek to limit the knowledge classes have about other classes in the system, so we can limit the chances of it being broken by changes. So instead of binding both Library and BookStats to the same general Book class, we can split Book's interface and expose to each client only the methods it needs to use.
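
A sketch of that split in Java (the interface names are my own):

    import java.util.List;

    // Book exposes two client-specific interfaces, so Library and
    // BookStats each see only the methods they actually use.
    interface Searchable {
        String getIsbn();
    }

    interface Rateable {
        int getRating();
    }

    class Book implements Searchable, Rateable {
        private final String isbn;
        private final int rating;

        Book(String isbn, int rating) {
            this.isbn = isbn;
            this.rating = rating;
        }

        public String getIsbn() { return isbn; }
        public int getRating() { return rating; }
    }

    class Library {
        // Binds only to Searchable - knows nothing about ratings.
        boolean hasIsbn(Searchable book, String isbn) {
            return book.getIsbn().equals(isbn);
        }
    }

    class BookStats {
        // Binds only to Rateable - knows nothing about ISBNs.
        double averageRating(List<Rateable> books) {
            return books.stream().mapToInt(Rateable::getRating).average().orElse(0);
        }
    }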



Interface Specificity is calculated thus: divide the number of methods used by a client class by the total number of methods exposed by the supplier type. If the supplier only exposes methods used by that client, then Interface Specificity is 100%. If the supplier has 4 methods, and the client only uses 2, then it's 50%. And so on.

An average of Interface Specificity across the software could serve as an indicator of how we're doing generally on this front. It would rarely reach 100%, but 80% or above would suggest we're probably doing okay.


May 3, 2017

Learn TDD with Codemanship

20 Dev Metrics - 13. Swappability of Dependencies

The 13th in my series 20 Dev Metrics is Swappability of Dependencies.

Swappability lies at the core of object oriented and component-based design, and so we should take a keen interest in how easy it would be to replace an object's collaborators without it having to change. For example, we might want to swap a data access object with a stub for testing, or swap a payment processing service when the customer is in a specific country.

Swappability as a general concept is pretty much universal, but differs in its implementation depending on the language. To make a dependency swappable in C++, we must do more than we would need to in, say, Ruby and other dynamically-typed languages.

I'll illustrate with a Java example.



Here we're depending directly on a static method of a class ImdbService to get information about a video the customer wants to rent. If we wanted to get that information from a different source (e.g., Amazon), there's no easy way to do it.
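
A reconstructed sketch of that "before" design (not the original code verbatim):

    class VideoInfo {
        private final boolean newRelease;
        VideoInfo(boolean newRelease) { this.newRelease = newRelease; }
        boolean isNewRelease() { return newRelease; }
    }

    class ImdbService {
        static VideoInfo fetchVideo(String videoId) {
            return new VideoInfo(false); // imagine a call to the IMDB API here
        }
    }

    // Pricer is welded to ImdbService by a static method call - there's
    // no seam where a different source of video information could be
    // swapped in.
    class Pricer {
        double price(String videoId) {
            VideoInfo info = ImdbService.fetchVideo(videoId); // static = zero swappability
            return info.isNewRelease() ? 3.95 : 1.95;
        }
    }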



In our refactored design, sketched below, we've made that dependency swappable in three steps:

1. We made the static method an instance method, so it can be overridden

2. We passed the instance into the constructor ("dependency injection"), so instantiation happens outside of Pricer. i.e., someone else decides what implementation to use

3. We extracted an interface for ultimate swappability ("dependency inversion"). Pricer can use any service that implements that interface.
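
Continuing the reconstructed sketch, the "after" design (again, not the original code verbatim):

    // fetchVideo() is now an instance method declared on an interface,
    // and the service is injected through Pricer's constructor - so any
    // VideoInfoService implementation can be swapped in.
    interface VideoInfoService {
        VideoInfo fetchVideo(String videoId);
    }

    class ImdbService implements VideoInfoService {
        public VideoInfo fetchVideo(String videoId) {
            return new VideoInfo(false); // imagine a call to the IMDB API here
        }
    }

    class Pricer {
        private final VideoInfoService videoInfo;

        Pricer(VideoInfoService videoInfo) { // dependency injection
            this.videoInfo = videoInfo;
        }

        double price(String videoId) {
            return videoInfo.fetchVideo(videoId).isNewRelease() ? 3.95 : 1.95;
        }
    }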

In dynamically-typed languages, we may not need an interface - technically speaking - but many programmers get into the habit of creating classes with empty methods to represent an interface, mostly because it makes more sense than extending an implementation (e.g., is an AmazonVideoService really a kind of ImdbService?).

In C++, we would absolutely need an interface, as we can only readily override methods declared as virtual. And other languages like Java are somewhere in between.

Measuring swappability in Java would be a matter of analysing references to other objects and determining where those references are instantiated. If they're instantiated inside the client class, then they're not swappable. If they're passed in as a method parameter, they're swappable - but only if all of the methods used are overrideable. Hence, binding to a pure interface gives ultimate swappability. And, of course, if static methods are used, then that's zero swappability.

How would I calculate swappability for a Java class?

I'd calculate swappability for each individual reference, and then divide the total for all of them by the maximum possible swappability.

If a reference is static, then it has 0% swappability.

If a reference isn't dependency-injected, it has 0% swappability.

If a reference is dependency-injected, its swappability will depend on which of its methods are being used:

a. If a method used is abstract, that counts as 100% swappable

b. If a method used has an implementation, but is overrideable, that counts as partially swappable - 50%

c. If a method used cannot be overridden, that counts as 0% swappable.

For each reference, swappability is the average swappability of methods used. For the class as a whole, swappability is the average swappability of references. And at a package or system level, it's the average swappability across all of the classes.
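
To make that concrete, a hypothetical worked example: suppose a class has two references. Reference A is used only via a static method, so it scores 0%. Reference B is dependency-injected, and two of its methods are used - one abstract (100%) and one concrete but overrideable (50%) - so Reference B scores the average, 75%. The class's swappability is the average across its references: (0% + 75%) / 2 = 37.5%.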

So, when Pricer uses the static ImdbService.fetchVideo(), it has zero swappability. When Pricer uses a dependency-injected VideoInfoService.fetchVideo(), it has 100% swappability because that method is abstract.

You'll no doubt be delighted to learn that there are, at present, no automated tools for calculating this metric in any language. So this is some tooling you would need to rig up yourself. For now, though, I find it a very useful conceptual tool for reasoning about swappability of dependencies.

A cruder approach would be to calculate what proportion of references are to interfaces, and from a tooling perspective this is much simpler, but arguably a bit of a blunt instrument... And very language-specific. For example, a field may be of an interface type, but if it's instantiated inside the constructor of that class, then it's not swappable.




April 28, 2017

Learn TDD with Codemanship

20 Dev Metrics - 11. Coupling

Number 11 in my series 20 Dev Metrics helps us to predict the potential impact of changing one part of our software on the rest of the software. Coupling is a measure of how interrelated code is, at various levels of organisation (classes, components, systems, services etc).

It's simply a matter of counting references in, say, one class to other classes (or features of other classes), or classes in one component to classes in other components. And so on.
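
For example (a hypothetical sketch in Java):

    class Catalogue {}
    class Book {}
    class Member {
        void borrow(Book book) {}
    }

    // Library references three other classes - Catalogue, Book and
    // Member - so its outgoing ("efferent") coupling is 3.
    class Library {
        private Catalogue catalogue;

        void lend(Book book, Member member) {
            member.borrow(book);
        }
    }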



When software is tightly coupled, changes can "ripple" out along the dependencies, breaking other parts of the code, like the ripples that cascade outwards when we throw a pebble into a pond. One of the goals of good modular software design is to localise those ripples and therefore minimise the impact of changes. So we aim for modules that are loosely coupled, and know as little about each other as possible.



Some people mistakenly believe this is an object oriented design principle. But it actually applies to modular software and systems of any kind. If we were writing our software in Pascal or COBOL, it would be just as true. However the technology allows us to modularise code, those modules need to be loosely coupled.

Many tools exist that can do this counting for us, thankfully.



December 21, 2016

Learn TDD with Codemanship

"Our Developers Don't Do Any Design". Yes They Do. They Have To.

A complaint I hear often from managers about their development teams is "they don't do any design".

This is a nonsense, of course. Designedness - is that a word? It is now - is a spectrum, with complete randomness at one end and zero randomness at the other. i.e., completely unintentional vs. nothing unintentional.

Working code is very much towards the zero randomness end of the spectrum. Code with no design wouldn't even compile, let alone kind of sort of work.



To look at it another way, working code is a tiny, tiny subset of possible combinations of alphanumeric characters. The probability of accidentally stumbling on a sequence of random characters that makes working code is so vanishingly remote, we can dismiss it as obvious silliness.



Arguably, software design is a process of iteratively whittling down the possibilities until we arrive at something that ticks the right boxes, of which there will be so very many if the resulting software is to do what the customer wants.

It's clear, though, that this tiny set of possible working code configurations contains more than one choice. And when you say "they don't do any design", what you really mean is "I don't like the design that they've chosen". They've done lots and lots of design, making hundreds and thousands (and possibly millions) of design choices. You would just prefer they made different design choices.

In which case, you need to more clearly define the properties of this tiny subset that would satisfy your criteria. Do you need the modules in their design to be more loosely coupled, for example? If so, then add that to the list of requirements; the tests their design needs to pass.

Finally, in some cases, when managers claim their development teams "don't do any design", what they really mean is they don't follow a prescribed design process, producing the requisite artefacts as proof that design was done.

The finished product is the ultimate design artefact. If you want to know what they built, look at the code. The design is in there. And if you can't understand the code, maybe you should let someone who can worry about design.




September 30, 2016

Learn TDD with Codemanship

Software Development Doesn't Scale. Dev Culture Does

For a couple of decades now, the Standish Group have published an annual "CHAOS" report, detailing the results of surveys taken by IT managers about the outcomes of IT projects.

One clear trend that emerged - and remains as true today as in 1995 - is that the bigger they are, the harder they fall. The risk of an IT project failing outright rises rapidly with project size and cost. When they reach a certain size - and it's much smaller than you may think - failure is almost guaranteed.

The reality of software development is that, once we get above a dozen or so people working for a year or two on the same product or system, the prognosis does not look good at all.

This is chiefly because - and how many times do we need to say this, folks? - software development does not scale.

If that's true, though, how do big software products come into existence?

The answer lies in city planning. A city is made up of hundreds of thousands of buildings, on thousands of streets, with miles of sewers and underground railways and electrical cabling and lawns and trees and shops and traffic lights and etc etc.

How do such massively complex structures happen? Is a city planned and constructed by a single massive team of architects and builders as a single project with a single set of goals?

No, obviously not. Rome was not built in a day. By the same guys. Reporting to one boss. With a single plan.

Cities appear over many, many decades. The suburbs of London were once, not all that long ago, villages outside London. An organic process of development, undertaken by hundreds of thousands of people and organisations all working towards their own unique goals, and co-operating or compromising when goals aligned or conflicted, produced the sprawling metropolis that is now London.

Trillions of pounds have been spent creating the London of today. Most of that investment is nowhere to be seen any more, having been knocked down (or bombed) and built over many times. You could probably create a "London" for a fraction of the cost in a fraction of the time, if it were possible to coordinate such a feat.

And that's my point: it simply isn't possible to coordinate such a feat, not on that scale. An office complex? Sure. A housing estate? Why not? A new rail line with new train stations running across North London? With a few tens of billions and a few decades, it's do-able.

But those big projects exist right at the edge of what is manageable. They invariably go way over budget, and are completed late. If they were much bigger, they'd fail altogether.

Cities are a product of many lifetimes, working towards many goals, with no single clear end goal, and with massive inefficiency.

And yet, somehow, London mostly looks like London. Toronto mostly looks like Toronto. European cities mostly look like European cities. Russian cities mostly look like Russian cities. It all just sort of, kind of, works. A weird conceptual cohesion emerges from the near-chaos.

This is the product of culture. Yes, London has hundreds of thousands of buildings, designed by thousands of people. But those people didn't work in bubbles, completely oblivious to each others' work. They could look at other buildings. Read about their design and their designers. Learn a thousand and one lessons about what worked and what didn't without having to repeat the mistakes that earned that knowledge.

And knowledge is weightless. It travels fast and travels cheaply. Hence, St Petersburg looks like the palaces of Versailles, and that area above Leicester Square looks like 19th century Hong Kong.

Tens of thousands of architects and builders, guided by organising principles plucked from the experience of others who came before.

Likewise, with big software products. Many teams, with many goals, building on top of each other, cooperating when it makes sense, compromising when there are conflicts. But, essentially, each team is doing their own thing for their own reasons. Any attempt to standardise, or impose order from above, fails. Every. Single. Time.

Better to focus on scaling up developer culture, which - those of us who participate in the global dev community can attest - scales beautifully. We have no common goal, no shared boss; but, somehow, I find myself working with the same tools, applying the same practices and principles, as thousands of developers around the world, most of whom I've never met.

Instead of having an overriding architecture for your large system, try to spread shared organising principles, like Simple Design and S.O.L.I.D. It's not a coincidence that hundreds of thousands of developers use dependency injection to make external dependencies swappable. We visit the same websites, watch the same screencasts, read the same books. On a 10,000-person programme, your architect isn't the one who sits in the Big Chair at head office drawing UML diagrams. Your architect is Uncle Bob. Or Michael Feathers. Or Rebecca Wirfs-Brock. Or Barbara Liskov. Or Steve Freeman. Or even me (a shocking thought!)

But it's true. I probably have more influence over the design of some systems than the people getting paid to design it. And all I did was blog, or record a screencast, or speak at a conference. Culture - in this web age - spreads fast, and scales rapidly. You, too, can use these tools to build bridges between teams, share ideas, and exert tacit influence. You just have to let go of having explicit top-down control.

And that's how you scale software development.




July 17, 2016

Learn TDD with Codemanship

Oodles of Free Legacy UML Tutorials

See how we used to do things back in Olden Times by visiting the legacy UML tutorials section of the Codemanship website (the content from the highly-popular-with-your-granddad-back-in-the-day parlezuml.com).



I maintain that:

a. Visual modelling & UML are still useful and probably due for a comeback, and

b. Visual modelling and Agile Software Development can work well together when applied sparingly and sensibly

Check it out.