December 21, 2016

Learn TDD with Codemanship

"Our Developers Don't Do Any Design". Yes They Do. They Have To.

A complaint I hear often from managers about their development teams is "they don't do any design".

This is a nonsense, of course. Designedness - is that a word? It is now - is a spectrum, with complete randomness at one end and zero randomness at the other - i.e., completely unintentional vs. nothing unintentional.

Working code is very much towards the zero randomness end of the spectrum. Code with no design wouldn't even compile, let alone kind of sort of work.



To look at it another way, working code is a tiny, tiny subset of possible combinations of alphanumeric characters. The probability of accidentally stumbling on a sequence of random characters that makes working code is so vanishingly remote, we can dismiss it as obvious silliness.



Arguably, software design is a process of iteratively whittling down the possibilities until we arrive at something that ticks the right boxes, of which there will be so very many if the resulting software is to do what the customer wants.

It's clear, though, that this tiny set of possible working code configurations contains more than one choice. And when you say "they don't do any design", what you really mean is "I don't like the design that they've chosen". They've done lots and lots of design, making hundreds of thousands (possibly millions) of design choices. You would just prefer they made different design choices.

In which case, you need to more clearly define the properties of this tiny subset that would satisfy your criteria. Should they require modules in their design to be more loosely coupled, for example? If so, then add that to the list of requirements; the tests their design needs to pass.

Finally, in some cases, when managers claim their development teams "don't do any design", what they really mean is they don't follow a prescribed design process, producing the requisite artefacts as proof that design was done.

The finished product is the ultimate design artefact. If you want to know what they built, look at the code. The design is in there. And if you can't understand the code, maybe you should let someone who can worry about design.




September 30, 2016

Learn TDD with Codemanship

Software Development Doesn't Scale. Dev Culture Does

For a couple of decades now, the Standish Group have published an annual "CHAOS" report, detailing the results of surveys taken by IT managers about the outcomes of IT projects.

One clear trend that emerged - and remains as true today as in 1995 - is that the bigger they are, the harder they fall. The risk of an IT project failing outright rises rapidly with project size and cost. When they reach a certain size - and it's much smaller than you may think - failure is almost guaranteed.

The reality of software development is that, once we get above a dozen or so people working for a year or two on the same product or system, the prognosis does not look good at all.

This is chiefly because - and how many times do we need to say this, folks? - software development does not scale.

If that's true, though, how do big software products come into existence?

The answer lies in city planning. A city is made up of hundreds of thousands of buildings, on thousands of streets, with miles of sewers and underground railways and electrical cabling and lawns and trees and shops and traffic lights and so on.

How do such massively complex structures happen? Is a city planned and constructed by a single massive team of architects and builders as a single project with a single set of goals?

No, obviously not. Rome was not built in a day. By the same guys. Reporting to one boss. With a single plan.

Cities appear over many, many decades. The suburbs of London were once, not all that long ago, villages outside London. An organic process of development, undertaken by hundreds of thousands of people and organisations all working towards their own unique goals, and co-operating or compromising when goals aligned or conflicted, produced the sprawling metropolis that is now London.

Trillions of pounds have been spent creating the London of today. Most of that investment is nowhere to be seen any more, having been knocked down (or bombed) and built over many times. You could probably create a "London" for a fraction of the cost in a fraction of the time, if it were possible to coordinate such a feat.

And that's my point: it simply isn't possible to coordinate such a feat, not on that scale. An office complex? Sure. A housing estate? Why not? A new rail line with new train stations running across North London? With a few tens of billions and a few decades, it's do-able.

But those big projects exist right at the edge of what is manageable. They invariably go way over budget, and are completed late. If they were much bigger, they'd fail altogether.

Cities are a product of many lifetimes, working towards many goals, with no single clear end goal, and with massive inefficiency.

And yet, somehow, London mostly looks like London. Toronto mostly looks like Toronto. European cities mostly look like European cities. Russian cities mostly look like Russian cities. It all just sort of, kind of, works. A weird conceptual cohesion emerges from the near-chaos.

This is the product of culture. Yes, London has hundreds of thousands of buildings, designed by thousands of people. But those people didn't work in bubbles, completely oblivious to each other's work. They could look at other buildings. Read about their design and their designers. Learn a thousand and one lessons about what worked and what didn't without having to repeat the mistakes that earned that knowledge.

And knowledge is weightless. It travels fast and travels cheaply. Hence, St Petersburg looks like the palaces of Versailles, and that area above Leicester Square looks like 19th century Hong Kong.

Tens of thousands of architects and builders, guided by organising principles plucked from the experience of others who came before.

Likewise, with big software products. Many teams, with many goals, building on top of each other, cooperating when it makes sense, compromising when there are conflicts. But, essentially, each team is doing their own thing for their own reasons. Any attempt to standardise, or impose order from above, fails. Every. Single. Time.

Better to focus on scaling up developer culture, which - those of us who participate in the global dev community can attest - scales beautifully. We have no common goal, no shared boss; but, somehow, I find myself working with the same tools, applying the same practices and principles, as thousands of developers around the world, most of whom I've never met.

Instead of having an overriding architecture for your large system, try to spread shared organising principles, like Simple Design and S.O.L.I.D. It's not a coincidence that hundreds of thousands of developers use dependency injection to make external dependencies swappable. We visit the same websites, watch the same screencasts, read the same books. On a 10,000-person programme, your architect isn't the one who sits in the Big Chair at head office drawing UML diagrams. Your architect is Uncle Bob. Or Michael Feathers. Or Rebecca Wirfs-Brock. Or Barbara Liskov. Or Steve Freeman. Or even me (a shocking thought!)

But it's true. I probably have more influence over the design of some systems than the people getting paid to design them. And all I did was blog, or record a screencast, or speak at a conference. Culture - in this web age - spreads fast, and scales rapidly. You, too, can use these tools to build bridges between teams, share ideas, and exert tacit influence. You just have to let go of having explicit top-down control.

And that's how you scale software development.




July 17, 2016

Learn TDD with Codemanship

Oodles of Free Legacy UML Tutorials

See how we used to do things back in Olden Times by visiting the legacy UML tutorials section of the Codemanship website (the content from the highly-popular-with-your-granddad-back-in-the-day parlezuml.com).



I maintain that:

a. Visual modelling & UML is still useful and probably due for a comeback, and

b. Visual modelling and Agile Software Development can work well together when applied sparingly and sensibly

Check it out.






April 15, 2016

Learn TDD with Codemanship

Compositional Coverage

A while back, I blogged about how the real goal of OO design principles is composability of software - the ability to wire together different implementations of the same abstractions to make our code do different stuff (or the same stuff, differently).

I threw in an example of an Application that could be composed of different combinations of database, external information service, GUI and reporting output.



This example design offers us 81 unique possible combinations of Database, Stock Data, View and Output for our application. e.g., A Web GUI with an Oracle database, getting stock data from Reuters and writing reports to Excel files.

A few people who discussed the post with me had concerns, though. Typically, in software, more combinations means more ways for our software to be wrong. And they're quite right. How do we assure ourselves that every one of the possible combinations of components will work as a complete whole?

A way to get that assurance would be to test all of the combinations. Laborious, potentially. Who wants to write 81 integration tests? Not me, that's for sure.

Thankfully, parameterised testing, with an extra combinatorial twist, can come to the rescue. Here's a simple "smoke test" for our theoretical design above:
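Something like this, sketched with JUnit 4's Parameterized runner (the component classes - OracleDb, Reuters, WebGui, ExcelOutput and friends - are illustrative stand-ins for the design above, and here the 81 combinations are generated by hand):

    import java.util.ArrayList;
    import java.util.Collection;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    import static org.junit.Assert.assertTrue;

    @RunWith(Parameterized.class)
    public class ApplicationSmokeTest {

        @Parameters(name = "{0}/{1}/{2}/{3}")
        public static Collection<Object[]> combinations() {
            // The full cartesian product: 3 x 3 x 3 x 3 = 81 combinations
            Collection<Object[]> params = new ArrayList<>();
            for (Database db : new Database[] { new OracleDb(), new MySqlDb(), new Neo4jDb() })
                for (StockData stock : new StockData[] { new Reuters(), new Bloomberg(), new YahooFinance() })
                    for (View view : new View[] { new WebGui(), new WindowsGui(), new CommandLine() })
                        for (Output output : new Output[] { new ExcelOutput(), new CsvOutput(), new PdfOutput() })
                            params.add(new Object[] { db, stock, view, output });
            return params;
        }

        private final Application app;

        public ApplicationSmokeTest(Database db, StockData stock, View view, Output output) {
            // Each combination is plugged into the Application through its constructor
            this.app = new Application(db, stock, view, output);
        }

        @Test
        public void tradePriceIsCalculated() {
            // Turn the key in the ignition: does the composed whole go?
            assertTrue(app.calculateTradePrice("MSFT", 100) > 0);
        }
    }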



This parameterised test accepts each of the different kinds of component as parameters, which it plugs into the Application through the constructor. I then use a testing utility I knocked up to generate the 81 possible combinations (the code for which can be found here - provided with no warranty, as it was just a spike).

When I run the test, it checks the trade price calculation using every combination of components. Think of it like that final test we might do for a car after we've checked that all the individual components work correctly - when we bolt them all together and turn the key in the ignition, does it go?




The term I'm using for how many possible combinations of components we've tested is compositional coverage. In this example, I've achieved 100% compositional coverage, as every possible combination is tested.

Of course, this is a dummy example. The components don't really do anything. But I've simulated the possible cost of integration tests by building in a time delay, to illustrate that these ain't your usual fast-running unit tests. In our testing pyramid, these kinds of tests would be near the top, just below acceptance and system tests. We wouldn't run them after, say, every refactoring, because they'd be too slow. But we might run them a few times a day.

More complex architectures may generate thousands of possible combinations of components, and lead to integration tests (or "composition tests") that take hours to run. In these situations, we could probably buy ourselves pretty decent compositional coverage by doing pairwise combinations (and, yes, the testing utility can do that, too).

Changing that test to use pairwise combinations reduces the number of tests run to just 9.
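Why nine? With four parameters of three values each, an orthogonal array over GF(3) covers every possible pair of parameter values in just nine rows. Here's a hand-rolled illustration of the construction (for demonstration only - this isn't the testing utility):

    public class PairwiseDemo {
        public static void main(String[] args) {
            // Rows of an orthogonal array over GF(3): columns a, b, a+b, a+2b (mod 3).
            // Take any two columns together and you'll find all 9 value pairs,
            // so these 9 rows give 100% pairwise coverage of 4 parameters x 3 values.
            for (int a = 0; a < 3; a++) {
                for (int b = 0; b < 3; b++) {
                    int c = (a + b) % 3;
                    int d = (a + 2 * b) % 3;
                    System.out.printf("db=%d stock=%d view=%d output=%d%n", a, b, c, d);
                }
            }
        }
    }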





April 11, 2016

Learn TDD with Codemanship

Intensive S.O.L.I.D. - London, Sat June 4th



Just a quick note to mention that I'll be running a Codemanship Intensive S.O.L.I.D. training workshop in London on Saturday June 4th at the amazingly low price of £59 for a jam-packed day of OO and refactoring goodness.



April 7, 2016

Learn TDD with Codemanship

Exceptions Are Not Events

Something I catch myself doing - and should really know better - is committing the sin of using exceptions as events in my code.

For example, I recently threw together a basic Java unit testing framework to test how long it might take as a mini-project for SC2016 (we're still keen to hear your ideas for those, BTW). Look at how the assertion mechanism works:
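In outline, the mechanism looks something like this (a simplified sketch - class and method names are illustrative):

    // The failure signal: an unchecked exception with an optional message
    class TestFailureException extends RuntimeException {
        TestFailureException(String message) {
            super(message);
        }
    }

    // Assertions "report" failure by throwing - exceptions used as events
    class Assert {
        static void assertTrue(boolean condition, String message) {
            if (!condition) throw new TestFailureException(message);
        }

        static void assertTrue(boolean condition) {
            assertTrue(condition, "Assertion failed");
        }
    }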



When an assertion fails, it throws a TestFailureException with an optional message, which is caught by the class responsible for invoking tests dynamically through reflection.
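The invoking side, again in sketch form:

    import java.lang.reflect.Method;

    class TestRunner {
        // Invokes every test method of a fixture class reflectively; any
        // exception thrown - TestFailureException or otherwise - is a failure
        static void run(Class<?> fixture) throws Exception {
            for (Method test : fixture.getDeclaredMethods()) {
                Object instance = fixture.getDeclaredConstructor().newInstance();
                try {
                    test.invoke(instance);
                    System.out.println(test.getName() + ": PASSED");
                } catch (Exception e) {
                    System.out.println(test.getName() + ": FAILED - " + e.getCause());
                }
            }
        }
    }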



If running the test causes an exception - of any kind - to be thrown, then the test has failed, and the reasons for that failure are reported.

Now, the reason I used this mechanism is because I was lazy. How do I wire up callbacks on an implicit method invocation, without making the listener a parameter of every test method or test fixture constructor? "I know. I'll just throw an exception when it fails."

In actual fact, this is how some xUnit implementations really do it.

But let's be clear: this isn't what exceptions are for.

An exception should be thrown when something has gone wrong with our program. Tests failing isn't "something going wrong". Tests can fail: that's part of the normal functioning of the testing tool. A test failing is an event, not an exception.

Had I wanted to invest a little bit more time, I could have used the Observer pattern (and, indeed, did exactly that when I redid this exercise in C#).
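In sketch form (the interface shape is illustrative):

    // Failures become events pushed to a registered listener,
    // instead of exceptions thrown up the call stack
    interface TestListener {
        void testPassed(String testName);
        void testFailed(String testName, String message);
    }

    class ConsoleListener implements TestListener {
        public void testPassed(String testName) {
            System.out.println(testName + ": PASSED");
        }

        public void testFailed(String testName, String message) {
            System.out.println(testName + ": FAILED - " + message);
        }
    }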

The rule of thumb for exceptions should be as follows:

If it's part of the normal functioning of your code, then it's not an exception.

If your ATM user interface allows cardholders to select an amount to withdraw that's not actually available, you don't handle that selection by throwing an exception. Your application allows that input, and should handle it meaningfully.

It pisses me off no end when edge cases like these are handled with exceptions. Displaying an error message when the user has done something your design let them do is poor UX etiquette. If need be, check that input, and if it's not valid, raise an event that has a proper event handler, all part of the UX flow.

Better still, don't offer the user the opportunity to perform invalid actions or choose invalid options. If £10 notes aren't available, don't offer them the choice of withdrawing £10, £30 or £50. An InvalidWithdrawalAmount exception is just wasting their time.
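In code, that design decision can be as simple as filtering the menu - a made-up sketch, relying on the fact that the amounts needing a £10 note are exactly those that aren't multiples of £20:

    import java.util.List;
    import java.util.stream.Collectors;

    class WithdrawalMenu {
        // Only offer amounts the machine can actually dispense -
        // no InvalidWithdrawalAmount exception required
        static List<Integer> amountsToOffer(List<Integer> standardAmounts,
                                            boolean tensAvailable) {
            return standardAmounts.stream()
                    .filter(amount -> tensAvailable || amount % 20 == 0)
                    .collect(Collectors.toList());
        }
    }

With no £10 notes available, the usual £10/£20/£30/£40/£50 menu quietly becomes £20 and £40 - and there's nothing for the cardholder to get wrong.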

One of the best summaries of what exceptions are really for, and how our code should properly handle them, is Bertrand Meyer's Disciplined Exceptions. Read it with your eyes, and then learn it with your mind.

I'm off to do some much-needed refactoring...



March 14, 2016

Learn TDD with Codemanship

Composability - The Real Goal of S.O.L.I.D.

A brief note on the topic of composability and OO design principles...

When we look at our S.O.L.I.D. principles, we can see - if we choose to - that their ultimate goal is to give us the ability to change and adapt our software by re-composing its parts, swapping out one object with another that, from the outside, looks the same and behaves in a way that's compatible with the original.

Take this high level design for a stock trading application:



By splitting the work up into separate modules, each of which has a specific responsibility, and then wiring the different modules into the application using dependency injection - binding the Application class to abstractions - we buy ourselves the ability to achieve many combinations of app logic with different user interfaces, different reporting outputs, different databases for persistence and using different external sources of stock price data.



e.g., we could have a Windows UI, outputting reports to Excel, storing the data in a Neo4J database, and getting our stock prices from Bloomberg. There are actually 3x3x3x3 possible combinations of these different modules, all - if we've observed the Liskov Substitution Principle - totally valid and working.

So, from 13 implementation modules (including the Application class), we can squeeze out 81 possible systems. All we have to do is write a line or two of code to wire the components together:
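Something like this, perhaps (the class names are illustrative, based on the examples in this post):

    public class Main {
        public static void main(String[] args) {
            // One of the 81 possible systems, composed by constructor injection:
            // Windows UI, Excel reports written via JDBC, Neo4J persistence,
            // Bloomberg stock data
            Application app = new Application(
                    new WindowsUi(),
                    new ExcelOutput(new JdbcWriter()),
                    new Neo4jDatabase(),
                    new BloombergStockData());
            app.run();
        }
    }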



Notice, also, how this pattern is repeated - Mandelbrot-style - in the implementation of the Excel output. It, too, has options: here we choose to use the JdbcWriter to actually write the Excel files. A different writer implementation might automate MS Excel itself through its API. And so on.

Conversely, if we'd designed our Application like this:



Then the system would offer us only one possible easily accessible configuration. We'd need a different Application class for every desired configuration.

Doing it the S.O.L.I.D. way, our configured system is wired together, injecting objects into objects that in turn get injected into other objects - rather in the style of Russian dolls - from the top of the call stack. A web version of this system might wire together a different combination in the start-up script.

And adding new implementation modules (e.g., a Crystal Reports output) is a doddle, because the rest of the system just sees that Output interface and expects generic Output behaviour.
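For example (the Output interface here is illustrative):

    // A stand-in domain type for whatever gets reported
    class Report {
    }

    interface Output {
        void write(Report report);
    }

    // A new implementation slots into the existing abstraction;
    // nothing else in the system needs to change
    class CrystalReportsOutput implements Output {
        @Override
        public void write(Report report) {
            // render the report via Crystal Reports here...
        }
    }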

I thought I should mention it, because composability is all too often overlooked as the real goal of S.O.L.I.D.




February 16, 2016

Learn TDD with Codemanship

Software Craftsmanship 2016 - London workshop open for registration

I'm still working on the official international web page (with more workshops in your area TBA), but if you're in the London area, you can now register for the primary workshop happening in South Wimbledon. It's happening on Saturday May 14th, so no need to ask for a day off.

It's going to be a hell of a thing. No sessions, just one big breakout area full of passionate coders doing what passionate coders do best (coding passionately!)

It's completely free, and paid for by my company Codemanship. So if you want to show your appreciation, beg the boss for some training ;)

The day will be followed by the Software Craftsman's Ball at a local hostelry. And, yes, that is drinking, in case you were wondering.




August 10, 2015

Learn TDD with Codemanship

A Hierarchy Of Software Design Needs

Design is not a binary proposition. There is no clear dividing line between a good software design and bad software design, and even the best designs are compromises that seek to balance competing forces like performance, readability, testability, reuse and so on.

When I refactor a design, it can sometimes introduce side-effects - namely, other code smells - that I deem less bad than what was there before. For example, maybe I have a business object that renders itself as HTML - bad, bad, bad! Right?

The HTML format is likely to change more often than the object's data schema, and we might want to render it to other formats. So it makes sense to split out the rendering part into a separate object. But in doing so, we create "feature envy" - an unhealthily high coupling between our renderer and the business object, so the renderer can get the data it needs.

I consider the new feature envy less bad than the dual responsibility, so I live with it.
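A simplified sketch of that trade-off, with illustrative names:

    // After the split: rendering lives in its own class, but the renderer
    // must now "envy" the business object's data through getters
    class Invoice {
        private final String customerName;
        private final double total;

        Invoice(String customerName, double total) {
            this.customerName = customerName;
            this.total = total;
        }

        String getCustomerName() { return customerName; }
        double getTotal() { return total; }
    }

    class HtmlInvoiceRenderer {
        String render(Invoice invoice) {
            return "<p>" + invoice.getCustomerName()
                    + ": £" + invoice.getTotal() + "</p>";
        }
    }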

In fact, there tends to be a hierarchy of needs in software design, where one design issue will take precedence over another. It's useful, when starting out, to know what that hierarchy of needs is.

Now, the needs may differ depending on the requirements of our design - e.g., on a small-memory device, memory footprint matters way more than it does for desktop software usually - but there is a fairly consistent pattern that appears over and over in the majority of applications.

There are, of course, a universe of qualities we may need to balance. But let's deal with the top six to get you thinking:

1. The Code Must Work

Doesn't matter how good you think the design is if it doesn't do what the customer needs. Good design always comes back to "yes, but does it pass the acceptance tests?" If it doesn't, it's de facto a bad design, regardless.

2. The Code Must Be Easy To Understand

By far the biggest factor in the maintainability of code is whether or not programmers can understand it. I will gladly sacrifice less vital design goals to make code more readable. Put more effort into this. And then put even more effort into it. However much attention you're paying to readability, it's almost certainly not enough. C'mon, you've read code. You know it's true.

But if the code is totally readable, but doesn't work, then spend more time on 1.

3. The Code Must Be As Simple As We Can Make It

Less code generally means a lower cost of maintenance. But beware; you can take simplicity too far. I've seen some very compact code that was almost intractable to human eyes. Readability trumps simplicity. And, yes, functional programmers, I'm particularly looking at you.

4. The Code Must Not Repeat Itself

The opposite of duplication is reuse. Yes it is: don't argue!

Duplication in our code can often give us useful clues about generalisations and abstractions that may be lurking in there that need bringing out through refactoring. That's why "removing duplication" is a particular focus of the refactoring step in Test-driven Development.

Having said that, code can get too abstract and too general at the expense of readability. Not everything has to eventually turn into the Interpreter pattern, and the goal of most projects isn't to develop yet another MVC framework.

In the Refuctoring Challenge we do on the TDD workshops, over-abstracting often proves to be a sure-fire way of making code harder to change.

5. Code Should Tell, Not Ask

"Tell, Don't Ask" is a core pillar of good modular -notice I didn't say "object oriented" - code. Another way of framing it is to say "put the work where the knowledge is". That way, we end up with modules where more dependencies are contained and fewer dependencies are shared between modules. So if a module knows the customer's date of birth, it should be responsible for doing the work of calculating the customer's current age. That way, other modules don't have to ask for the date of birth to do that calculation, and modules know a little bit less about each other.

It goes by many names: "encapsulation", "information hiding" etc. But the bottom line is that modules should interact with each other as little as possible. This leads to modules that are more cohesive and loosely coupled, so when we make a change to one, it's less likely to affect the others.

But it's not always possible, and I've seen some awful fudges when programmers apply Tell, Don't Ask at the expense of higher needs like simplicity and readability. Remember simply this: sometimes the best way is to use a getter.

6. Code Should Be S.O.L.I.D.

You may be surprised to hear that I put OO design principles so far down my hierarchy of needs. But that's partly because I'm an old programmer, and can vaguely recall writing well-designed applications in non-OO languages. "Tell, Don't Ask", for example, is as do-able in FORTRAN as it is in Smalltalk.

Don't believe me? Then read the chapter in Bertrand Meyer's Object Oriented Software Construction that deals with writing OO code in non-OO languages.

From my own experiments, I've learned that coupling and cohesion have the bigger impact on the cost of changing code. A secondary factor is substitutability of dependencies - the ability to insert a new implementation in the slot of an old one without affecting the client code. That's mostly what S.O.L.I.D. is all about.

This is the stuff that we can really only do in OO languages that directly support polymorphism. And it's important, for sure. But not as important as coupling and cohesion, lack of duplication, simplicity, readability and whether or not the code actually works.

Luckily, apart from the "S" in S.O.L.I.D. (Single Responsibility), the O.L.I.D. is fairly orthogonal to these other concerns. We don't need to trade off between substitutability and Tell, Don't Ask, for example. They're quite compatible, as are the other design needs - if you do it right.

In this sense, the trade-off is more about how much time I devote to thinking about S.O.L.I.D. compared to other, more pressing concerns. Think about it: yes. Obsess about it: no.


Like I said, there are many, many more things that concern us in our designs - and they vary depending on the kind of software we're creating - but I tend to find these 6 are usually at the top of the hierarchy.

So... What's your hierarchy of design needs?









April 25, 2015

Learn TDD with Codemanship

Non-Functional Tests Can Help Avoid Over-Engineering (And Under-Engineering)

Building on the topic of how we tackle non-functional requirements like code quality, I'm reminded of those times when my team has evolved an architecture whose reasons and rationale the developers taking over from us didn't understand.

More than once, I've seen software and systems scrapped and new teams start again from scratch because they felt the existing solution was "over-engineered".

Then, months later, someone on the new team reports back to me that, over time, their design has had to necessarily evolve into something similar to what they scrapped.

In these situations it can be tricky: a lot of software really is over-engineered and a simpler solution would be possible (and desirable in the long term).

But how do we tell? How can we know that the design is the simplest thing that a team could have done?

For that, I think, we need to look at how we'd know that software was functionally over-complicated, and see if we can project any lessons we learn onto non-functional complexity.

A good indicator of whether code is really needed is to remove it and see if any acceptance tests fail. You'd be surprised how many features and branches in code find their way in there without the customer asking for them. This is especially true when teams don't practice test-driven development. Developers make stuff up.

Surely the same goes for the non-functional stuff? If I could simplify the design, and my non-functional tests still pass, then it's probable that the current design is over-engineered. But in order to do that, we'd need a set of explicit non-functional tests. And most teams don't have those. Which is why designs can so easily get over-engineered.
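To illustrate, an explicit non-functional test can be as plain as a response-time check (the budget and names here are made up, and the calculator is a stub standing in for the real system):

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class ResponseTimeTest {

        // Stand-in for the system under test
        interface PriceCalculator {
            double calculateTradePrice(String symbol, int quantity);
        }

        private final PriceCalculator calculator =
                (symbol, quantity) -> quantity * 34.5;

        @Test
        public void tradePriceCalculatedWithinBudget() {
            long start = System.nanoTime();
            calculator.calculateTradePrice("MSFT", 100);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // If a simplified design still passes this, the extra
            // complexity probably wasn't buying us anything
            assertTrue("Took " + elapsedMs + "ms", elapsedMs < 200);
        }
    }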

Just a thought.