March 27, 2015

Component-based & Microservice Architecture: Swappability Happens On The Client Side

Lunch time teleconference looms; just enough time to spew out these thoughts about distributed component architectures (you young hip folk may know them as "microservices", which is the trendy cool Dubstep name for them).

The key to distributed components is swappability - actually, that's kind of the whole point of components generally, distributed or in-process, so take this as generally applicable advice. Or don't. See if I care.

What our bearded young hipster friends often forget to mention is that the swappability in component-based design really happens on the client side of component-to-component (or service-to-service) collaborations. Not, as you may have been led to believe, on the server side.

Sure, we could make pretty much any kind of component present the same, say, REST API. But we still run the risk of binding our client to that API, and to the details of how to consume RESTful services. (Which can be, ironically, anything but restful to work with.)

Nope. To make that service truly swappable, we have to hide all of the details from the client.

UML 2.0, that leviathan of the component-based era, introduced the notion of components and connectors. It's a pretty neat idea: basically, we cleanly separate the logical conversation held between two components from the dirty business of the medium through which that conversation takes place.

To connect the idea with the real world, let's say I ask the prime minister if he's ever eaten three Shredded Wheat, and he replies "Yes". Now, I didn't say how I asked him. Maybe I went to Number 10 and asked him to his face. Maybe I emailed him. Maybe I had the words branded onto a poor person and paraded said pauper through the streets so that the PM would see the question when they reported the news on his tellybox.

What matters is that I asked, and he replied.

In component architectures, we seek to separate the logic of component interactions from the protocols through which they physically connect.

Let's say we have a Video Library web application that wants to display 3rd-party reviews of movies.

We could look for reviews on IMDB, or on Rotten Tomatoes (or on both). We want to ask the logical question: what reviews have been written for this movie (identified by the movie's title and year)?

We can codify the logic of that interaction with interfaces, and package those interfaces in a component the client knows about - in this case, the VideoReviews .NET assembly, which contains the interfaces IReviewService and IReview.
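Sketched in code, that might look something like this (the interface names come from the design above; the members are my guesses at the sort of thing they'd contain):

```csharp
using System.Collections.Generic;

// Illustrative sketch of the abstractions in the VideoReviews assembly.
public interface IReviewService
{
    // The logical question: what reviews have been written for this movie?
    IEnumerable<IReview> GetReviews(string title, int year);
}

public interface IReview
{
    string Source { get; }
    string Reviewer { get; }
    string Text { get; }
}
```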



The VideoTitle class that consumes these services doesn't need to know where the reviews are coming from, or how they're being marshalled. It just wants to ask the logical question. So we present it with a logical interface through which to do that.
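For example, VideoTitle might look roughly like this (a sketch; constructor injection is just one of the options discussed below):

```csharp
using System.Collections.Generic;

public class VideoTitle
{
    private readonly string title;
    private readonly int year;
    private readonly IReviewService reviewService;

    // The review service is handed in from outside; VideoTitle sees only
    // the logical interface, never the remote API behind it.
    public VideoTitle(string title, int year, IReviewService reviewService)
    {
        this.title = title;
        this.year = year;
        this.reviewService = reviewService;
    }

    public IEnumerable<IReview> GetReviews()
    {
        // Asks the logical question; the connector deals with the physical details.
        return reviewService.GetReviews(title, year);
    }
}
```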



By injecting the review service into the VideoTitle, it becomes possible to dynamically bind implementations of those interfaces that know how to connect and interact with the remote server (e.g., the Rotten Tomatoes API), unpack the data that comes back, and translate it into a form that VideoTitle can use easily.

All of that is done behind the scenes: VideoTitle knows nothing about the details. And because it knows nothing about the details, and because we're injecting the service into the VideoTitle from outside - e.g., in its constructor, or as a method parameter when VideoTitle is told to get reviews - it becomes possible to swap in different connectors to get reviews from other services.
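A connector for one provider might look roughly like this (hand-waving the HTTP and JSON plumbing, which is rather the point; the endpoint and parsing here are purely illustrative, not the real Rotten Tomatoes API):

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;

// Lives outside the VideoLibrary component: knows how to talk to one
// particular remote service and translate its answers into IReview objects.
public class RottenTomatoesReviewService : IReviewService
{
    private readonly HttpClient http = new HttpClient();

    public IEnumerable<IReview> GetReviews(string title, int year)
    {
        // Illustrative endpoint only.
        var url = string.Format(
            "https://api.example.com/reviews?title={0}&year={1}",
            Uri.EscapeDataString(title), year);
        var json = http.GetStringAsync(url).Result;
        return Translate(json);
    }

    private IEnumerable<IReview> Translate(string json)
    {
        // Deserialisation elided: unpack the provider's JSON and map it
        // into the client's own IReview terms.
        return new List<IReview>();
    }
}
```

Swapping in an IMDB connector, or a stub for testing, is then just a matter of writing another implementation of IReviewService; VideoTitle never changes.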



All of this can be wired together from above (e.g., we could instantiate an implementation of IReviewService when the application starts up), or with a dependency injection framework, and so on. The possibilities are many and varied for runtime hi-jinks and dynamic larks.
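The simplest version of that wiring is a hand-rolled composition root (the classes here are the illustrative ones sketched above):

```csharp
public static class CompositionRoot
{
    public static VideoTitle CreateVideoTitle(string title, int year)
    {
        // At application start-up: pick a connector and plug it in "from above".
        // A dependency injection container could do the same job declaratively.
        IReviewService reviews = new RottenTomatoesReviewService();
        return new VideoTitle(title, year, reviews);
    }
}
```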

The component dependencies are crucial: our app logic (in the VideoLibrary component) only depends on the abstractions in the VideoReviews library. It depends in no way on the external services. It doesn't even know there are external services involved.

All component dependencies point towards the abstractions, satisfying the Stable Abstractions package design principle.

It now becomes possible to do clever things with swappability, like pooling connectors that point to different service instances to provide basic load-balancing or fail-over, or giving the end user the choice of which service to use at runtime.

Most importantly, though, it gives us the ability to vary the logic of our applications and the details of how they connect to external services independently of each other. If a new movie review site came along, we would simply have to write a new connector for it, and wouldn't have to rely on them implementing the same web API as our existing providers. Because that, my friends, is beyond our control.

So, succeeding with components is about swappability, and swappability is about programming the logic of our applications against clean interfaces that we control.

The REST is details.




March 19, 2015

Requirements 2.0 - Make It Real

This is the second post in a series to float radical ideas for changing the way we handle requirements in software development. The previous post was Ban Feature Requests.

In my previous post, I put forward the idea that we should ban customers from making feature requests so that we don't run the risk of choosing a solution too early. For example, in a user story, we'd get rid of most of the text, just leaving the "So that..." clause to describe why the user wants the software changed.

Another area where there's great risk of pinning our colours to a specific solution is in the collaboration between a customer and a UI/UX designer. The issue here is that things like wireframes and UI mock-ups tend to be the first concrete discussion points we put in front of customers. Up to this point, it's all very handwavy and vague. But seeing a web page with a text box and a list and some buttons on it can make it real enough to have a more meaningful discussion about the problem we're trying to solve.

This would be fine if we didn't get so attached to those designs. But, let's face it, we do. We get very attached to them, and then the goal of development transforms into "what must we do in order to realise that design?", when in reality, we're still exploring the problem space.

So, we need some way to make our ideas concrete, so we can have meaningful discussions about the problem, without presenting the customer with a design for a solution.

Here's what I do, when the team and the customer are willing to play ball:



I make it real by... well... making it real. I call this Tactile Modeling. (No doubt by tomorrow afternoon, some go-getting young hipster will have renamed it "Illustrating Requirements Using Things You Can See and Hold In Your Hand-driven Development". But for now, it's Tactile Modeling.)

Now, I'm old enough to remember when we were all so young and stupid we really thought that visual models in notations like UML would serve this purpose. Yeah, I know. It's like watching old movies of women smoking next to their babies. Boy, were we dumb!

But the idea of being able to concretely explore examples and business scenarios in a practical way can carry real power to break down the communication barriers; far more effectively than our current go-to techniques like agreeing acceptance tests in some airless meeting room with a customer who is pulling domain facts out of thin air half the time.

So, if we're talking about a system for managing a video library, let's create a video library and explore real-world systems for managing it. Let's get some videos. Let's get some shelves to put them on. Let's get some boxes and folders and sticky-tape and elastic bands and build a video library management system out of real actual atoms and stuff, and explore how it works in different scenarios.

And instead of drawing boxes and arrows and wireframes and wizardry up on the whiteboard or in a modelling tool (like PowerPoint, for example), let's whip out our camera phones and take snaps at key steps and take videos to show how a process works and stick them in the Wiki for everyone to see.

And let's not sit in meeting rooms going "blah blah blah must be scalable etc etc", let's have our discussions inside this environment we've created, so we're surrounded by the problem domain, and at any point requiring clarification, the clarifier can jump up and show us what they mean, so that we can all see it (using our eyes).

As our understanding evolves, and we start to create software to be used in some of these scenarios to help the end users in their work, we can deploy that software into this fake video library and gradually swap out the belt-and-braces information systems with slick software, all the while testing to see that we're achieving our goals.

Now, I know what some of you are thinking: "but our problem domain is all abstract concepts like 'currency', 'option' and 'ennui'. " Well, here's the good news. Movies are an abstract concept. Sure, they come in boxes sometimes, or on cassettes. But that's just the physical representation - the medium - through which that concept is expressed. It's the same movie whether we download it as a file, buy it on a disc or get someone to paint it as a mural. That's what separates us from the beasts of the jungle. Well, that and the electrified fence around our compound. But mostly, it's our ability to express abstract concepts like money, employment contract and stock portfolio that we've built our entire civilisation on. Money can be represented by little pieces of paper with numbers written on them. (A radical idea, I know, but worth a try sometime.) And so on.

There is always a way to make it practical: something we can pick up and look at and manipulate and move to model the information in the system, be it information about hospital patients, or about chemical components in self-replicating molecules, or about single adults who are looking for love.

Of course, there's more to it than that. But you get the gist, I'm sure. And we'll look at some of that in the next post, no doubt. In particular, the idea of a model office: a simulated testing and learning environment into which we should be deploying our software to see how it fares in something approaching the real world.

Wanna have a meaningful conversation about requirements? Then make it real.


Requirements 2.0 - Ban Feature Requests

This is the first post in a series that challenges the received wisdom about how we handle requirements in software development.

A lot of the problems in software development start with someone proposing a solution too early.

User stories in Agile Software Development are symptomatic of this: customers request features they want the software to have, qualifying them with a "so that..." clause that justifies the feature with a benefit.

Some pundits recommend turning the story format around, so the benefit comes first, a bit like writing tests by starting with the assertion and working backwards.

I'm going to suggest something more radical: I believe we should ban feature requests altogether.

My format for a user story would only have the "so that..." clause. Any mention of how that would be achieved in the design of the software would be excluded. The development team would figure out the best way to achieve that in the design, and working software would be iterated until the user's goal has been satisfied.

It's increasingly my belief that the whole requirements discipline needs to take a step back from describing solutions and their desired features or properties, to painting a vivid picture of what the user's world will look like with the software in it, with a blank space where the software actually goes.

Imagine trying to define a monster in a horror movie entirely through reaction shots. We see the fear, we hear the screams, but we never actually see the monster. That's what requirements specs, in whatever form they're captured, should be like. All reaction shots and no monster.

Why?

Well, three reasons:

1. All too often, we find ourselves building a solution to a problem that's never been clearly articulated. Iterating designs only really works when we iterate towards clear goals. Taking away the ability to propose solutions (features) early forces customers (and developers) to explicitly start by thinking about the problem they're trying to solve. We need to turn our thinking around.

2. The moment someone says "I want a mobile app that..." or "When I click on the user's avatar..." or even, in some essential way, "When I submit the mortgage application..." they are constraining the solution space unnecessarily to specific technologies, workflows and interaction designs. Keeping the solution space as wide open as possible gives us more choices about how to solve the user's problem, and therefore a greater chance of solving it in the time we have available. On many occasions when my team's been up against it time-wise, banging our heads against a particular technical brick wall, the breakthrough came when we took a step back, asked "What are we actually trying to achieve here?", and chose an easier route to giving the users what they really needed.

3. End users generally aren't software designers. For the exact same reason that it's not such a great idea to specify a custom car for me by asking "What features do you want?" or for my doctor to ask me "What drugs would you like?", it's probably best if we don't let users design the software. It's not their bag, really. They understand the problem. We do the design. We play to our strengths.

So there you have it. Ban feature requests.




March 11, 2015

Distributed Architecture - "Swappability" Is Enabled On The Client Side, Not The Server

It's a common mistake; developers building applications out of multiple distributed components often fall into this trap.

The trick with distributed component-based designs is to recognise that the protocols we use to wire components ("services") together are a detail, not an abstraction.

The goal is swappability, and we achieve this goal on the client's side of a distributed interaction, not on the supplier's side.

So, for example, the JSON interface of a "microservice" isn't the swappable abstraction that makes it possible for us to easily replace that component with a different implementation.

You will note how mature component technologies involve tools that generate clean client-side abstractions that we bind the client logic to. The details of how exactly that interaction takes place (e.g., via HTTP) are hidden behind it. When the details aren't hidden, we risk binding our client logic to a specific way of communicating, when it should be focusing on the meaning of the conversation.



In this example, our Trade object needs an up-to-date stock price. It does not need to know where this stock price comes from. By abstracting the conversation on the client side using a StockPriceService interface, it becomes possible for us to dynamically substitute sources of stock prices - including test sources - without having to recompile, re-test and re-deploy our Trade object's component.
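Something along these lines, say (only the names Trade and StockPriceService come from the example; the members are illustrative):

```csharp
// Client-side abstraction: the logical conversation, with no hint of
// where stock prices physically come from.
public interface StockPriceService
{
    decimal GetPrice(string stockSymbol);
}

public class Trade
{
    private readonly string stockSymbol;
    private readonly int quantity;
    private readonly StockPriceService stockPrices;

    public Trade(string stockSymbol, int quantity, StockPriceService stockPrices)
    {
        this.stockSymbol = stockSymbol;
        this.quantity = quantity;
        this.stockPrices = stockPrices;
    }

    public decimal TotalValue()
    {
        // Trade cares about the answer, not the source or the protocol.
        return stockPrices.GetPrice(stockSymbol) * quantity;
    }
}
```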

It could be that we want to switch to a different supplier of stock information - e.g., switch from Reuters to Bloomberg. Or, indeed, present the user with a choice at runtime. Or we might want to test that the total price is calculated correctly by swapping in a stub StockPriceService implementation. Or write a better implementation of our StockPriceService.
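The stub, for instance, is just another plug-compatible source of prices, which is enough to test the total-price calculation without going anywhere near Reuters or Bloomberg (an NUnit sketch, assuming the illustrative Trade class above):

```csharp
using NUnit.Framework;

// Test double: a canned source of stock prices.
public class StubStockPriceService : StockPriceService
{
    private readonly decimal price;

    public StubStockPriceService(decimal price)
    {
        this.price = price;
    }

    public decimal GetPrice(string stockSymbol)
    {
        return price;
    }
}

[TestFixture]
public class TradeTests
{
    [Test]
    public void TotalValueIsPriceTimesQuantity()
    {
        var trade = new Trade("ACME", 10, new StubStockPriceService(10.50m));

        Assert.AreEqual(105.00m, trade.TotalValue());
    }
}
```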

Having this abstraction available on the client side makes that swappability much easier than binding to the interface presented by the supplier (e.g., a web service).

So remember, folks: real swappability is enabled at the client end, not on the server.





March 5, 2015

Public Workshop - Intensive Advanced Unit Testing, Sat May 9th, London

A quick note about an upcoming public workshop I'm running in London on Saturday May 9th.

Intensive Advanced Unit Testing crams all the most useful bits of Codemanship's 2-day course into a single day, covering skills that can help us get higher assurance on the 10% of our code that needs to be especially reliable, as well as how to identify and target critical code in our systems.

Hands-on exercises will include Design By Contract, advanced parameterised testing, mutation testing to find the gaps in our existing tests, and automated test case discovery using random test data generators, combinatorial input generators and tools that analyse our code with symbolic execution and constraint solvers to take the guesswork/luck out of things.

We'll also, of course, be looking at one of the most powerful and overlooked forms of white-box testing - inspections.

All for the very affordable price of £109 (plus EventBrite fees), about 80% cheaper than many of our competitors - though, for this workshop, we have none anywhere in the world! (It's true: nobody else is running a public course like this.)

If, like me, you believe that TDD is a great foundation for building reliable software, but that sometimes - on code that really matters - we need to go further, then join us on May 9th.



March 1, 2015

Continuous Inspection at NorDevCon

On Friday, I spent a very enjoyable day at the Norfolk developers' conference NorDevCon (do you see what they did there?) It was my second time at the conference, having given the opening keynote last year, and it's great to see it going from strength to strength (attendance up 50% on 2014), and to see Norwich and Norfolk being recognised as an emerging tech hub that's worthy of inward investment.

I was there to run a workshop on Continuous Inspection, and it was a good lark. You can check out the slides, which probably won't make a lot of sense without me there to explain them - but come along to CraftConf in Budapest this April or SwanseaCon 2015 in September and I'll answer your questions.

You can also take a squint at (or have a play with) some code I knocked up in C# to illustrate a custom FxCop code rule (Feature Envy) to see how I implemented the example from the slides in a test-driven way.

I'm new to automating FxCop (and an infrequent visitor to .NET Land), so please forgive any naivety. Hopefully you get the idea. The key things to take away are: you need a model of the code (thanks, Microsoft.Cci.dll), you need a language to express rules against that model (thanks, C#), and you need a way to drive the implementation of rules by writing executable tests that fail (thanks, NUnit). The fun part is turning the rule implementation on its own code - eating your own dog food, so to speak. It throws up all sorts of test cases you didn't think of. It's a work in progress!
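To give a flavour of the test-driven part (this is not the actual Cci-based code from the example - the model and rule classes below are deliberately over-simplified stand-ins), the first failing NUnit test and the rule it drives out might look roughly like:

```csharp
using NUnit.Framework;

// Stand-in for the model of the code: in the real thing these counts would
// be derived from the compiled assemblies via Microsoft.Cci.dll.
public class MethodModel
{
    public int OwnClassFeaturesUsed { get; private set; }
    public int OtherClassFeaturesUsed { get; private set; }

    public MethodModel(int ownClassFeaturesUsed, int otherClassFeaturesUsed)
    {
        OwnClassFeaturesUsed = ownClassFeaturesUsed;
        OtherClassFeaturesUsed = otherClassFeaturesUsed;
    }
}

// The rule, expressed against that model: a method is "envious" when it
// uses features of another class more than features of its own.
public class FeatureEnvyRule
{
    public bool IsViolatedBy(MethodModel method)
    {
        return method.OtherClassFeaturesUsed > method.OwnClassFeaturesUsed;
    }
}

[TestFixture]
public class FeatureEnvyRuleTests
{
    [Test]
    public void MethodUsingAnotherClassMoreThanItsOwnIsEnvious()
    {
        var method = new MethodModel(ownClassFeaturesUsed: 1, otherClassFeaturesUsed: 3);

        Assert.IsTrue(new FeatureEnvyRule().IsViolatedBy(method));
    }
}
```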

I now plan, before CraftConf, to flesh the project out a bit with 2-3 more example custom rules.

Having enjoyed a catch-up with someone who just happens to be managing the group at Microsoft who are working on code analysis tools, I think 2015-2016 is going to see some considerable ramp-up in interest as the tools improve and integration across the dev lifecycle gets tighter. If Continuous Inspection isn't on your radar today, you may want to put it on your radar for tomorrow. It's going to be a thing.

Right now, though, Continuous Inspection is very much a niche pastime. An unscientific straw poll on social media, plus a trawl of a couple of UK job sites, suggests that less than 1% of teams might even be doing automated code analysis at all.

I predicted a few years ago that, as computers get faster and code gets more complex, frequent testing of code quality using automated tools is likely to become more desirable and more do-able. I think we're just on the cusp of that new era today. Today, code quality is an ad hoc concern, relying on hit-and-miss practices like pair programming, where many code quality issues often get overlooked by a pair who have 101 other things to think about, and code reviews, where issues - if they get spotted at all in the to-and-fro - are flagged up long after anybody is likely to do anything about them.

In related news, after much discussion and braincell-wrangling, I've chosen the name for the conference that will be superseding Software Craftsmanship 20xx later this year (because craftsmanship is kind of done now as a meme). Watch this space.






February 17, 2015

Clean Code is a Requirements Discipline

Good morning, and welcome to my World of Rant.

Today's rant is powered by Marmite.

If you're one of those keeerrrraaazy developers who thinks that code should be reliable and maintainable, and have expressed that wrong-headed thought out loud and in public, then you've probably run into much more rational, right-thinking people - often who aren't developers, because who would want to do that for a living? (yuck!) - who counter that it's more important to satisfy users' needs than to write Clean Code, and that a focus on the latter must necessarily be at the expense of the former.

i.e., people who focus on details must be losing sight of the Bigger Picture (TM).

I call bullshit.

First of all, the mentality that cares about making their software reliable and easy to understand and to change tends to just care, generally. I don't know about you, but I don't get much job satisfaction from writing beautiful code that nobody uses.

Secondly of all, it's a false dichotomy, like we have to choose between useful software and Clean Code, and it's not possible to take care of building the right thing and building it right at the same time.

Good software developers are able to dive into the detail and then step back to look at the Bigger Picture (TM) with relative ease. The requirements discipline is part and parcel of software craftsmanship. It's misinformed to suggest that craftsmanship is all about code quality, just as it is to suggest that TDD is all about unit tests and internal design.

I'd go so far as to say that, not only are Clean Code and the Bigger Picture (TM) perfectly compatible, but in actuality they go together like Test-first and Refactoring. Teams really struggle to achieve one without the other.

Maintainable code requires, first and foremost, that the code be easy to read and understand. Literate programming demands that our code clearly "tells the story" of what the software does in response to user input, and in the user's language. That is to say, to write code that makes sense, we must have sense to make of it.

And, more importantly, how clean our code is has a direct impact on how easy it will be for us to change that code based on user feedback.

People tend to forget that iterating is the primary requirements discipline: with the best will in the world, and all the requirements analysis and acceptance testing voodoo we can muster, we're still going to need to take a few passes at it. And the more passes we take, the more useful our software is likely to become.

Software that gets used changes. We never get it right first time. Even throwaway code needs to be iterated. (Which is why the "But this code doesn't need to last, so we don't need to bother making it easy to change" argument holds no water.)

Software development is a learning process, and without Clean Code we severely hinder that learning. What's the point in getting feedback if it's going to be too costly to act on it? I shudder to think how many times I've watched teams get stuck in that mud, bringing businesses to their knees sometimes.

Which is why I count Clean Code as primarily a requirements discipline. It is getting the details right so that we can better steer ourselves towards the Bigger Picture (TM).

To think that there's a dichotomy between the two disciplines is to fundamentally misunderstand both.

Mmmm. Marmite.


February 13, 2015

Intensive TDD, Continuous Inspection Recipes & Crappy Remote Collaboration Tools

A mixed bag for today's post, while I'm at my desk.

First up, after the Intensive TDD workshop on March 14th sold out (with a growing waiting list), I've scheduled a second workshop on Saturday April 11th, with places available at the insanely low price of £30. Get 'em while they're hot!

Secondly, I'm busy working on a practical example for a talk I'm giving at NorDevCon on Feb 27th about Continuous Inspection.

What I'm hoping to do is work through a simple example based on my Dependable Dependencies Principle, where I'll rig up an automated code analysis wotsit to find the most complex, most depended upon and least tested parts of some code to give early warning about where it might be most likely to be broken and might need better testing and simplifying.

To run this metric, you need 3 pieces of information:

* Cyclomatic Complexity of methods
* Afferent couplings per method
* Test coverage per method

Now, test coverage could mean different things. But for a short demonstration, I should probably keep it simple and fairly brute force - e.g., % LOC reached by the tests. Not ideal, but in a short session, I don't want to get dragged into a discussion about coverage metrics. It's also a readily-available measure of coverage, using off-the-shelf tools, so it will save me time in preparing and allow viewers to try it for themselves without too much fuss and bother.
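To make the demonstration concrete, the calculation itself boils down to something like this (a sketch only - the real thing would be expressed as a rule in JArchitect or SonarQube, and the weighting here is arbitrary):

```csharp
using System.Collections.Generic;
using System.Linq;

public class MethodMetrics
{
    public string Name { get; set; }
    public int CyclomaticComplexity { get; set; }   // decision points in the method
    public int AfferentCouplings { get; set; }      // how much other code depends on it
    public double TestCoverage { get; set; }        // fraction of its LOC reached by tests (0..1)
}

public static class DependableDependencies
{
    // Most complex, most depended-upon, least tested methods float to the top:
    // the ones most likely to be broken and most in need of better testing.
    public static IEnumerable<MethodMetrics> RankByRisk(IEnumerable<MethodMetrics> methods)
    {
        return methods.OrderByDescending(m =>
            m.CyclomaticComplexity * m.AfferentCouplings * (1.0 - m.TestCoverage));
    }
}
```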

What's more important is to demonstrate the process going from identifying a non-functional requirement (e.g., "As the Architect, I want early warning about code that presents a higher risk of being unreliable so that I can work with the developers to get better assurance for it"), to implementing an executable quality gate using available tools in a test-driven manner (everybody forgets to agree tests for their metrics!), to managing the development process when the gate is in place. All the stuff that constitutes effective Continuous Inspection.

At time of writing, tool choice is split between a commercial code analysis tool called JArchitect, and SonarQube. It's a doddle to rig up in JArchitect, but the tool costs £££. It's harder to rig up in SonarQube, but the tools are available for free. (Except, of course, nothing's ever really free. Extra time taken to get what you want out of a tool also adds up to £££.) We'll see how it goes.

Finally, after a fairly frustrating remote pairing session on Wednesday where we were ultimately defeated by a combination of Wi-Fi, Skype, TeamViewer and generally bad mojo, it's occurred to me that we really should be looking into remote collaboration more seriously. If you know of more reliable tools for collaboration, please tweet me at @jasongorman.





February 9, 2015

Mock Abuse: How Powerful Mocking Tools Can Make Code Even Harder To Change

Conversation turned today to that perennial question about mock abuse; namely that there are some things mocking frameworks enable us to do that we probably shouldn't ought to.

In particular, as frameworks have become more powerful, they've made it possible for us to substitute the un-substitutable in our tests.

Check out this example:



Because Orders invokes the static database access method getAllOrders(), it's not possible for us to use dependency injection to make it so we can unit test Orders without hitting the database. Boo! Hiss!
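Roughly the kind of design being described (a sketch; only the names Orders, CustomerData and getAllOrders come from the text):

```csharp
using System.Collections.Generic;
using System.Linq;

public class Order
{
    public string CustomerId { get; set; }
    public decimal Value { get; set; }
}

// The "un-substitutable": a static data access class.
public static class CustomerData
{
    public static IList<Order> getAllOrders()
    {
        // Imagine a real database query here.
        throw new System.NotImplementedException("hits the database");
    }
}

public class Orders
{
    public decimal TotalValueFor(string customerId)
    {
        // Hard-wired call to the static method: no seam for injecting a
        // test double, so any unit test of Orders goes through the database.
        return CustomerData.getAllOrders()
            .Where(o => o.CustomerId == customerId)
            .Sum(o => o.Value);
    }
}
```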

Along comes our mocking knight in shining armour, enabling me to stub out that static method to give a test-specific response:



Problem solved. Right?

Well, for now, maybe yes. But the mocking tool hasn't solved the underlying problem: I still couldn't substitute CustomerData.getAllOrders() in the actual design if I wanted to (say, to use a different kind of back-end data store or a web service). So it's solved the "how do I unit test this?" problem, but not in a way that buys me any flexibility or solves the underlying design problem.

If anything, it's made things a bit worse. Now, if I want to refactor Orders to make the database back end swappable, I've got a bunch of test code that also depends on that static method (and arguably in a bigger way - even more code now depends on that internal dependency, if you catch my drift).

I warn very strongly against using tools and techniques like these to get around inherent internal dependency problems, because - when it comes to refactoring (and what's the point in having fast-running unit tests if we can't refactor?) - all that extra test code can actually bake in the design problems.

Multiply this one toy example by 1,000 to get the real scale at which I sometimes see this in real code bases. This approach can make rigid and brittle designs even more rigid and more brittle. In the long term, it's better to make the code unit-testable by fixing the dependency problem, even if this means living with slow-running (or even - gasp! - manual) tests for a while.
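For what it's worth, fixing the dependency problem in the toy example might look something like this (one way among several; the interface name is mine, and the Order class is the one from the sketch above):

```csharp
using System.Collections.Generic;
using System.Linq;

// The seam we actually wanted: Orders depends on an abstraction it owns,
// so the database-backed implementation becomes genuinely swappable -
// for a stub in unit tests, or for a web service later on.
public interface IOrderData
{
    IList<Order> GetAllOrders();
}

public class Orders
{
    private readonly IOrderData orderData;

    public Orders(IOrderData orderData)
    {
        this.orderData = orderData;
    }

    public decimal TotalValueFor(string customerId)
    {
        return orderData.GetAllOrders()
            .Where(o => o.CustomerId == customerId)
            .Sum(o => o.Value);
    }
}
```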


February 4, 2015

Why Distribution & Concurrency Can Be A Lethal Cocktail For The Unwitting Dev Team

Picture the scene: it's Dec 31st 1990, a small town in upstate New York. I'm at a New Year's Eve party, young, stupid and eager to impress. The host mixes me my first ever Long Island Iced Tea. It tastes nice. I drink three large ones, sitting at their kitchen table, waxing eloquent about life, the universe and everything in my adorable English accent, and feeling absolutely fine. Better than fine.

And then, after about an hour, I get up to go to the bathroom. I'm not fine. Not fine at all. I appear to have lost the use of my legs, and developed an inner-ear problem that's affecting my normally balletically graceful poise and balance.

I proceed to be not-fine-at-all into the bathroom sink and several other receptacles, arguably none of which were designed for the purpose I'm now putting them to.

Long Island Iced Tea is a pretty lethal cocktail. Mixed properly, it tastes like a mildly alcoholic punch with a taste not dissimilar to real iced tea (hence the name), but one look at the ingredients puts paid to that misunderstanding: rum, gin, vodka, tequila, triple sec - ingredients that have no business being in the same glass together. It is a very alcoholic drink. Variants on the name, like "Three Mile Island" and "Adios Motherf***er", provide further clues that this is not something you serve at a child's birthday party.

I end the evening comatose on a water bed in a very hot room. This completes the effect, and Jan 1st 1991 is a day I have no memory of.

Vowing never to be suckered into a false sense of security by something that tastes nice and makes me feel better-than-fine for a small while, I should have known better than to get drawn like a lamb to the slaughter into the distributed components craze that swept software development in the late 1990's.

It went something like this:

Back in the late 1990's, aside from the let's-make-everything-a-web-site gold rush that was reaching a peak, there was also the let's-carve-up-applications-that-we-can't-even-get-working-properly-when-everything's-in-one-memory-address-space-and-there's-no-concurrency-and-distribute-the-bits-willy-nilly-adding-network-deficiencies-distributed-transactions-and-message-queues fad.

This was enabled by friendly technology that allowed us to componentise our software without the need to understand how all the underlying plumbing worked. Nice in theory. You carve it up, apply the right interfaces, deploy to your application server and everything's taken care of.

Except that it wasn't. It's very easy to get up and running with these technologies, but we found ourselves continually having to dig down into the underlying detail to figure out why stuff wasn't working the way it was supposed to. "It just works" was a myth easily dispelled by looking at how many books on how this invisible glue worked were lying open on people's desktops.

To me, with the benefit of hindsight, object request brokers, remote procedure calls, message queues, application servers, distributed transactions, web services... these are the hard liquor of software development. The exponential increase in complexity - the software equivalent of alcohol units - can easily put unwitting development teams under the table.

I've watched so many teams merrily downing pints of lethal-but-nice-tasting cocktails of distribution and concurrency, feeling absolutely fine - better than fine - and then when it's time for the software to get up and walk any kind of distance... oh dear.

It turns out, this stuff is hard to get right, and the tools don't help much in that respect. They make it easy to mix these cocktails and easy to drink as much as you think you want, but they don't hold your hand when you need to go to the bathroom.

These tools are not your friends. They are the host mixing super-strength Long Island Iced Teas and ruining your New Year with a hangover that will never go away.

Know what's in your drink, and drink in moderation.