August 17, 2017
Your House, Your (Code Quality) Rules
Picking up where I left off on the custom FxCop rules for the Codemanship Code Craft "Driving Test" has reminded me of something that's vitally important.
This morning I wrote a class that enumerates a type's collaborators. The code currently looks like this:
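Or rather, something along these lines. (What follows is a minimal sketch of the idea rather than the actual rule code: it uses plain System.Reflection for brevity, where the real rule works against the FxCop/Microsoft.Cci code model, and names like CollaboratorEnumerator are hypothetical.)

```csharp
// Minimal sketch only - the real rule works against the FxCop/Microsoft.Cci code
// model, which also exposes local variables and bound members; this reflection
// version covers fields and parameters to illustrate the judgement calls below.
using System;
using System.Collections.Generic;
using System.Reflection;

public class CollaboratorEnumerator
{
    private const BindingFlags All =
        BindingFlags.Instance | BindingFlags.Static |
        BindingFlags.Public | BindingFlags.NonPublic;

    private readonly Type type;

    public CollaboratorEnumerator(Type type)
    {
        this.type = type;
    }

    public ISet<Type> Collaborators()
    {
        var collaborators = new HashSet<Type>();

        foreach (var field in type.GetFields(All))
            Include(collaborators, field.FieldType);

        foreach (var method in type.GetMethods(All))
            foreach (var parameter in method.GetParameters())
                Include(collaborators, parameter.ParameterType);

        // Local variables and the declaring types of bound members need
        // IL-level inspection, which is what the FxCop code model is for.

        return collaborators;
    }

    private void Include(ISet<Type> collaborators, Type candidate)
    {
        if (candidate == type) return;                    // a type isn't its own collaborator
        if (candidate.IsAssignableFrom(type)) return;     // base classes (and implemented interfaces) don't count
        if (candidate.Assembly != type.Assembly) return;  // non-project types don't count
        collaborators.Add(candidate);
    }
}
```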
Codified in this class is an understanding of what I mean by a "collaborator", for the purposes of the driving test.
First of all, I'm not including non-project types. This is a judgement call to keep the design rules realistic. My rule will limit the number of collaborating types to 3, and if that included core library types etc., it would be really tough to satisfy.
I'm including fields, parameters and local variables. I'm also including the declaring types of any bound members. So if I call a method that returns an object and then I call a method on that, it'll be included.
I'm also not counting base classes as collaborators. Again, it's a judgement call.
I'm working alone, so I get to make the rules. But in a team setting that absolutely should not happen. Don't send the "tools dev" away to work in isolation on quality gates for Continuous Inspection. Because what will happen is, when they return and unleash their rules on the rest of the team, there'll be tears before bedtime.
The whole team needs to be involved. This is a great candidate for mob programming, in my experience. While you're waiting for business requirements in the early stages of a project/product, here's what the team could be doing to get the delivery engine up and running.
It will require the team to have discussions about code quality with a level of precision they've probably never had before. I think this is a good thing.
August 6, 2017
What *Exactly* Is "Feature Envy"?
I'm currently writing some custom FxCop rules for the trial Codemanship Code Craft "driving test" on Sept 16th. The aim is that not only will I be able to automatically check candidates' code, but candidates will be able to check their own code while they're writing it, too. The power of Continuous Inspection!
One of the rules is that methods of one class must not display Feature Envy for another class. Typically, Feature Envy's defined as:
A method accesses the features of another class more than its own.
And this might seem trivial to check for using a tool like FxCop. Look at all the member bindings inside a method. If there are more bindings to members of other types than to members of the type on which this method's declared, then we've got Feature Envy. To fix it, we can just move the method to the focus of its envy.
But I'm not sure it's quite that simple. This example might be an open-and-shut case:
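For illustration, imagine something like this (a made-up example with hypothetical names, not the one from the original post):

```csharp
public class Employee
{
    public decimal GrossSalary() { return 2500m; }
    public decimal TaxDeduction() { return 400m; }
    public decimal PensionContribution() { return 125m; }
}

public class PayrollRun
{
    // Every feature call in this method is to Employee - the method clearly
    // wants to live on the Employee class.
    public decimal NetSalary(Employee employee)
    {
        return employee.GrossSalary()
               - employee.TaxDeduction()
               - employee.PensionContribution();
    }
}
```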
But how about this?
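Again, a hypothetical sketch - assume RecordPayment, NotifyPayroll, UpdateLedger and AuditTrail are private methods of the same class, and that Employee also has a Credit method:

```csharp
public void Pay(Employee employee)
{
    var amount = employee.GrossSalary() - employee.TaxDeduction(); // Employee's features
    employee.Credit(amount);                                       // Employee's features
    RecordPayment(amount);
    NotifyPayroll(amount);
    UpdateLedger(amount);
    AuditTrail(amount);
}
```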
The majority of feature calls in this method are to methods of the same class. But that code smell we saw in the first example is still here, on lines 3 and 4. Proof? What if we extract those 2 lines into their own method?
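Continuing the sketch, the extraction would give us something like:

```csharp
public void Pay(Employee employee)
{
    var amount = obviousFeatureEnvy(employee);
    RecordPayment(amount);
    NotifyPayroll(amount);
    UpdateLedger(amount);
    AuditTrail(amount);
}

private decimal obviousFeatureEnvy(Employee employee)
{
    // every feature call in here is to Employee
    var amount = employee.GrossSalary() - employee.TaxDeduction();
    employee.Credit(amount);
    return amount;
}
```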
The method obviousFeatureEnvy now completely satisfies our definition of Feature Envy and should be moved to the other class.
I think this leads me to a better definition of Feature Envy:
Feature Envy is when any unit of executable code - a method, a block, a statement or an expression - uses features of another class more than features of its own class
Basically, if you can extract any portion of a method into a new method that displays the original, "classic" definition of Feature Envy, then the original method has Feature Envy.
But wait; there's more. Take a look at this example:
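Another hypothetical sketch - assume Heading and Line are private formatting methods of the same class, and that Employee also has a StudentLoanDeduction method:

```csharp
public string Payslip(Employee employee)
{
    return Heading() + Lines(employee);   // own features only
}

private string Lines(Employee employee)
{
    // one call to Employee's features, three to its own - not Feature Envy by our definition
    return Line("Gross", employee.GrossSalary()) + Line("Net", Net(employee));
}

private decimal Net(Employee employee)
{
    // the only method here that satisfies the definition: every feature call is to Employee
    return employee.GrossSalary()
           - employee.TaxDeduction()
           - employee.PensionContribution()
           - employee.StudentLoanDeduction();
}
```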
Technically, only one of these methods satisfies our definition of Feature Envy, but if we were to inline the call stack, we'd end up with one method with very obvious Feature Envy.
It's much more complex than I thought. But, for the driving test, I'll probably keep it simple and stick with the classic - and much easier - definition of Feature Envy.
But one day, when I've got time...
July 22, 2017
Code Analysis for Dependency Inversion
As work continues on the next book and training course, I'm thinking about how we could analyse our code for adherence to the Dependency Inversion Principle (the "D" in S.O.L.I.D.)
The DIP states that "High-level modules should not depend upon low-level modules. Both should depend upon abstractions. Abstractions should not depend upon details, details should depend upon abstractions."
This is a roundabout way of saying that dependencies should be swappable. The means by which we make them swappable is dependency injection (which is often confused with Dependency Inversion - the two are very closely related).
Dependency injection is simply passing an object's collaborators in (e.g., through a constructor) instead of having the object instantiate them itself. When we directly instantiate an object, we bind ourselves to its exact type. That makes it impossible to swap that collaborator for a different implementation without modifying the client code, making our design inflexible and difficult to adapt or extend.
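For example (a trivial sketch with made-up names):

```csharp
// Direct instantiation: the processor is welded to the concrete SmtpNotifier.
public class HardwiredOrderProcessor
{
    private readonly SmtpNotifier notifier = new SmtpNotifier();

    public void Process(string orderId) { notifier.Send("Processed " + orderId); }
}

// Dependency injection: the collaborator is passed in through the constructor,
// against an abstraction, so it can be swapped without touching this class.
public class OrderProcessor
{
    private readonly INotifier notifier;

    public OrderProcessor(INotifier notifier) { this.notifier = notifier; }

    public void Process(string orderId) { notifier.Send("Processed " + orderId); }
}

public interface INotifier { void Send(string message); }

public class SmtpNotifier : INotifier
{
    public void Send(string message) { /* send an email */ }
}
```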
In practice, what this means is that most of our objects are composed from the outside.
For example, in my Reading Ease calculator, the Program class - the entry point for this console app - creates all of the objects involved in doing the calculation and "plugs" them together via constructors.
I've used the analogy of Russian dolls to describe how we compose simpler collaborations into more complex collaborations (collaborations within collaborations). This means that the lowest-level objects in the call stack typically get created first.
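The shape of it looks something like this (hypothetical stand-ins, not the actual classes from the Reading Ease code, with placeholder calculations just to keep the sketch runnable):

```csharp
using System;

public interface ISyllableCounter { int CountIn(string text); }

public class SyllableCounter : ISyllableCounter
{
    public int CountIn(string text)
    {
        // crude vowel count as a stand-in for real syllable counting
        var count = 0;
        foreach (var c in text.ToLowerInvariant())
            if ("aeiouy".IndexOf(c) >= 0) count++;
        return count;
    }
}

public class ReadingEaseCalculator
{
    private readonly ISyllableCounter syllables;

    public ReadingEaseCalculator(ISyllableCounter syllables) { this.syllables = syllables; }

    public double Score(string text)
    {
        // placeholder calculation, not the real Flesch Reading Ease formula
        return 206.835 - 84.6 * syllables.CountIn(text);
    }
}

public class ConsoleReport
{
    private readonly ReadingEaseCalculator calculator;

    public ConsoleReport(ReadingEaseCalculator calculator) { this.calculator = calculator; }

    public void Run(string text) { Console.WriteLine(calculator.Score(text)); }
}

public static class Program
{
    public static void Main(string[] args)
    {
        // innermost doll first, then wrap it in progressively higher-level objects
        var counter = new SyllableCounter();
        var calculator = new ReadingEaseCalculator(counter);
        var report = new ConsoleReport(calculator);
        report.Run(string.Join(" ", args));
    }
}
```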
Inside those lower-level classes, there's no direct instantiation of collaborators.
So, when we analyse the dependencies, we should find that classes that have clients in our code - classes that are further down the call stack - don't directly instantiate their collaborators.
More simply, if things depend on you, then don't use new.
There are, of course, exceptions. Factories and Builders are designed to instantiate and hide the details. Integration code - e.g., opening database connections - is also designed to hide details. We can't very well pass our database connections into those, or we'd be spreading that knowledge. Typically what we're talking about here is dependencies on our own classes. And what a kerfuffle it would be to try to apply DIP to strings and ints and collections and other core library types all the time. Though, again, there are situations where that may be called for.
If I were measuring adherence to the Dependency Inversion Principle, then, I'd look at a class and ask "Do any of my other classes depend on this?" If the answer is "yes", I'd check whether it creates instances of any of my other classes. I might also check - and this would be language-dependent - whether those dependencies are on abstract types (abstract classes, interfaces).
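Here's a rough sketch of that kind of check, using plain reflection and made-up names. It flags classes that other project classes depend on but whose constructor dependencies on project types aren't abstractions; spotting direct instantiation inside method bodies would need IL-level analysis, which I've left out.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public static class DependencyInversionCheck
{
    public static IEnumerable<string> Violations(Assembly assembly)
    {
        var classes = assembly.GetTypes().Where(t => t.IsClass).ToList();

        foreach (var candidate in classes)
        {
            // only classes that have clients among our own classes are of interest
            if (!HasClients(candidate, classes)) continue;

            foreach (var constructor in candidate.GetConstructors())
                foreach (var parameter in constructor.GetParameters())
                {
                    var dependency = parameter.ParameterType;
                    var isProjectType = dependency.Assembly == assembly;
                    if (isProjectType && !dependency.IsInterface && !dependency.IsAbstract)
                        yield return candidate.Name + " depends on concrete " + dependency.Name;
                }
        }
    }

    private static bool HasClients(Type candidate, IEnumerable<Type> classes)
    {
        return classes.Any(client => client != candidate && DependsOn(client, candidate));
    }

    private static bool DependsOn(Type client, Type candidate)
    {
        var flags = BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic;
        return client.GetFields(flags).Any(f => f.FieldType == candidate)
            || client.GetConstructors(flags).Any(c => c.GetParameters().Any(p => p.ParameterType == candidate));
    }
}
```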
November 8, 2016
Business Benefits of Continuous Delivery: We Need Hard Data
Something that's been bugging me for a while is our apparent lack of attention to the proclaimed business benefits of Continuous Delivery.
I'm not going to argue for one second that CD doesn't have business benefits; I'm a firm believer in the practice myself. But that's just it... I'm a believer in the business benefits of Continuous Delivery. And it's a belief based on personal and anecdotal experience, not on a good, solid body of hard evidence.
I had naturally assumed that such evidence existed, given that the primary motivation for CD, mentioned over and over again in the literature, is the reduced lead times on delivering feature and change requests. It is, after all, the main point of CD.
But where is the data that supports reduced lead times? I've looked, but not found it. I've found surveys about adopting CD. I've found proposed metrics, but no data. I've found largely qualitative studies of one or two organisations. But no smoking gun, as yet.
There's a mountain of data that backs up the benefits of defect prevention, but the case for CD currently rests on little more than smoke.
This, I reckon, we need to fix. It's a pillar on which so much of software craftsmanship and Agile rests: delivering working software sooner (and for longer).
Anything that supports the case for Continuous Delivery indirectly supports the case for Continuous Integration, TDD, refactoring, automation, and a bunch of other stuff we believe is good for business. And as such, I think we need that pillar to be unassailably strong.
We need good data - not from surveys and opinion polls - on lead times that we can chart against CD practices so we can build a picture of what real, customer-visible impact these practices have.
To be genuinely useful and compelling, it would need to come from hundreds of places and cover the full spectrum of Continuous Delivery from infrequent manual builds with infrequent testing and no automation, to completely automated Continuous Deployment several times a day with high confidence.
One thing that would be of particular interest to Agile mindsets is how lead times change over time. As the software grows, do lead times get longer? What difference does, say, automated developer testing make to the shape of the curve?
Going beyond that, can we understand what impact shorter lead times can have on a business? Shorter lead times, in and of themselves, have no value. The value is in what they enable a business to do - specifically, to learn faster. But what, in real terms, are the business benefits of learning faster? How would we detect them? Are businesses that do CD outperforming competitors who don't in some way? Are they better at achieving their goals?
Much to ponder on.
April 25, 2015
Non-Functional Tests Can Help Avoid Over-Engineering (And Under-Engineering)
Building on the topic of how we tackle non-functional requirements like code quality, I'm reminded of those times where my team has evolved an architecture that developers taking over from us didn't understand the reasons or rationale for.
More than once, I've seen software and systems scrapped and new teams start again from scratch because they felt the existing solution was "over-engineered".
Then, months later, someone on the new team reports back to me that, over time, their design has had to necessarily evolve into something similar to what they scrapped.
In these situations it can be tricky: a lot of software really is over-engineered and a simpler solution would be possible (and desirable in the long term).
But how do we tell? How can we know that the design is the simplest thing that a team could have done?
For that, I think, we need to look at how we'd know that software was functionally over-complicated, and see if we can project any lessons we learn onto non-functional complexity.
A good indicator of whether code is really needed is to remove it and see if any acceptance tests fail. You'd be surprised how many features and branches in code find their way in there without the customer asking for them. This is especially true when teams don't practice test-driven development. Developers make stuff up.
Surely the same goes for the non-functional stuff? If I could simplify the design, and my non-functional tests still pass, then it's probable that the current design is over-engineered. But in order to do that, we'd need a set of explicit non-functional tests. And most teams don't have those. Which is why designs can so easily get over-engineered.
Just a thought.
Continuous Inspection Screencast
It's been quite a while since I did a screencast. Here's a new one about Continuous Inspection, which is a thing. (Oh yes.)
March 1, 2015
Continuous Inspection at NorDevCon
On Friday, I spent a very enjoyable day at the Norfolk developers' conference NorDevCon (do you see what they did there?) It was my second time at the conference, having given the opening keynote last year, and it's great to see it going from strength to strength (attendance up 50% on 2014), and to see Norwich and Norfolk being recognised as an emerging tech hub that's worthy of inward investment.
I was there to run a workshop on Continuous Inspection, and it was a good lark. You can check out the slides, which probably won't make a lot of sense without me there to explain them - but come along to CraftConf in Budapest this April or SwanseaCon 2015 in September and I'll answer your questions.
You can also take a squint at (or have a play with) some code I knocked up in C# to illustrate a custom FxCop code rule (Feature Envy) to see how I implemented the example from the slides in a test-driven way.
I'm new to automating FxCop (and an infrequent visitor to .NET Land), so please forgive any naivety. Hopefully you get the idea. The key things to take away are: you need a model of the code (thanks Microsoft.Cci.dll), you need a language to express rules against that model (thanks C#), and you need a way to drive the implementation of rules by writing executable tests that fail (thanks NUnit). The fun part is turning the rule implementation on its own code - eating your own dog food, so to speak. It throws up all sorts of test cases you didn't think of. It's a work in progress!
I now plan, before CraftConf, to flesh the project out a bit with 2-3 more example custom rules.
Having enjoyed a catch-up with someone who just happens to be managing the group at Microsoft who are working on code analysis tools, I think 2015-2016 is going to see some considerable ramp-up in interest as the tools improve and integration across the dev lifecycle gets tighter. If Continuous Inspection isn't on your radar today, you may want to put it on your radar for tomorrow. It's going to be a thing.
Right now, though, Continuous Inspection is very much a niche pastime. An unscientific straw poll on social media, plus a trawl of a couple of UK job sites, suggests that less than 1% of teams might even be doing automated code analysis at all.
I predicted a few years ago that, as computers get faster and code gets more complex, frequent testing of code quality using automated tools is likely to become more desirable and more do-able. I think we're just on the cusp of that new era today. Today, code quality is an ad hoc concern relying on hit-and-miss practices like pair programming, where many code quality issues often get overlooked by a pair who have 101 other things to think about, and code reviews, where issues - if they get spotted at all in the to-and-fro - are flagged up long after anybody is likely to do anything about them.
In related news, after much discussion and braincell-wrangling, I've chosen the name for the conference that will be superseding Software Craftsmanship 20xx later this year (because craftsmanship is kind of done now as a meme). Watch this space.
February 13, 2015
Intensive TDD, Continuous Inspection Recipes & Crappy Remote Collaboration Tools
A mixed bag for today's post, while I'm at my desk.
First up, after the Intensive TDD workshop on March 14th sold out (with a growing waiting list), I've scheduled a second workshop on Saturday April 11th, with places available at the insanely low price of £30. Get 'em while they're hot!
Secondly, I'm busy working on a practical example for a talk I'm giving at NorDevCon on Feb 27th about Continuous Inspection.
What I'm hoping to do is work through a simple example based on my Dependable Dependencies Principle, where I'll rig up an automated code analysis wotsit to find the most complex, most depended upon and least tested parts of some code to give early warning about where it might be most likely to be broken and might need better testing and simplifying.
To run this metric, you need 3 pieces of information:
* Cyclomatic Complexity of methods
* Afferent couplings per method
* Test coverage per method
Now, test coverage could mean different things. But for a short demonstration, I should probably keep it simple and fairly brute force - e.g., % LOC reached by the tests. Not ideal, but in a short session, I don't want to get dragged into a discussion about coverage metrics. It's also a readily-available measure of coverage, using off-the-shelf tools, so it will save me time in preparing and allow viewers to try it for themselves without too much fuss and bother.
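To make the "executable" part concrete, the ranking step itself might look something like this - a sketch only, assuming the three numbers have already been collected per method from whichever analysis and coverage tools you're using (the weighting is arbitrary; agreeing it is part of the exercise):

```csharp
using System.Collections.Generic;
using System.Linq;

public class MethodMetrics
{
    public string Name;
    public int CyclomaticComplexity;
    public int AfferentCouplings;   // how many other methods depend on this one
    public double Coverage;         // proportion of lines reached by tests, 0.0 - 1.0
}

public static class DependableDependencies
{
    // Rank methods by (complexity x afferent couplings x lack of coverage):
    // complex, heavily depended-upon, poorly tested code floats to the top.
    public static IEnumerable<MethodMetrics> MostAtRisk(IEnumerable<MethodMetrics> methods, int top)
    {
        return methods
            .OrderByDescending(m => m.CyclomaticComplexity * m.AfferentCouplings * (1.0 - m.Coverage))
            .Take(top);
    }
}
```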
What's more important is to demonstrate the process going from identifying a non-functional requirement (e.g., "As the Architect, I want early warning about code that presents a higher risk of being unreliable so that I can work with the developers to get better assurance for it"), to implementing an executable quality gate using available tools in a test-driven manner (everybody forgets to agree tests for their metrics!), to managing the development process when the gate is in place. All the stuff that constitutes effective Continuous Inspection.
At time of writing, tool choice is split between a commercial code analysis tool called JArchitect, and SonarQube. It's a doddle to rig up in JArchitect, but the tool costs £££. It's harder to rig up in SonarQube, but the tools are available for free. (Except, of course, nothing's ever really free. Extra time taken to get what you want out of a tool also adds up to £££.) We'll see how it goes.
Finally, after a fairly frustrating remote pairing session on Wednesday where we were ultimately defeated by a combination of Wi-Fi, Skype, TeamViewer and generally bad mojo, it's occurred to me that we really should be looking into remote collaboration more seriously. If you know of more reliable tools for collaboration, please tweet me at @jasongorman.
December 17, 2014
Implicit Contracts - A Simple Technique For Code Inspections
Just a quick bedtime note about a simple code inspection technique I use when the milk and cookies have been keeping me awake.
To cut a long story short, all executable code - from a function all the way down to a single expression - has a contract.
Take this little example:
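Something along these lines (a reconstruction for illustration - the exact separator and formatting are guesses based on the discussion that follows):

```csharp
public static string join(int[] items)
{
    var joinedItems = items[0].ToString();
    for (var i = 1; i < items.Length; i++)
    {
        joinedItems = joinedItems + ", " + items[i];
    }
    return joinedItems;
}
```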
If I wanted to do a bit of what we call "whitebox testing", and inspect this code for potential bugs, I could break it down into all the little executable chunks that make up the whole. (Not sure if a chunk of code is executable? Hint: could you extract it into its own method?)
For example, the first line:
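```csharp
var joinedItems = items[0].ToString();   // the first line of join(), from the sketch above
```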
If we think about it, this line of code has an input, and it has an output, just like a function. The input is the array called items, as it's declared earlier as a method parameter and only referenced here. The output is the string joinedItems, since it's declared and assigned to here, and referenced in code further down.
It also has pre- and post-conditions, just like a function. The array items must have at least one element for the code to work, otherwise we'll get an exception thrown when we try to reference the first element. And, provided there is at least one element in items, the post-condition is that joinedItems will be assigned the value of that first integer element as a string.
We can see already, by considering the contract implicit in this first line of code, that there may be a problem. What if items is empty? We must determine if that could ever happen during the correct functioning of the program as a whole. Could join() ever be called with an empty array as a result of valid user input or other software events?
Let's imagine it could happen, and we need to handle that scenario meaningfully. Perhaps we decide that our method should return an empty string when the array is empty. In which case, our code needs to change.
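For instance, continuing the sketch:

```csharp
public static string join(int[] items)
{
    if (items.Length == 0) return "";   // handle the empty-array scenario explicitly

    var joinedItems = items[0].ToString();
    for (var i = 1; i < items.Length; i++)
    {
        joinedItems = joinedItems + ", " + items[i];
    }
    return joinedItems;
}
```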
That's one potential bug. Using this technique, can you spot any more?
November 25, 2014
Continuous Inspection II - Planning & Executing CInsp
In this second blog post about Continuous Inspection (CInsp, for short), I want to look at how we might manage the CInsp process to get the most value from it.
While some development teams are now using CInsp tools to analyse their code to get early warnings about code quality problems when they're easier and cheaper to fix, it's fair to say that this area of the development discipline has to date evaded the principles that we apply to other kinds of requirements.
Typically, as a kind of work, CInsp is ad hoc, unplanned, untracked and most teams who do it have only a very vague idea of what kind of cost it has and what kind of benefits they're reaping from it.
CInsp is rarely prioritised, leaving the field wide open to waste a lot of time and effort on activities that add little or no value.
Non-functional requirements obey the same laws as functional ones, which is why we need to attack them using the same principles and techniques.
In this post, I want to examine how we plan and execute CInsp on projects starting from scratch. (In a future post, I'll talk about applying CInsp to existing code bases with a build-up of code quality issues.)
Continuous Inspection Requirements.
There are an infinite number of properties we could look for in our code, but only some are worth finding, and most aren't. Rather than waste our time arbitrarily searching our code for "stuff", it's important we have a clear idea of what it is we're looking for and why.
Extreme Programming, for example, has a perfectly usable mechanism for describing the things we want to inspect for, and the benefits of catching those kinds of code quality problems early.
A Code Quality Story is a non-functional user story that briefly summarises a code quality "bug" we wish to avoid and the pay-off we might expect if we can avoid introducing it into our code.
Note first of all that I've chosen here to use a blue index card. This might be in a system where we write functional user stories on green cards, report bugs on red cards, and record other outcomes - "miscellaneous tasks", like setting up the build and implementing code quality gates - on blue cards.
Why do this? Well, I've found it very useful to know roughly how much of a team's time is split between delivering working features, fixing bugs (ideally, zero time), and "shaving yaks" when the yaks being shaved are sufficiently large and not part of the work of delivering specific features.
The importance of the effort split becomes apparent as time goes by and the software evolves. A healthy project is one where the proportion of effort devoted to delivering working features remains relatively constant. What typically happens on teams who set out at an unsustainable pace is that they begin development with their time devoted mostly to the green cards, and after a few months most of their time is spent tackling red cards and making a lot less progress on new features. This is a good indicator of the rising cost of change we're seeking to avoid, so we can sustain the pace of development and deliver value for longer. This information will help us better judge how well-spent the time devoted to things like CInsp is.
So we have a placeholder for our code quality requirement in the form of a blue index card. What next?
Planning Continuous Inspection
This is where I, and a lot of teams, have gone wrong in the past. What we should never, ever do is allow the customer to choose when and whether we tackle non-functional requirements. And in "customer" I include proxy customers like business analysts and project managers. The overwhelmingly common experience of development teams is that purely technical issues, like code quality, get sidelined by non-technical stakeholders.
We must not give them the chance to drop our Feature Envy story in favour of a story about, say, sorting columns in an HTML table if we strongly believe, as professionals, that avoiding Feature Envy is important. If, as the evidence suggests, care taken over code quality helps to maintain productivity and deliver greater value over time, then asking customers to choose between work that enhances quality and work that directly delivers working features presents them with a confusing false dichotomy.
The analogy I use is to pretend we're running a restaurant using the planning practices of Extreme Programming.
Every job that needs doing gets written on a card, and placed into a backlog of outstanding work. There will be user stories like "Take table 3's order" and "Serve french fries and beer to table 7" and "Get the bill for table 12". These are stories about work that will make the restaurant money.
There will also be stories like "Wash the dishes in the sink" and "Clean out pizza oven" and "Repaint sign over door". These are about tasks that cost money, but don't directly bring in revenue by themselves.
If we allowed our restaurant's shareholders - who themselves have never worked in a restaurant, but they have a stake in it as a business - to prioritise what stories get done at the expense of others in a world where backlogs always outweigh the available time and resources, then there's a very real danger that the kitchen will rarely get cleaned, the sign above the door will fade until nobody can see it, and we'll run out of clean plates halfway through service.
The temptation for teams who are driven solely by the priorities of non-technical stakeholders is that non-functional issues like code quality will only get tackled when a crisis emerges that blocks progress on functional requirements. i.e., we don't wash up until we run out of plates, or we don't clean the kitchen until the inspector shuts us down, or we don't repaint the sign until the customers have stopped coming in.
One thing we've learned about writing software is that it's cheaper and easier to tackle problems proactively and catch them earlier. Sadly, too many teams are left lurching from one urgent crisis to the next, never getting the chance to get ahead of the issues.
For this reason, I strongly advise against involving non-technical stakeholders in planning CInsp. (As well as other technical work.)
Now put yourself in the diner's shoes: you pick up the menu, and every dish lists all of the tasks restaurant staff have to do in order to deliver it. Let's say we charge £11 for fish and chips, with a clean grill, mopping the floors, cashing up that evening, doing the accounts, getting up early to take delivery of fresh fish, and so on.
1. If we hadn't told them, would the diner even care?
2. If we make it the diner's business, are we inviting them to negotiate the price of the fish and chips down by itemising what goes in to running the restaurant? ("I'll have the fish & chips, but I'm not paying for your trainee chef's college course" etc)
The world is full of work that needs doing, but nobody thinks they should pay for. In order for the world to keep turning, for fish & chips to appear on our dining tables, this work has to get done one way or another, and it has to be paid for.
The way a restaurant squares this circle is to build it into the cost of the meal and to not present diners with a choice. Their choice is simple: don't like the price, don't order the dish.
Likewise in software development, there's a universe of tasks that need doing that do not directly end with a working feature being delivered to the customer's table. We must build this work into the price ("feature X will take 3 days to deliver") and avoid presenting the customer with bewildering choices that, in reality, aren't choices at all.
So planning Continuous Inspection is something that happens within the team among technical stakeholders who understand the issues and will be doing the work. This is good advice for any non-functional requirements, be they about build automation, internal training or hiring developers. This is just "stuff that has to happen" so we can deliver working software reliably, economically and sustainably.
The key thing, to avoid teams disappearing up their own backsides with the technical stuff, is to make sure we're all absolutely clear about why we're doing it. Why are we automating the build? Why are we writing a tool that generates code? Why are we sending half the team to the Software Craftsmanship conference? (Some companies send entire teams.) And the answer should always be something of value to the customer, even if that value might not be realised for months or years.
In practice, we have planning meetings - especially in the early stages of a project - that are for technical stakeholders only. Lock the doors. Close the blinds. Don't tell the boss. (I have literally experienced running around offices looking for rooms where the developers can have these discussions in private, chased by the project manager who insists on sitting in. "Don't mind me. I won't interfere." Two seconds later...)
Such meetings give teams a chance to explicitly discuss code quality and to thrash out what they mean by "good code" and "bad code" and establish a shared set of priorities over code quality. It's far better to have these meetings - and all the inevitable disagreements - at the start, when we can take steps to prevent issues, than to have them later when we can only ask "what went wrong?"
Executing Continuous Inspection
On new software, the effort in Continuous Inspection tends to be front-loaded, and with good reason.
As I've mentioned a few times already, it tends to be far cheaper to tackle code quality "bugs" early - the earlier the better. This means that adding new code quality requirements later in development tends to catch problems when they're much more expensive to fix, so it makes sense to set the quality bar as high as we can at the start.
There's good news and there's bad news. First, the bad news: on a new project, from a standing start, it's going to take considerable effort to get automated code inspections in place. It will vary greatly, depending on the technology stack, availability of tools, experience levels in the team, and so on. But it's not going to take an afternoon. So you may be faced with having to hide a big chunk of effort from non-technical stakeholders if you attempt to start development (from their perspective, when they're actively involved) at the same time as putting CInsp in place. (Same goes for builds, CI, and a raft of other stuff that we need to get up and running early on.)
Another very strong recommendation from me: have at least one iteration before you involve the customer. Get the development engine running smoothly before you wind down the window and shout "Where to, guv'nor?" They may be less than impressed to discover that you just need to build the engine before you can set off. Delighting customers is as much about expectations as it is about actual delivery.
Going back to the restaurant analogy, consider why restaurants distinguish between "service" and "preparation". Service may start at 6pm, but the chefs have probably been there since 9am getting things ready for that. If they didn't, then those first orders might take hours to reach the table. Too many development teams attempt the equivalent of starting service as the ingredients are being delivered to the kitchen. We need to do prep, too, before we can start taking orders.
Now, for the good news: the kinds of code quality requirements we might have on one, say, JEE project are likely to be similar on another JEE project. CInsp practitioners tend to find that they can get a lot of reuse out of code quality gates they've already developed for previous projects. So, over months and years, the overall cost of getting CInsp up and running tends to decrease quite significantly. If your technology stack remains fairly stable over the years, you may well find that getting things up and running can eventually become an almost push-button process. It takes a lot of investment to get there, though.
Code Quality stories work the same way as user stories in their execution. We plan what stories we're going to tackle in the current timebox in the same way. We tackle them in pairs, if possible. We treat them purely as placeholders to have a conversation with the person asking for each story. And, most importantly, we agree...
Continuous Inspection Acceptance Tests
Going back to our Feature Envy code quality story, what does the developer who wrote that story mean by "Feature Envy"?
Here's the definition from Martin Fowler's Refactoring book:
"A classic [code] smell is a method that seems more interested in a class other than the one it is in. The most common focus of the envy is the data."
It's all a bit handwavy, as is usually the case with software design wisdom. A human being using their intelligence, experience and judgement might be able to read this, look at some code and point to things that seem to them to fit the description.
Programming a computer to do it, on the other hand...
This is where we can inhabit our customer's world for a little while. When we ask our customer to precisely describe a business rule, we're putting them on the spot every bit as much as a computable definition of Feature Envy might put me and you on the spot. In cold, hard, computable terms: we don't quite know what we mean.
When the business problem we're solving is about, say, mortgages or video rentals or friend requests, we ask the customer for examples that illustrate the rule. Using examples, we can establish a shared vocabulary - a language for expressing the rule - explore the boundaries, and pin down a precise computable understanding of it (if there is one.)
We shouldn't be at all surprised that this technique also works very well for rules about our code. Ask the owner of a code quality story to track down some classic examples of code that breaks the rule, as well as code that doesn't (even if it looks at first glance like it might).
This is where the real skill in CInsp comes into play. To win at Continuous Inspection, development teams need to be skilled at reasoning about code. This is not a bad skill for a developer to have generally. It helps us communicate better, it helps us visualise better, it makes us better at design, at refactoring, at writing tools that work with code. Code is our domain model - the business objects of programming.
Using our code reasoning skills, applied to examples that will form the basis of acceptance tests, we can drive out the design of the simplest tool possible that will sound the alarm when the "bad" examples are considered, while silently allowing the "good" examples to pass through the quality gate.
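To give a feel for the shape of it, an acceptance test for the Feature Envy story might look something like this - hypothetical names throughout; the rule class and the agreed example classes would be whatever the team settles on:

```csharp
using NUnit.Framework;

[TestFixture]
public class FeatureEnvyGateTests
{
    // AgreedBadExample and AgreedGoodExample are the team's agreed examples of
    // code that breaks the rule and code that doesn't; FeatureEnvyGate is the
    // quality gate we're driving out with these tests.
    [Test]
    public void SoundsTheAlarmForTheAgreedBadExample()
    {
        var gate = new FeatureEnvyGate();

        Assert.That(gate.Check(typeof(AgreedBadExample)), Is.Not.Empty);
    }

    [Test]
    public void StaysSilentForTheAgreedGoodExample()
    {
        var gate = new FeatureEnvyGate();

        Assert.That(gate.Check(typeof(AgreedGoodExample)), Is.Empty);
    }
}
```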
As with functional user stories, we're not done until we have a working automated quality gate that satisfies our acceptance tests and can be applied to new code straight away.
In the next blog post, we'll be rolling up our sleeves with an example Continuous Inspection quality gate, implementing it using a variety of tools to demonstrate that there's often more than one way to skin the code quality cat.