June 29, 2018

Learn TDD with Codemanship

.NET Code Analysis using NDepend

It's been a while since I used it in anger, but I've been having fun this week reacquainting myself with NDepend, the .NET code analysis tool.

Those of us who are interested in automating code reviews for Continuous Inspection have a range of options for .NET, from tools built on the Cecil decompiler - e.g., FxCop - to compiler-integrated tools on the Roslyn platform.

Of all of them, I find NDepend to be by far the most mature. Its code model is much more expressive and intuitive (oh, the hours I've spent trying to map IL op codes on to source code!), and it integrates out of the box with a range of popular build and reporting tools like VSTS, TeamCity, Excel and SonarQube. In general, I find I'm up and running with a suite of usable quality gates much, much faster.



Under the covers, I believe we're still in Cecil/IL territory, but all the hard work's been done for us.

Creating analysis projects in NDepend is pretty straightforward. You can either select a set of .NET assemblies to be analysed, or a Visual Studio project or solution. It's very backwards-compatible, working with solutions as far back as VS 2005 (which, for training purposes, I still use occasionally).



I let it have a crack at the files for the Codemanship refactoring workshop, which are deliberately riddled with tasty code smells. My goal was to see how easy it would be to use NDepend to automatically detect the smells.



It found all the solution's assemblies, and crunched through them - building a code model and generating a report - in about a minute. When it's done, it opens a dashboard view which summarises the results of the analysis.



There's a lot going on in NDepend's UI, and this would be a very long blog post if I explored it all. But my goal was to use NDepend to detect the code smells in these projects, so I've focused on the features I'd use to do that.

First of all, with the code rules that come with NDepend out of the box, it didn't detect any of the smells I'm interested in. This is typical of any code analysis tool: the rules are not your rules. They're someone else's interpretation of code quality. FxCop's developers, for example, evidently have a far higher tolerance for complexity than I do.

The value in these tools is not in what they do out of the box, but in what you can make them do with a bit of thought. And for .NET, NDepend excels at this.

In the dialog at the bottom of the NDepend window, we can explore the code rules that it comes with and see how they've been implemented using NDepend's code model and some LINQ.



I'm interested in methods with too many parameters, so I clicked on that rule to bring up its implementation.



I happen to think that 5 parameters is too many, so I could easily change the threshold at which this rule is triggered in the LINQ. When I did, the results list immediately updated, showing the methods in my solution that have too many parameters.
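For illustration, the tweaked rule looks something like this in CQLinq. This is a sketch from memory rather than NDepend's stock rule verbatim, so check the exact property names against your version's documentation:

    // <Name>Methods with too many parameters</Name>
    // Sketch of the tweaked rule; the stock rule is similar, just with a higher threshold.
    warnif count > 0
    from m in JustMyCode.Methods
    where m.NbParameters > 4   // flag anything with 5 or more parameters
    orderby m.NbParameters descending
    select new { m, m.NbParameters }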



This matches my expectation, and the instant feedback is very useful when creating custom quality gates - really speeds up the learning process.

To view the offending code, I just had to double click on that method in the results list, and NDepend opened it in Visual Studio. (You can use NDepend from within Visual Studio, too, if you want a more seamless experience.)



The interactive and integrated nature of NDepend makes it a useful tool to have in code reviews. I've always found going through source files by eye, looking for issues, to be hard work and really rather time-consuming. Being able to search for them interactively like this can help a lot.

Of course, we don't just want to look for code smells in code reviews - that's closing the stable door after the horse has bolted a lot of the time. It's quite fashionable now for dev teams to include code reviews as part of their check-in process - the dreaded Pull Request. It makes sense, as a last line of defence, to try to prevent issues being checked into the code repository. What I'm seeing more and more, though, is that pull requests can become a bottleneck for the team. Like any manual testing, it slows us down and hampers Continuous Delivery.

The command-line version of NDepend can easily be integrated into your build pipeline, allowing for some pretty comprehensive code reviews that can be performed automatically (and therefore quickly, alleviating the bottleneck).

I decided to turn this code rule into a quality gate that could be used in a build, and set a policy that it should fail the build if more than 5 examples of long parameter lists are found.
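Roughly, the quality gate version of the rule looks something like the sketch below - again from memory, so verify the QualityGate header and failif syntax against NDepend's documentation before relying on it:

    // <QualityGate Name="Methods with long parameter lists" Unit="methods" />
    // Fails the analysis (and hence the build) if more than 5 offending methods are found.
    failif count > 5 methods
    from m in JustMyCode.Methods
    where m.NbParameters > 4
    select new { m, m.NbParameters }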



So, up and running with a simple quality gate in no time. But what about more complex code smells, like message chains and feature envy? In the next blog post I'll go deeper into NDepend's Code Query Language and explore the kinds of queries we can create with more thought.






May 25, 2018

Learn TDD with Codemanship

Ever-Decreasing Cycles - I Called It Right

I'm right about something roughly once in a decade, if I'm lucky. Looking back over 13 years of blog posts, I nominate this little gem - which predicted that, as our computers grew ever more powerful, continuous background code review would become a thing - as a candidate for "That Thing I Called Right".

The progression seemed perfectly logical. At the time I wrote it, we'd seen the advent of continuous background code compilation, giving us instant feedback when we make silly syntax errors. Younger developers may not be aware of just what a difference that made to those of us who remember when compiling the code involved going away to get a coffee (or lunch, or dinner and a show). So much time saved!

With less brain power dedicated to "does it run?", we were freed up to think about a higher question: does it work? In 2008, continuous background testing tools like Infinitest and JUnitMax were becoming more popular. Today, I see them quite widely used, and can easily foresee all of us using them within the next decade.

So we've progressed from "does it run?" to "does it work?" as our computers have increased their processing power, and the next evolution I predicted was to continuously ask "will it be easy to change?" At the time, the majority of code analysis tools took too long to do what they did to be running continuously in the background alongside compilation and functional testing. (There were one or two adventurous experimental tools, but we haven't heard much from them in the meantime.)

With Microsoft's Roslyn compiler, continuous background code review is now finally a thing. We can write code quality checks and build them into the compilation pipeline, creating feedback on things like variable names, method size and complexity, couplings, and all that stuff we care about for maintainability, in real time, as we type the code. I suspect such a capability will be added to other compiler platforms in the next decade or so.
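To give a flavour - and this is just a minimal sketch, not production-ready, with an illustrative rule ID, category and threshold of my own choosing - a Roslyn analyzer that flags long parameter lists as you type looks something like this:

    using System.Collections.Immutable;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;
    using Microsoft.CodeAnalysis.Diagnostics;

    [DiagnosticAnalyzer(LanguageNames.CSharp)]
    public class LongParameterListAnalyzer : DiagnosticAnalyzer
    {
        // Rule ID, category and threshold are illustrative choices, not a standard.
        private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
            id: "CC0001",
            title: "Method has too many parameters",
            messageFormat: "Method '{0}' has {1} parameters; consider introducing a parameter object",
            category: "CodeCraft",
            defaultSeverity: DiagnosticSeverity.Warning,
            isEnabledByDefault: true);

        public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics =>
            ImmutableArray.Create(Rule);

        public override void Initialize(AnalysisContext context)
        {
            // Ask the compiler to tell us about every method declaration as it's analysed.
            context.RegisterSyntaxNodeAction(AnalyzeMethod, SyntaxKind.MethodDeclaration);
        }

        private static void AnalyzeMethod(SyntaxNodeAnalysisContext context)
        {
            var method = (MethodDeclarationSyntax)context.Node;
            var parameterCount = method.ParameterList.Parameters.Count;
            if (parameterCount > 3)
            {
                context.ReportDiagnostic(Diagnostic.Create(
                    Rule, method.Identifier.GetLocation(), method.Identifier.Text, parameterCount));
            }
        }
    }

Package that in an analyzer assembly and the warnings appear in the editor and the build, just like compiler warnings.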

Sure, it's still early days, and my experiments with it suggest computing power needs maybe one or two more iterations to rise fully to the number-crunching challenge. But - just as for those plucky pioneers who ventured out with Infinitest in its early days - it's here, in a practical form that we can begin using today. There'll be a learning curve. Start climbing it now, is my recommendation.

My hope for continuous background code review is that it will yet again free up our minds to focus on more important questions, like "is this what they really need?"

And that will be a great day for software.


* And, yes, I had hoped I'd been right about high-integrity software becoming mainstream, but interest in that has flat-lined these past 20 years. Maybe next year... Ho hum.



April 6, 2018

Learn TDD with Codemanship

Could Refactoring (& Refuctoring) Help Us Test Claims About the Benefits of Clean Code?

One of the more frustrating things about teaching developers about code craft and "Clean Code" is the lack of credible hard evidence from respectable sources about the claimed benefits of it.

Not only does this make code craft a tougher sell to skeptics - and there was a time when I was one of them, decades ago - but it also calls into question whether the alleged benefits are real.

The barriers to doing research in this area have been twofold:

1. The lack of data points. Most software engineering academic studies take data from a handful of projects. If this were, say, medical research, we'd never get our medicines on to the market.

2. The problem of comparing apples with apples. There are so many factors in software development that it's pretty much impossible to isolate one and rule out all others. Studies into the effects of adopting TDD can't account for the variations in experience and ability, for example. Teams new to TDD tend to have to deal with a steep learning curve before they become productive again.

When I consider some of the theories about what makes code harder to change - the central plank of the code craft thesis - some have strong evidence to back them up; others... not so much.

I've had a bit of a brainwave in this area that might help researchers. Take a code base, then specifically vary it along a single dimension. e.g., refactor to remove duplication, or "refuctor" to introduce duplication (by inlining functions and modules). The resulting variants should all be functionally equivalent, but you could fine-grain the levels of variation. Then ask developers to make changes to the logic, and measure how much code had to be edited to achieve those changes. Automated acceptance tests would ensure that every change was logically equivalent.
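To make that concrete with a deliberately trivial sketch (the names and the VAT rule are invented for illustration): the two variants below are functionally identical and would pass the same acceptance tests, but in the second the shared rule has been inlined - "refuctored" - so only the duplication dimension has been turned up.

    public static class Pricing
    {
        // Refactored variant: the VAT rule lives in one place.
        public static decimal PriceWithVat(decimal net) => net * 1.2m;
        public static decimal InvoiceTotal(decimal net) => PriceWithVat(net);
        public static decimal QuoteTotal(decimal net) => PriceWithVat(net);

        // "Refuctored" variant: the helper has been inlined, duplicating the rule.
        // A change to the VAT rate now means edits in two places.
        public static decimal InvoiceTotalDuplicated(decimal net) => net * 1.2m;
        public static decimal QuoteTotalDuplicated(decimal net) => net * 1.2m;
    }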

I can easily envisage how refactoring (and its evil twin, refuctoring) could be used to vary readability, complexity, duplication, coupling and cohesion (e.g., by moving methods between classes to introduce or eliminate feature envy), "swappability" (e.g., by introducing dependency injection, or by reversing the dependency inversion, using explicit references to concrete implementations of interfaces) and a range of other code qualities. Automated tests could ensure that every variant still works exactly the same way on the outside.

And the tests themselves could be varied. For example, you could manipulate test suite execution time so that in some cases developers had to wait an hour for feedback, while in others they only had to wait seconds for the same feedback.

I think I might be on to something. What do you think?


December 5, 2017

Learn TDD with Codemanship

Automating Code Reviews

This comes up quite often these days, so I thought I'd scribble my thoughts down, for posterity if nothing else.

I increasingly come across dev teams who have adopted a policy where every check-in needs to be reviewed before it can be accepted. In many cases, this has created a bottleneck: developers waiting to get a green build are stuck until their peers are available to do the reviewing.

Imagine if, every time you wanted to check your code in, you had to wait for a tester to put your code through its paces. We knew that was a major bottleneck, so we started automating our tests. If the tester would normally check to see what happens if a customer cancels an order, we would write a unit test for the cancel() function of an order.

It's really not much different for code inspections. If a reviewer would normally check that no classes are too big (say, having more than 200 lines of code), we could write a bit of code to inspect every class and report any that exceed our limit.
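As a sketch of how little code that takes - using Roslyn to parse the source rather than any particular inspection framework, and with an assumed source path; the 200-line limit is the example above - such a check could live in an ordinary NUnit test:

    using System.IO;
    using System.Linq;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;
    using NUnit.Framework;

    [TestFixture]
    public class CodeQualityGates
    {
        private const int MaxClassLength = 200;

        [Test]
        public void No_class_exceeds_the_maximum_length()
        {
            // Parse every source file and measure each class declaration's line count.
            var offenders =
                (from file in Directory.EnumerateFiles(@"..\..\src", "*.cs", SearchOption.AllDirectories)
                 let root = CSharpSyntaxTree.ParseText(File.ReadAllText(file)).GetRoot()
                 from cls in root.DescendantNodes().OfType<ClassDeclarationSyntax>()
                 let span = cls.GetLocation().GetLineSpan()
                 let length = span.EndLinePosition.Line - span.StartLinePosition.Line + 1
                 where length > MaxClassLength
                 select $"{cls.Identifier.Text} ({length} lines) in {file}").ToList();

            Assert.That(offenders, Is.Empty, string.Join("\n", offenders));
        }
    }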

A pretty comprehensive code inspection could cover a large amount of code, checking for a whole range of issues, in a tiny fraction of the time it takes a human. More importantly, those checks could be run any time. No need to wait for Jenny to get off the phone, or Rajesh to come back from lunch. You'd no longer be blocked.

This, of course, takes some considerable investment early on to develop the right suite of automated quality checks. But I see more and more teams struggling to maintain the pace of development and high code quality, and such an investment really pays for itself many times over, even on relatively short timescales.

It's for this reason that I'm going to be giving Continuous Inspection a big push in 2018. I think most teams should seriously consider it.


August 21, 2017

Learn TDD with Codemanship

Codemanship Code Craft FxCop Rules

So, here they are. Hot from the oven, my FxCop code rules for the upcoming Codemanship Code Craft "Driving Test".


Some rubbish code, yesterday.

If you're signed up to be one of our valiant guinea pigs for the trial driving test on Sept 16th, I heartily recommend you download them and get a bit of practice. Try writing code that breaks each of the 11 rules, and then refactoring that code to make the nasty messages go away.

There are versions for Visual Studio 2013, 2015 and 2017, plus instructions on installing and using the rules with your own projects.

And even if you're not doing the driving test on Sept 16th, have a go anyway. Your code may not be as clean as you think ;)

Any bugs or false positives, drop me a line.



August 6, 2017

Learn TDD with Codemanship

What *Exactly* Is "Feature Envy"?

I'm currently writing some custom FxCop rules for the trial Codemanship Code Craft "driving test" on Sept 16th. The aim is that not only will I be able to automatically check candidates' code, but they will be able to check it themselves while they're writing it, too. The power of Continuous Inspection!

One of the rules is that methods of one class must not display Feature Envy for another class. Typically, Feature Envy's defined as:

A method accesses the features of another class more than its own.


And this might seem trivial to check for using a tool like FxCop. Look at all the member bindings inside a method. If there are more bindings to members of other types than to members of the type on which this method's declared, then we've got Feature Envy. To fix it, we can just move the method to the focus of its envy.
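The real rules are written against FxCop's code model, but just to illustrate the counting logic, here's a rough sketch of the "classic" check using Roslyn's semantic model instead (the class name and the crude identifier counting are mine, not the FxCop implementation):

    using System.Linq;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;

    public static class FeatureEnvyCheck
    {
        // Classic definition: more bindings to other types' members than to our own type's members.
        public static bool DisplaysClassicFeatureEnvy(MethodDeclarationSyntax method, SemanticModel model)
        {
            var declaringType = model.GetDeclaredSymbol(method).ContainingType;

            // Crude counting: every identifier in the method that binds to a field, property or method.
            var boundMembers = method.DescendantNodes()
                .OfType<IdentifierNameSyntax>()
                .Select(name => model.GetSymbolInfo(name).Symbol)
                .Where(symbol => symbol is IFieldSymbol || symbol is IPropertySymbol || symbol is IMethodSymbol)
                .ToList();

            var ownFeatures = boundMembers.Count(m => Equals(m.ContainingType, declaringType));
            var foreignFeatures = boundMembers.Count - ownFeatures;

            return foreignFeatures > ownFeatures;
        }
    }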

But I'm not sure it's quite that simple. This example might be an open-and-shut case:



But how about this?



The majority of feature calls in this method are to methods of the same class. But that code smell we saw in the first example is still here, on lines 3 and 4. Proof? What if we extract those 2 lines into their own method?



The method obviousFeatureEnvy now completely satisfies our definition of Feature Envy and should be moved to the other class.

I think this leads me to a better definition of Feature Envy:

Feature Envy is when any unit of executable code - a method, a block, a statement or an expression - uses features of another class more than features of its own class.


Basically, if you can extract any portion of the code into a method that displays the original, "classic" Feature Envy, then the original code displays Feature Envy too.

But wait; there's more. Take a look at this example:



Technically, only one of these methods satisfies our definition of Feature Envy, but if we were to inline the call stack, we'd end up with one method with very obvious Feature Envy.

It's much more complex than I thought. But, for the driving test, I'll probably keep it simple and stick with the classic - and much easier - definition of Feature Envy.

But one day, when I've got time...



August 10, 2015

Learn TDD with Codemanship

A Hierarchy Of Software Design Needs

Design is not a binary proposition. There is no clear dividing line between a good software design and bad software design, and even the best designs are compromises that seek to balance competing forces like performance, readability, testability, reuse and so on.

When I refactor a design, it can sometimes introduce side-effects - namely, other code smells - that I deem less bad than what was there before. For example, maybe I have a business object that renders itself as HTML - bad, bad, bad! Right?

The HTML format is likely to change more often than the object's data schema, and we might want to render it to other formats. So it makes sense to split out the rendering part into a separate object. But in doing so, we create "feature envy" - an unhealthily high coupling between our renderer and the business object, so that the renderer can get the data it needs.

I consider the new feature envy less bad than the dual responsibility, so I live with it.

In fact, there tends to be a hierarchy of needs in software design, where one design issue will take precedence over another. It's useful, when starting out, to know what that hierarchy of needs is.

Now, the needs may differ depending on the requirements of our design - e.g., on a small-memory device, memory footprint matters way more than it does for desktop software usually - but there is a fairly consistent pattern that appears over and over in the majority of applications.

There is, of course, a whole universe of qualities we may need to balance. But let's deal with the top six to get you thinking:

1. The Code Must Work

Doesn't matter how good you think the design is if it doesn't do what the customer needs. Good design always comes back to "yes, but does it pass the acceptance tests?" If it doesn't, it's de facto a bad design, regardless.

2. The Code Must Be Easy To Understand

By far the biggest factor in the maintainability of code is whether or not programmers can understand it. I will gladly sacrifice less vital design goals to make code more readable. Put more effort into this. And then put even more effort into it. However much attention you're paying to readability, it's almost certainly not enough. C'mon, you've read code. You know it's true.

But if the code is totally readable, but doesn't work, then spend more time on 1.

3. The Code Must Be As Simple As We Can Make It

Less code generally means a lower cost of maintenance. But beware; you can take simplicity too far. I've seen some very compact code that was almost intractable to human eyes. Readability trumps simplicity. And, yes, functional programmers, I'm particularly looking at you.

4. The Code Must Not Repeat Itself

The opposite of duplication is reuse. Yes it is: don't argue!

Duplication in our code can often give us useful clues about generalisations and abstractions that may be lurking in there that need bringing out through refactoring. That's why "removing duplication" is a particular focus of the refactoring step in Test-driven Development.

Having said that, code can get too abstract and too general at the expense of readability. Not everything has to eventually turn into the Interpreter pattern, and the goal of most projects isn't to develop yet another MVC framework.

In the Refuctoring Challenge we do on the TDD workshops, over-abstracting often proves to be a sure-fire way of making code harder to change.

5. Code Should Tell, Not Ask

"Tell, Don't Ask" is a core pillar of good modular -notice I didn't say "object oriented" - code. Another way of framing it is to say "put the work where the knowledge is". That way, we end up with modules where more dependencies are contained and fewer dependencies are shared between modules. So if a module knows the customer's date of birth, it should be responsible for doing the work of calculating the customer's current age. That way, other modules don't have to ask for the date of birth to do that calculation, and modules know a little bit less about each other.

It goes by many names: "encapsulation", "information hiding" etc. But the bottom line is that modules should interact with each other as little as possible. This leads to modules that are more cohesive and loosely coupled, so when we make a change to one, it's less likely to affect the others.

But it's not always possible, and I've seen some awful fudges when programmers apply Tell, Don't Ask at the expense of higher needs like simplicity and readability. Remember simply this: sometimes the best way is to use a getter.

6. Code Should Be S.O.L.I.D.

You may be surprised to hear that I put OO design principles so far down my hierarchy of needs. But that's partly because I'm an old programmer, and can vaguely recall writing well-designed applications in non-OO languages. "Tell, Don't Ask", for example, is as do-able in FORTRAN as it is in Smalltalk.

Don't believe me? Then read the chapter in Bertrand Meyer's Object Oriented Software Construction that deals with writing OO code in non-OO languages.

From my own experiments, I've learned that coupling and cohesion have a bigger impact on the cost of changing code. A secondary factor is substitutability of dependencies - the ability to insert a new implementation in the slot of an old one without affecting the client code. That's mostly what S.O.L.I.D. is all about.

This is the stuff that we can really only do in OO languages that directly support polymorphism. And it's important, for sure. But not as important as coupling and cohesion, lack of duplication, simplicity, readability and whether or not the code actually works.

Luckily, apart from the "S" in S.O.L.I.D. (Single Responsibility), the O.L.I.D. is fairly orthogonal to these other concerns. We don't need to trade off between substitutability and Tell, Don't Ask, for example. They're quite compatible, as are the other design needs - if you do it right.

In this sense, the trade-off is more about how much time I devote to thinking about S.O.L.I.D. compared to other more pressing concerns. Think about it: yes. Obsess about it: no.


Like I said, there are many, many more things that concern us in our designs - and they vary depending on the kind of software we're creating - but I tend to find these 6 are usually at the top of the hierarchy.

So... What's your hierarchy of design needs?









July 31, 2015

Learn TDD with Codemanship

Triangulating Your Test Code

While we're triangulating our solutions in TDD, our source code ought to be getting more general with each new test case.

But it's arguably not just the solution that should be getting more general; our test code could probably be generalised, too.

Take a look at this un-generalised code for the first two tests in a TDD'd implementation of a Fibonacci sequence generator:



Jumping in at this point, we see that our solution is still hard-coded. The trick to triangulation is to spot the pattern. The pattern for the first two Fibonacci numbers is that they are the same as their index in the sequence (assuming a zero-based array).

We can generalise our list into a loop that generates the list using the pattern (see Bob Martin's post on the Transformation Priority Premise, or, what I more simply call triangulation patterns).

But we can also generalise our test code into a single parameterised test, using the pattern as the test name, so it reads more like the specification we hope our tests in TDD will become:
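(The post's code screenshots aren't reproduced here, but with NUnit the parameterised version might look something like this sketch - the FibonacciGenerator class and its Number() method are illustrative names, not from the original.)

    using NUnit.Framework;

    [TestFixture]
    public class FibonacciTests
    {
        // The pattern is the test name: the first two numbers are the same as their index.
        [TestCase(0)]
        [TestCase(1)]
        public void First_two_numbers_are_the_same_as_their_index(int index)
        {
            Assert.AreEqual(index, new FibonacciGenerator().Number(index));
        }
    }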



Now, because all subsequent tests are going to follow the same pattern (we provide an index and check what the expected Fibonacci number is at that index), we could carry on reusing this parameterised test for the rest of the problem.

BUT...

Then we'd have to generalise the name of the test - a key part of our test-driven specification - to the point where every single pattern (every rule) is summarised in one test. I no likey. It's much harder to read, and when a test case fails, it's not entirely clear which rule was broken.

So, what I like to do is keep a bit of duplication in order to have one generalised test for each pattern/rule in the specification.

So, continuing on, I might end up with:
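(Again, the original screenshot isn't shown here; this sketch, with the same illustrative names as before, shows the shape - one parameterised test per rule, with the knowledge of how to create and call the object under test pulled out into a helper.)

    using NUnit.Framework;

    [TestFixture]
    public class FibonacciTests
    {
        [TestCase(0)]
        [TestCase(1)]
        public void First_two_numbers_are_the_same_as_their_index(int index)
        {
            Assert.AreEqual(index, FibonacciNumberAt(index));
        }

        [TestCase(2, 1)]
        [TestCase(3, 2)]
        [TestCase(4, 3)]
        [TestCase(5, 5)]
        public void Remaining_numbers_are_the_sum_of_the_previous_two(int index, int expected)
        {
            Assert.AreEqual(expected, FibonacciNumberAt(index));
        }

        // The duplicated knowledge of how to create and interact with the object
        // under test lives in one place.
        private static int FibonacciNumberAt(int index)
        {
            return new FibonacciGenerator().Number(index);
        }
    }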



Notice that, although these two test methods duplicate each other's structure, I've taken the step of refactoring out the duplicated knowledge of how to create and interact with the object being tested. This kind of duplication in test code tends to hurt us most: many teams report how tight coupling between tests and the objects under test made interfaces much more expensive to change. So I feel this is a small compromise that aids readability while not sacrificing too much to duplication.




April 25, 2015

Learn TDD with Codemanship

Continuous Inspection Screencast

It's been quite a while since I did a screencast. Here's a new one about Continuous Inspection, which is a thing. (Oh yes.)





March 1, 2015

Learn TDD with Codemanship

Continuous Inspection at NorDevCon

On Friday, I spent a very enjoyable day at the Norfolk developers' conference NorDevCon (do you see what they did there?) It was my second time at the conference, having given the opening keynote last year, and it's great to see it going from strength to strength (attendance up 50% on 2014), and to see Norwich and Norfolk being recognised as an emerging tech hub that's worthy of inward investment.

I was there to run a workshop on Continuous Inspection, and it was a good lark. You can check out the slides, which probably won't make a lot of sense without me there to explain them - but come along to CraftConf in Budapest this April or SwanseaCon 2015 in September and I'll answer your questions.

You can also take a squint at (or have a play with) some code I knocked up in C# to illustrate a custom FxCop code rule (Feature Envy) to see how I implemented the example from the slides in a test-driven way.

I'm new to automating FxCop (and an infrequent visitor to .NET Land), so please forgive any naivety. Hopefully you get the idea. The key things to take away are: you need a model of the code (thanks Microsoft.Cci.dll), you need a language to express rules against that model (thanks C#), and you need a way to drive the implementation of rules by writing executable tests that fail (thanks NUnit). The fun part is turning the rule implementation on its own code - eating your own dog food, so to speak. It throws up all sorts of test cases you didn't think of. It's a work in progress!

I now plan, before CraftConf, to flesh the project out a bit with 2-3 more example custom rules.

Having enjoyed a catch-up with someone who just happens to be managing the group at Microsoft who are working on code analysis tools, I think 2015-2016 is going to see some considerable ramp-up in interest as the tools improve and integration across the dev lifecycle gets tighter. If Continuous Inspection isn't on your radar today, you may want to put it on your radar for tomorrow. It's going to be a thing.

Right now, though, Continuous Inspection is very much a niche pastime. An unscientific straw poll on social media, plus a trawl of a couple of UK job sites, suggests that less than 1% of teams might even be doing automated code analysis at all.

I predicted a few years ago that, as computers get faster and code gets more complex, frequent testing of code quality using automated tools is likely to become more desirable and more do-able. I think we're just on the cusp of that new era. Today, code quality is an ad hoc concern relying on hit-and-miss practices like pair programming, where many code quality issues get overlooked by a pair who have 101 other things to think about, and code reviews, where issues - if they get spotted at all in the to-and-fro - are flagged up long after anybody is likely to do anything about them.

In related news, after much discussion and braincell-wrangling, I've chosen the name for the conference that will be superseding Software Craftsmanship 20xx later this year (because craftsmanship is kind of done now as a meme). Watch this space.