August 21, 2017

Learn TDD with Codemanship

Codemanship Code Craft FxCop Rules

So, here they are. Hot from the oven, my FxCop code rules for the upcoming Codemanship Code Craft "Driving Test".


Some rubbish code, yesterday.

If you're signed up to be one of our valiant guinea pigs for the trial driving test on Sept 16th, I heartily recommend you download them and get a bit of practice. Try writing code that breaks each of the 11 rules, and then refactoring that code to make the nasty messages go away.

There are versions for Visual Studio 2013, 2015 and 2017, plus instructions on installing and using the rules with your own projects.

And even if you're not doing the driving test on Sept 16th, have a go anyway. Your code may not be as clean as you think ;)

Any bugs or false positives, drop me a line.



August 17, 2017

Learn TDD with Codemanship

Your House, Your (Code Quality) Rules


Picking up where I left off on the custom FxCop rules for the Codemanship Code Craft "Driving Test" has reminded me of something that's vitally important.

This morning I wrote a class that enumerates a type's collaborators. The code currently looks like this:
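(The real thing is built with the FxCop analysis API. For a flavour of the logic, here's a simplified, reflection-based sketch; it leaves out the bound-members part, which needs IL-level analysis, and the naming is generic.)

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// Simplified sketch only - not the actual FxCop rule code.
public class CollaboratorEnumerator
{
    public IEnumerable<Type> CollaboratorsOf(Type type)
    {
        const BindingFlags all = BindingFlags.Instance | BindingFlags.Static |
                                 BindingFlags.Public | BindingFlags.NonPublic |
                                 BindingFlags.DeclaredOnly;

        var fieldTypes = type.GetFields(all).Select(f => f.FieldType);

        var methods = type.GetMethods(all);

        var parameterTypes = methods.SelectMany(m => m.GetParameters())
                                    .Select(p => p.ParameterType);

        var localVariableTypes = methods.Where(m => m.GetMethodBody() != null)
                                        .SelectMany(m => m.GetMethodBody().LocalVariables)
                                        .Select(v => v.LocalType);

        return fieldTypes.Concat(parameterTypes).Concat(localVariableTypes)
            .Where(t => t.Assembly == type.Assembly)        // non-project types don't count
            .Where(t => t != type && !type.IsSubclassOf(t)) // neither do base classes
            .Distinct();
    }
}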



Codified in this class is an understanding of what I mean by a "collaborator", for the purposes of the driving test.

First of all, I'm not including non-project types. This is a judgement call to keep design rules realistic. My rule will limit the number of collaborating types to 3. If that includes core library types etc, it's going to be really tough.

I'm including fields, parameters and local variables. I'm also including the declaring types of any bound members. So if I call a method that returns an object and then I call a method on that, it'll be included.

I'm also not counting base classes as collaborators. Again, it's a judgement call.

I'm working alone, so I get to make the rules. But in a team setting that absolutely should not happen. Don't send the "tools dev" away to work in isolation on quality gates for Continuous Inspection. Because what will happen is, when they return and unleash their rules on the rest of the team, there'll be tears before bedtime.

The whole team needs to be involved. This is a great candidate for mob programming, in my experience. While you're waiting for business requirements in the early stages of a project/product, here's what the team could be doing to get the delivery engine up and running.

It will require the team to have discussions about code quality with a level of precision they've probably never had before. I think this is a good thing.



August 11, 2017

Learn TDD with Codemanship

Update: Code Craft "Driving Test" FxCop Rules



I've been continuing work on a tool to automatically analyse .NET code submitted for the Code Craft "Driving Test".

Despite tying myself in knots for the first week trying to build a whole code analysis and reporting framework - when will I ever learn?! - I'm making good progress with Plan B this week.

Plan B is to go with the Visual Studio code analysis infrastructure (basically, FxCop). It's been more than a decade since I wrote FxCop rules, and it's been an uphill battle wrapping my head around it again (along with all the changes they've made since 2006).

But I now have 8 code quality rules that are kind of sort of working. Some work well. Some need more thought.

1. Methods can't be longer than 10 LOC

2. Methods can't have > 2 branches

3. Identifiers must be <= 20 characters. (Plan is to exempt test fixture/method names. TO-DO.)

4. Classes can't have > 8 methods (so max class size is 80 LOC)

5. Methods can't use more than one feature of another class. (My very basic interpretation of "Feature Envy". Again, TO-DO to improve that. Again, test code may be exempt.)

6. Boolean parameters are not allowed

7. Methods can't have > 3 parameters

8. Methods can't instantiate project types, unless they are factory or builder methods that return an abstract type. (The beginning of my Dependency Inversion "pincer movement". 2 more rules to come preventing invocation of static project methods, and methods that aren't virtual or abstract. Again, factories and builders will be exempt, as well as test code.)
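To give a flavour of what a couple of these rules are driving at - rules 6 and 8 in particular - here's an invented before-and-after, not taken from the rule set itself:

// Before: trips rule 6 (Boolean parameter) and rule 8 (directly instantiating
// project types outside a factory/builder).
public class Invoicing
{
    public void Send(Invoice invoice, bool asEmail)
    {
        IInvoiceSender sender = asEmail ? (IInvoiceSender)new EmailSender()
                                        : new PostalSender();
        sender.Send(invoice);
    }
}

// After: the Boolean disappears (the choice of sender is made where the object
// is composed), and the collaborator is injected as an abstraction instead of
// being new-ed up here.
public class CleanerInvoicing
{
    private readonly IInvoiceSender sender;

    public CleanerInvoicing(IInvoiceSender sender)
    {
        this.sender = sender;
    }

    public void Send(Invoice invoice)
    {
        sender.Send(invoice);
    }
}

public class Invoice { }

public interface IInvoiceSender { void Send(Invoice invoice); }

public class EmailSender : IInvoiceSender
{
    public void Send(Invoice invoice) { /* send by email */ }
}

public class PostalSender : IInvoiceSender
{
    public void Send(Invoice invoice) { /* print and post */ }
}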

What's been really fun about the last couple of weeks has been eating my own dog food. As each new rule emerges, I've been applying it frequently to my own code. I'm a great believer in the power of Continuous Inspection, and this has been a timely reminder of just how powerful it can be.

Red-Green-INSPECT-Refactor


After passing every test, and performing every refactoring, I run a code analysis that will eventually systematically check all my code for 15 or so issues. I fix any problems it raises there and then. I don't commit or push code that fails code analysis.



In Continuous Inspection, this is the equivalent of all my tests being green. Of course, as with functional tests, the resulting code quality may only be as good as the code quality tests. And I'm refining them with more and more examples, and applying them to real code to see what designs they encourage. So far, not so shabby.

And for those inevitable occasions when blindly obeying the design rules would make our code worse, the tool will have a mechanism for opting out of a rule. (Probably a custom attribute that you can apply to classes and fields and methods etc, specifying which rule you're breaking and - most importantly - documenting why. Again, a TO-DO.) In the Driving Test, I'm thinking candidates will get 3 such "hall passes".
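To make that concrete, the opt-out might end up looking something like the sketch below. It's purely hypothetical - the attribute is still a TO-DO, so the name and shape here are invented.

using System;

// Hypothetical sketch of the planned opt-out mechanism.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method | AttributeTargets.Field,
                AllowMultiple = true)]
public class BreaksCodeCraftRuleAttribute : Attribute
{
    public string Rule { get; private set; }
    public string Justification { get; private set; }

    public BreaksCodeCraftRuleAttribute(string rule, string justification)
    {
        Rule = rule;
        Justification = justification;
    }
}

// Spending one of the three "hall passes" might then look like:
// [BreaksCodeCraftRule("MethodTooLong", "The parser state machine reads more clearly as one method")]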

If you want to see the work so far, and try it out for yourself, the source code's at https://github.com/jasongorman/CodeCraft.FxCop

And I've made a handful more tickets available for the trial Code Craft "Driving Test" for C# developers on Sept 16th. It's free this time, in exchange for your adventurous and forgiving participation in this business experiment :)







August 6, 2017

Learn TDD with Codemanship

What *Exactly* Is "Feature Envy"?

I'm currently writing some custom FxCop rules for the trial Codemanship Code Craft "driving test" on Sept 16th. The aim is that not only will I be able to automatically check candidates' code, but candidates will be able to check their own code while they're writing it, too. The power of Continuous Inspection!

One of the rules is that methods of one class must not display Feature Envy for another class. Typically, Feature Envy's defined as:

A method accesses the features of another class more than its own.


And this might seem trivial to check for using a tool like FxCop. Look at all the member bindings inside a method. If there are more bindings to members of other types than to members of the type on which this method's declared, then we've got Feature Envy. To fix it, we can just move the method to the focus of its envy.

But I'm not sure it's quite that simple. This example might be an open-and-shut case:
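Something along these lines, say (an invented sketch standing in for the original example):

public class Account
{
    public string Number { get; set; }
    public decimal Balance { get; set; }
    public decimal HeldFunds { get; set; }
    public decimal OverdraftLimit { get; set; }
    public decimal OverdraftUsed { get; set; }
}

public class StatementPrinter
{
    // Every feature this method uses belongs to Account, and none to its own
    // class - classic Feature Envy.
    public string Print(Account account)
    {
        return account.Number + ": " + (account.Balance - account.HeldFunds);
    }
}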



But how about this?
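Again an invented sketch (Account is as defined above):

public class StatementPrinter
{
    public string Print(Account account)
    {
        var header = FormatHeader();
        var footer = FormatFooter();
        var available = account.Balance - account.HeldFunds;
        available += account.OverdraftLimit - account.OverdraftUsed;
        return Compose(header, Divider(), FormatLine(available), footer);
    }

    private string FormatHeader() { return "STATEMENT"; }
    private string FormatFooter() { return "END"; }
    private string Divider() { return "-----"; }
    private string FormatLine(decimal amount) { return "Available: " + amount; }
    private string Compose(params string[] parts) { return string.Join("\n", parts); }
}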



The majority of feature calls in this method are to methods of the same class. But that code smell we saw in the first example is still here, in the two lines that use the account. Proof? What if we extract those two lines into their own method?
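Continuing the sketch:

public class StatementPrinter
{
    public string Print(Account account)
    {
        var header = FormatHeader();
        var footer = FormatFooter();
        var available = obviousFeatureEnvy(account);
        return Compose(header, Divider(), FormatLine(available), footer);
    }

    private decimal obviousFeatureEnvy(Account account)
    {
        var available = account.Balance - account.HeldFunds;
        return available + account.OverdraftLimit - account.OverdraftUsed;
    }

    // FormatHeader, FormatFooter, Divider, FormatLine and Compose as before
}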



The method obviousFeatureEnvy now completely satisfies our definition of Feature Envy and should be moved to the other class.

I think this leads me to a better definition of Feature Envy:

Feature Envy is when any unit of executable code - a method, a block, a statement or an expression - uses features of another class more than features of its own class


Basically, it's Feature Envy if you can extract any portion of the code into a method that displays the original, "classic" definition of Feature Envy.

But wait; there's more. Take a look at this example:
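Once more, an invented sketch (reusing the Account class from earlier):

public class BalanceReporter
{
    public string Print(Account account)
    {
        var heading = Heading();
        return heading + Summarise(account);
    }

    private string Summarise(Account account)
    {
        var available = AvailableFunds(account);
        return account.Number + ": " + available;
    }

    // Only this method uses more features of Account than of its own class
    private decimal AvailableFunds(Account account)
    {
        return account.Balance - account.HeldFunds + account.OverdraftLimit;
    }

    private string Heading() { return "BALANCE "; }
}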



Technically, only one of these methods satisfies our definition of Feature Envy, but if we were to inline the call stack, we'd end up with one method with very obvious Feature Envy.

It's much more complex than I thought. But, for the driving test, I'll probably keep it simple and stick with the classic - and much easier - definition of Feature Envy.

But one day, when I've got time...



August 1, 2017

Learn TDD with Codemanship

Codemanship Code Craft Driving Test - Trial Run



In my last blog post, I talked about the code quality criteria for our planned "driving test" for code crafters.

If you're interested in taking the test, I'm planning a trial run on Saturday Sept 16th. Initially, it will be just for C# programmers.

Entry is free (this time). You'll need a decent Internet connection, Visual Studio (2013 or later, Community is fine), and a GitHub account. Resharper is highly recommended (you can download a trial version from https://www.jetbrains.com/resharper/download/ - but don't do it now, because it will time out before the trial!). You'll be writing tests with NUnit 2.6.x. You can use whichever .NET mocking framework you desire. No other third-party libraries should be required.

The project we'll set you should take 4-8 hours, and you'll have 24 hours to submit your solution and your screencast link after we begin.

If you've been on Codemanship training courses, the quality criteria should make sense to you, and you'll know what we're looking for in your screencast.

If you haven't, then I recommend twisting the boss's arm and sending them a link to http://www.codemanship.com :)

You can register using the form below.




July 31, 2017

Learn TDD with Codemanship

Codemanship Code Craft "Driving Test" - Code Quality Criteria

Much pondering today about the code quality standards that should be applied in the Codemanship Code Craft Driving Test we'll be trialling at the end of the summer.

Since it's a test, I think it's fair to apply more rigorous and unyielding standards, provided that these are laid out unambiguously in advance.

We can divide it up into 7 key areas, with some slightly different criteria for test code to allow for more verbose method names and a bit more code duplication:

1. It Works

* Your solution passes all of the customer tests we'll give you
* Your solution also survives a more exhaustive suite of tests to hunt for any lurking bugs

2. It's Readable

* The Conceptual Correlation between your code and the requirements is > 80%
* None of your identifiers contains > 20 characters (except for test method names)
* No line of your code contains > 100 characters

3. It's Low in Duplication

* A check using Simian will reveal no more than 15% code duplication (We'll give you the precise Simian options so you can check for yourself)

4. It's Made of Simple Parts

* No method will contain > 10 LOC
* No method will have > 2 branches
* No method will have > 3 parameters
* No method will have Boolean parameters
* No class will have > 6 methods

5. It's Made of Swappable Parts

* Excluding your tests, all dependencies on other classes in your solution will be swappable by dependency injection. Use of DI frameworks will also be forbidden. (Dependency Inversion)
* No non-test class will invoke any method on an instance of another class in the solution that can't be easily extended or swapped (e.g., in C#, only methods on interfaces or virtual methods can be invoked)

6. The Parts Are Loosely Coupled

* No class will depend on > 3 other classes in your solution
* No method will exhibit Feature Envy (when a method of one class uses more than one method of another) for other classes in your solution (Tell, Don't Ask)
* No class or interface will expose features to another class that it doesn't use (Interface Segregation)
* No class will invoke methods on solution classes which are not direct collaborators (i.e., fields or parameters) (Law of Demeter)

7. Test Code Quality

* No unit test will make more than one assertion (or mock object equivalent)
* There will be exactly one unit test method per requirements rule, and the name of the test will clearly describe the rule
* All of the unit tests will pass without any external dependencies
* There will be a maximum of 10% integration test code, packaged separately
* The tests will run in < 10 seconds
* Tests will contain < 25% code duplication
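For illustration, here's the kind of unit test shape those last criteria are describing - one test per requirements rule, named after the rule, with a single assertion. It's written with NUnit, and the pricing rule is made up for the example:

using NUnit.Framework;

[TestFixture]
public class TicketPricingTests
{
    [Test]
    public void ChildrenUnder16PayHalfTheAdultPrice()
    {
        var pricing = new TicketPricing(adultPrice: 10.00m);

        var price = pricing.PriceFor(age: 12);

        Assert.That(price, Is.EqualTo(5.00m));
    }
}

public class TicketPricing
{
    private readonly decimal adultPrice;

    public TicketPricing(decimal adultPrice)
    {
        this.adultPrice = adultPrice;
    }

    public decimal PriceFor(int age)
    {
        return age < 16 ? adultPrice / 2 : adultPrice;
    }
}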


Now, even though there's quite a lot of meat on these bones, these criteria may change, of course. But probably not much.

In the trial, I'll be verifying many of them by hand. This will give me a chance to validate them and iron out any conceptual kinks.

The long-term intention is that most - if not all - of these checks will be automated. Initially, I'm working on doing that in C# for the .NET developer community.

The code quality criteria will form half the score for the driving test. To pass it, you'll also need to demonstrate your practices and habits, and explain why you're doing them, so we can evaluate how much insight you have into code craft and the reasons for it. This will be done by recording a 30-minute screencast at some point during the test that we can assess.

More news soon.



July 30, 2017

Learn TDD with Codemanship

Clean# - My Imaginary C# Compiler Extension


Almost a decade ago, I wrote an article for the now-defunct itarchitect.co.uk that predicted the rise of real-time code quality analysis as we type.

What we were waiting for was more number-crunching power, and I think perhaps that day has arrived. So, too, has the technology we'd need to achieve it. Microsoft's Roslyn compiler platform offers the ability to perform real-time ("live") code analysis and report it in much the same way editors of old reported syntax errors.

This raises the prospect of detecting common code smells as they appear, and even fixing some of them automatically.

Consider an example: Long Parameter Lists. As we type our method declaration, we type one parameter, then another parameter, then another, and then... as we type the fourth parameter, a squiggly line appears underneath it. Hovering over that shows a warning: "Too many parameters". And then potential fixes would drop down, like introducing a parameter object, or accessing data through a private getter, if that's possible, or even splitting the method into two methods to be called in succession (assuming the client code is in the same class, otherwise we'd be introducing Feature Envy).
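The parameter object fix, for instance, might turn out something like this (an invented example, not an actual Roslyn code fix):

using System;

// Before:
// public Report Generate(DateTime from, DateTime to, string region, string format) { ... }

// After: the four parameters become one parameter object.
public class ReportRequest
{
    public DateTime From { get; set; }
    public DateTime To { get; set; }
    public string Region { get; set; }
    public string Format { get; set; }
}

public class ReportGenerator
{
    public Report Generate(ReportRequest request)
    {
        // build the report from request.From, request.To, request.Region, request.Format
        return new Report();
    }
}

public class Report { }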

Or how about this? We type a call to a method on an instance of a collaborating class. Then we type a call to a second method on the same object. At which point that whole expression, statement or code block (or entire method, if all calls are to this other object) lights up with a message warning "Feature Envy". We're then offered fixes like extracting the offending code into a separate method and moving it to the other class.

I can see a range of code smells being detected as we type: long methods, too many branches/loops, too many dependencies, large classes, and many more. I can even envision a strict mode, where we not only get warnings about certain code smells, but the code actually won't compile until they've been fixed.

Let's call this imaginary C# compiler extension Clean#, because everything has to have a cool name even when it doesn't exist yet.

And as computing power continues to increase (16-core, 32-core, etc etc), I can see the analysis getting much more sophisticated. As we type, code could be analysed to check if it's reachable from unit tests, for example.

The future is here. Now, where's my hover car?






July 26, 2017

Learn TDD with Codemanship

Gold Standard Teams do Continuous Inspection




Working with so many dev teams across a diverse range of industries gives me a great insight into the common features of teams that consistently do great work.

I call these the Gold Standard development teams. And they write gold standard code.

Gold standard code, to me, is code that maybe only 1% of teams are writing. It's more reliable and more maintainable than 99% of the code I see.

Importantly, Gold Standard dev teams don't usually take any longer to deliver working software than their Silver and Bronze Standard peers. Often, they deliver sooner. And it's no coincidence. The practices that lead to gold standard code also tend to make earlier and more frequent delivery easier.

One common feature I see in Gold Standard dev teams is that they do continuous inspection of their code to ensure that it's of a high quality.

Some teams review code quality once in a blue moon. Some review code in every iteration. Some review code with every check-in.

Gold Standard dev teams review code quality as often as they test their code, every few minutes. This means they pick up code quality "bugs" almost as soon as they're introduced.

Like functional bugs, code quality bugs are much easier to fix the sooner they're caught. Catching issues immediately enables Gold Standard dev teams to produce very high quality code at little to no extra cost. Often, it saves time.

Continuous testing of software requires a high level of test automation. It's not really possible any other way. Continuous code inspection is no different. Rigorously examining the code for a multitude of problems every few minutes is extremely difficult to do by eye alone. Pre-check-in reviews, and even pair programming, are not enough.

Luckily, there's an ever-growing supply of tools we can use to automate our code inspections. Unsurprisingly, demand for the specialist skillset required is growing rapidly, too.

What kind of questions should we ask in our continuous code inspections? To some extent, that will depend on your specific code quality goals. But there are themes that are common to most code:

1. Complexity - How much "stuff" is there in our code? How many lines of code in this function or module? How many branches? How many operators and operands?

Gold Standard dev teams know the price of increasing code entropy, and fight it vigorously. Is this method getting a bit big? Let's split it into a composed method. Is this class doing too much? Let's split it into multiple classes that each have one distinct job.

2. Duplication - When we duplicate code - and concepts - we end up having to maintain the same logic in multiple places, multiplying the cost of making changes to it.

Gold Standard dev teams monitor the code closely for similarities. Are these two blocks of code kind of the same? Let's turn it into one parameterised method. Are these two classes sort of doing the same thing? Maybe extract the common elements into their own class and have both of them reuse that.
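For example (an illustrative sketch):

using System;

public class Payroll
{
    // Before: two blocks that are "kind of the same", differing only in the rate...
    public decimal WeekdayPay(decimal hours)
    {
        var basic = hours * 12.50m;
        var overtime = Math.Max(hours - 40, 0m) * 12.50m * 0.5m;
        return basic + overtime;
    }

    public decimal WeekendPay(decimal hours)
    {
        var basic = hours * 18.75m;
        var overtime = Math.Max(hours - 40, 0m) * 18.75m * 0.5m;
        return basic + overtime;
    }

    // ...after: one parameterised method, and the duplication is gone.
    public decimal Pay(decimal hours, decimal hourlyRate)
    {
        var basic = hours * hourlyRate;
        var overtime = Math.Max(hours - 40, 0m) * hourlyRate * 0.5m;
        return basic + overtime;
    }
}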

3. Dependencies - If we change a line of code in one place, how much of the rest of the code will break? Unmanaged dependencies - couplings - in our code can amplify the "ripple effect" and multiply the cost of even the smallest changes.

Gold Standard teams are continually aware of, and constantly managing, the dependencies in their code. When a class depends too much on other classes, they reshape the code and redistribute the responsibilities to localise the impact of change.

They also make sure that as many of their dependencies as possible are swappable: that is, one function or class or component can be easily swapped with a different implementation with no impact on the calling code.

4. Readability - Code that's difficult to understand is difficult to change without breaking it.

You may be thinking that it's not possible to automate the checking of code readability, and to some extent that's true. But studies have shown a very strong correlation between the complexity of code and how easy it is to read and understand. For example, code that has a lower Maintainability Index tends to score lower for readability, too.

As well as monitoring code for complexity, Gold Standard dev teams can also monitor the semantic content of their code using an increasingly sophisticated array of Natural Language Processing tools.

Simple applications of well-established readability metrics, like Flesch Reading Ease, can also help draw our attention to difficult code.

And even something as simple as calculating the conceptual correlation between names in our code and words used by our customers to describe the problem domain can help highlight code that may not be "speaking the customer's language".

5. Test assurance - If we made a change that broke the code, how soon would we know? How easy would it be to pinpoint the problem?

This is a crucial factor in the cost of changing code, and one that's usually overlooked in discussions about maintainability. Bugs cost exponentially more to fix the later they're discovered. The goal of automated regression tests is to discover them sooner, which saves us a lot of time.

Gold Standard teams continuously monitor how good their automated tests are at catching regressions. The way they do this is to deliberately introduce errors into their code, and see if any of the tests catch them. This is called mutation testing, and there are tools available for many programming languages that will do it automatically, alerting devs to any gaps in their automated test suites and giving them increased confidence when making changes.

Another key component of test assurance is how often we can run our automated tests. The longer they take to run, the less often we can run them and the longer we have to wait for feedback. Gold Standard teams monitor test execution time and optimise their test suites to run as fast as they need.

And finally, when a test does fail, Gold Standard teams are able to pinpoint the problem and fix it sooner, because their tests typically ask fewer questions. A Gold Standard automated test suite might cover 100,000 lines of production code with 10-20,000 tests, each having only one reason to fail. LOC/test is a key metric for Gold Standard teams, and they rarely let it go beyond 10 LOC/test.



PS. Shameless plug: If you'd like to learn more about Continuous Inspection, I run a bite-sized training workshop on code craft metrics.




July 22, 2017

Learn TDD with Codemanship

Code Analysis for Dependency Inversion


As work continues on the next book and training course, I'm thinking about how we could analyse our code for adherence to the Dependency Inversion Principle (the "D" in S.O.L.I.D.)

The DIP states that "High-level modules should not depend upon low-level modules. Both should depend upon abstractions. Abstractions should not depend upon details, details should depend upon abstractions."

This is a roundabout way of saying dependencies should be swappable. The means by which we make them swappable is dependency injection (often confused with Dependency Inversion, though the two are very closely related).

Dependency injection is simply passing an object's collaborators in (e.g., through a constructor) instead of that object instantiating them itself. When we directly instantiate an object, we bind ourselves to its exact type. This makes it impossible to swap that collaborator with a different implementation without modifying the client code, making our design inflexible and difficult to adapt or extend.
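A minimal sketch of the difference, with invented names:

// Without dependency injection: the client news up its collaborator, binding
// itself to one concrete type.
public class OrderProcessor
{
    public void Process(Order order)
    {
        var gateway = new PaymentGateway();   // hard-wired dependency
        gateway.Charge(order.Total);
    }
}

// With dependency injection: the collaborator is passed in through the
// constructor, as an abstraction, so it can be swapped without touching this class.
public class SwappableOrderProcessor
{
    private readonly IPaymentGateway gateway;

    public SwappableOrderProcessor(IPaymentGateway gateway)
    {
        this.gateway = gateway;
    }

    public void Process(Order order)
    {
        gateway.Charge(order.Total);
    }
}

public class Order { public decimal Total { get; set; } }

public interface IPaymentGateway { void Charge(decimal amount); }

public class PaymentGateway : IPaymentGateway
{
    public void Charge(decimal amount) { /* call the real payment service */ }
}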

In practice, what this means is that most of our objects are composed from the outside.



For example, in my Reading Ease calculator, the Program class - the entry point for this console app - creates all of the objects involved in doing the calculation and "plugs" them together via constructors.

I've used the analogy of Russian dolls to describe how we compose simpler collaborations into more complex collaborations (collaborations within collaborations). This means that the lowest-level objects in the call stack typically get created first.
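To make the Russian dolls idea concrete, here's a sketch of the shape (not the actual Reading Ease calculator code - the names are invented):

using System;

public interface ISyllableCounter { int CountIn(string word); }

public class SyllableCounter : ISyllableCounter
{
    public int CountIn(string word) { return 1; }  // placeholder
}

public class ReadingEaseCalculator
{
    private readonly ISyllableCounter syllables;

    public ReadingEaseCalculator(ISyllableCounter syllables)
    {
        this.syllables = syllables;
    }

    public double Calculate(string text) { return 206.835; }  // placeholder
}

public class Program
{
    public static void Main(string[] args)
    {
        // the innermost "doll" is created first...
        var syllables = new SyllableCounter();
        // ...and passed into the object that wraps it
        var calculator = new ReadingEaseCalculator(syllables);

        Console.WriteLine(calculator.Calculate(args[0]));
    }
}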

Inside those lower-level classes, there's no direct instantiation of collaborators.



So, when we analyse the dependencies, we should find that classes that have clients in our code - classes that are further down the call stack - don't directly instantiate their collaborators.

More simply, if things depend on you, then don't use new.

There are, of course, exceptions. Factories and Builders are designed to instantiate and hide the details. Integration code - e.g., opening database connections - is also designed to hide details. We can't very well pass our database connections into those, or we'd be spreading that knowledge. Typically what we're talking about here is dependencies on our own classes. And what a kerfuffle it would be to try to apply DIP to strings and ints and collections and other core library types all the time. Though, again, there are situations where that may be called for.

If I was measuring adherence to the Dependency Inversion Principle, then, I'd look at a class and ask "Do any other of my classes depend on this?" If the answer is "yes", then I'd check to see if it creates instances of any other of my classes. I might also check - and this would be language-dependent - if those dependencies are on abstract types (abstract classes, interfaces).
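As a rough sketch of how that check might be automated - using Mono.Cecil here purely to illustrate the logic, rather than the FxCop machinery the driving test rules use:

using System;
using System.Linq;
using Mono.Cecil;
using Mono.Cecil.Cil;

public static class DipCheck
{
    public static void Check(string assemblyPath)
    {
        var module = ModuleDefinition.ReadModule(assemblyPath);
        var projectTypes = module.Types.Where(t => t.IsClass).ToList();

        foreach (var type in projectTypes)
        {
            // "Do any other of my classes depend on this?" - crudely: does any
            // other project type mention it in a field or a parameter?
            var hasClients = projectTypes.Any(other => other != type &&
                (other.Fields.Any(f => f.FieldType.FullName == type.FullName) ||
                 other.Methods.SelectMany(m => m.Parameters)
                              .Any(p => p.ParameterType.FullName == type.FullName)));

            if (!hasClients) continue;

            // If it has clients, does it create instances of other project types?
            var newedUp = type.Methods
                .Where(m => m.HasBody)
                .SelectMany(m => m.Body.Instructions)
                .Where(i => i.OpCode == OpCodes.Newobj)
                .Select(i => ((MethodReference)i.Operand).DeclaringType)
                .Where(created => projectTypes.Any(p => p.FullName == created.FullName));

            foreach (var created in newedUp)
                Console.WriteLine("{0} has clients but instantiates {1}", type.Name, created.Name);
        }
    }
}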


July 21, 2017

Learn TDD with Codemanship

Calculating Reading Ease for Code

More deliberating this week about program readability.

This week I've thought a lot about how to make code easy to read.

According to a popular measure of reading ease, one of these sentences is easier to read.

The Flesch Reading Ease Score (FRES) was developed by author and Plain English advocate Rudolf Flesch in 1948.

The formula uses the average length of sentences in the text and the average number of syllables in the words to estimate the required educational level (in US education, the "grade level") of the audience.


FRES = 206.835 - (1.015 x Average Words Per Sentence) - (84.6 x Average Syllables Per Word)


This produces a number typically between 0 and 100, with higher values meaning better reading ease. The text of an advert aimed at a wide consumer audience might have a FRES of about 80-89 (meaning it could be understood by your average 8th-9th grader). The text of a peer-reviewed scientific paper might have a FRES less than 20, meaning that only college graduates might be able to understand it.

As you can see from the formula, the greatest emphasis is placed on the number of syllables in words. So, although "More deliberating this week about program readability" is a shorter sentence than "This week I've thought a lot about how to make code easy to read", its reading ease score is much lower.

This week I've thought a lot about how to make code easy to read.

- words per sentence = 14, syllables per word = 1.14

=> FRES = 93.9

Very easy to read


More deliberating this week about program readability

- words per sentence = 7, syllables per word = 2.43

=> FRES = - 10.7

Very difficult to read


I've been wondering for some time if the FRES formula could be applied to code. For sure, in code we have words. And those words have syllables. It would be straightforward to find all the words in a block of code or a source file and count them.

A very simple regex can find all the alphanumeric strings in a block of code, and I'd already figured out how to parse identifiers written in PascalCase and camelCase. (Sorry, underscores lovers. I'll add that soon.)

Counting syllables is less straightforward, but if you're prepared to accept a compromise, it's only 3 lines of code.

First we count the incidences of a, e, i, o, u and y in the word. e.g., Orbit has 2, and an estimate of 2 syllables would be right.

But what about diphthongs, where two vowels appear consecutively (e.g., boot, meet)? So, we eliminate the diphthongs from our count.

And then there are words that end with "e", or "es" or "ed": like "more", "shoves" and "bowed". We eliminate those vowels from our count.

It's not a perfect system, but it's way simpler - and more performant - than more accurate alternatives that require Natural Language Processing. And it's good enough for my purposes.
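In C#, those three steps might come out something like this (a sketch, not necessarily the exact code in the tool):

using System;
using System.Text.RegularExpressions;

public static class Syllables
{
    public static int CountIn(string word)
    {
        var w = word.ToLowerInvariant();
        // counting *groups* of vowels counts "boot" and "meet" once each,
        // which takes care of the diphthongs
        var count = Regex.Matches(w, "[aeiouy]+").Count;
        // words ending in "e", "es" or "ed" ("more", "shoves", "bowed")
        // shouldn't get a syllable for that final vowel
        if (Regex.IsMatch(w, "e[sd]?$") && count > 1) count--;
        return Math.Max(count, 1);
    }
}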

So, we know how many words there are, and we know (roughly) how many syllables per word. But what's the code equivalent of sentences? In my implementation, I started by interpreting statements as sentences. But in OO code especially, we can write a lot of code that contains few actual executable statements. In my first pass at a prototype tool, I counted how often ";" appeared in the code. Yes, this would only work in C-like languages...

In my second pass, I interpret sentences as lines of code that contain actual text. The results I got from this were a bit more convincing, and also this had the distinct advantage of working on pretty much all programming languages.
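Putting it together, the second pass works out roughly like this (again a sketch rather than the tool's actual code; it reuses the syllable counter sketched above):

using System;
using System.Linq;
using System.Text.RegularExpressions;

public static class CodeReadingEase
{
    public static double Calculate(string code)
    {
        // "sentences" are the lines that contain actual text
        var sentences = code.Split('\n').Count(line => line.Trim().Length > 0);

        // words are the alphabetic runs, with PascalCase/camelCase names split up
        var words = Regex.Matches(code, "[A-Za-z]+").Cast<Match>()
            .SelectMany(m => Regex.Split(m.Value, "(?<=[a-z])(?=[A-Z])"))
            .Where(w => w.Length > 0)
            .ToList();

        var wordsPerSentence = (double)words.Count / sentences;
        var syllablesPerWord = words.Average(w => Syllables.CountIn(w));

        return 206.835 - (1.015 * wordsPerSentence) - (84.6 * syllablesPerWord);
    }
}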

And so, it's done: a rough and ready .NET console app*, which you can download from http://www.codemanship.co.uk/files/Codemanship.Readability.zip (requires .NET 4.5 framework).

It's very simple: it takes only one command-line argument, which is the text you want it to calculate the FRES for.

e.g.,

> Codemanship.Readability.exe "public int foo() { return 100 * 100; }"


I've rigged mine up as an external tool in Visual Studio that processes any selected text.



Just select the text you want it to analyse, and then run the tool.



Easy as peas.

You can find the source code at https://github.com/jasongorman/Readability



* Provided with no warranty or support