June 20, 2017

Learn TDD with Codemanship

Are.Fluent(Assertions).Really.Easier.To(Understand)?

I'm currently updating a slide deck for an NUnit workshop I run (all the spiffy new library versions, because I'm down with the yoof), and got to the slide on fluent assertions.

The point I make with this slide is that - according to the popular wisdom - fluent assertions are easier to understand because they're more expressive than classic NUnit assertions.

So, I took a bunch of examples of classic assertions from a previous slide and redid them as fluent assertions, and ended up with this.



Compared side by side like this, my claim that fluent assertions are easier to understand suddenly looks shaky. Are they? Are they really?

A client of mine, some time back now, ran a little internal experiment on this with fluent assertions written with Hamcrest and JUnit. They rigged up a dozen or so assertions in fluent and classic styles, and timed developers while they decided if those tests would pass or fail. It was noted - with some surprise - that people seemed to grok the classic assertions faster.
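To give a flavour of what they were comparing, here's a made-up example of the kind of pairing they timed, in JUnit and Hamcrest (the account code is invented and hard-coded so the snippet runs):

```java
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.closeTo;
import static org.hamcrest.Matchers.hasItem;
import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class AssertionStylesTest {

    @Test
    public void classicStyle() {
        // Classic style: the method name says what's being checked,
        // and the arguments are (expected, actual).
        assertEquals(150.0, balanceAfterDeposit(), 0.001);
        assertTrue(recentPayees().contains("Bob"));
    }

    @Test
    public void fluentStyle() {
        // Fluent style: the assertion reads left to right as a sentence.
        assertThat(balanceAfterDeposit(), is(closeTo(150.0, 0.001)));
        assertThat(recentPayees(), hasItem("Bob"));
    }

    // Hypothetical system under test, hard-coded so the example runs.
    private double balanceAfterDeposit() {
        return 150.0;
    }

    private List<String> recentPayees() {
        return Arrays.asList("Alice", "Bob");
    }
}
```

Reading them cold, is the second test really quicker to grok than the first?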

What do you think?


June 10, 2017

Learn TDD with Codemanship

Could You Be A Mentor To An Aspiring Software Developer?

I've been beavering away these last few weeks putting together the basis for an initiative that will enable experienced software developers to mentor new programmers looking to become developers one day.

It'll take the form of a Software Developers' Guild - a sort of clearing house that helps talented new programmers find old hands who can provide "light-touch" guidance over the long-term (4-6 years).

I see it working along similar lines to what I've been doing with my "apprentice" Will Price (who's just finished his final exams for his CS degree, and has turned out pretty spiffy as a developer, too). I've been pairing with Will regularly for a couple of hours every fortnight or so, working on the skills formal education tends to leave out (using version control, test automation, TDD, refactoring, design principles and other practical aspects of code craft).

I've also been nudging him towards certain sources of information: books, blogs, conferences, and so forth, and generally giving him a steer on what he would find most useful to know as a software developer.

Reflecting on how it's gone, both Will and I feel it's been of immense value - and not just for Will. Mentoring someone new to this field has spurred me to learn new things, too (like Python, for example) and reinvigorated my enthusiasm for learning. So, after twelvety-stupid years as a developer, I feel renewed. And looking forward to doing it again.

The industry is also up on the deal by one potentially great developer.

My thinking of late has been that this could be a workable route to avoiding the Groundhog Day that our profession seems stuck in, where new developers have to go through the same long process of rediscovery, with all the false leads and dead ends I wasted years on.

And so, this year, I tentatively begin the process of trying to scale this approach up. You can find out a bit more by visiting the Software Developers' Guild holding page. And, maybe, you'd be interested in becoming a mentor?

I'm looking for experienced developers who've "been around the block at least twice" (call it my Rule of Two), and who'd be willing and able to provide a similar kind of light-touch guidance to someone at university, or from a code club, or returning to work after raising children or caring for a relative, or retraining for a career change, etc.

Could that be you?




June 5, 2017

Learn TDD with Codemanship

The Codemanship TDD "Driving Test" - Initial Update

A question that gets asked increasingly frequently by folk who've been on a Codemanship TDD workshop is "Do we get a certificate?"

Now, I'm not a great believer in certification, especially when the certificates are essentially just for turning up. For example, a certificate that says you're an "agile developer", based on sitting an exam at the end of a 2-3 day training course, really doesn't say anything meaningful about your actual abilities.

Having said all that, I have pioneered programs in the past that did seem to be decent indicators of TDD skills and habits. First of all, to know if a juggler can juggle, we've got to see them juggle.

A TDD exam is meaningless in most respects, except perhaps to show that someone understands why they're doing what they're doing. Someone may be in the habit of writing tests that only ask one question, but I see developers doing things all the time that they "read in a book" or "saw their team doing" and all they're really doing is parroting it.

Conversely, someone may understand that tests should ideally have only one reason to fail so that when they do fail, it's much easier to pinpoint the cause of the problem, but never put that into practice. I also see a lot of developers who can talk the talk but don't walk the walk.

So, the top item on my TDD certification wish-list would be that it has to demonstrate both practical ability and insight.

In this respect, the best analogy I can think of is a driving test; learner drivers have to demonstrate a practical grasp of the mechanics of safe driving as well as a theoretical grasp of motoring and the Highway Code. In a TDD "driving test", people would need to succeed at both a practical and a theoretical component.

The practical element would need to be challenging enough - but not too challenging - to get a real feel for whether they're good enough at TDD to scale it to non-trivial problems. FizzBuzz just won't cut it, in my experience. (Although you can weed out those who obviously can't even do the basics in a few minutes.)

The Team Dojo I created for the Software Craftsmanship conference seems like a viable candidate. Except it would be tackled by you alone (which you may actually find easier!). In the original dojo, developers had to tackle requirements for a fictional social network for programmers. There were a handful of user stories, accompanied by some acceptance tests that the solution had to pass to score points.

In a TDD driving test, I might ask developers to tackle a similar scale of problem (roughly 4-8 hours for an individual to complete). There would be some automated acceptance tests that your solution would need to pass before you can complete the driving test.

Once you've committed your finished solution, a much more exhaustive suite of tests would then be run against it (you'd be asked to implement a specific API to enable this). I'm currently pondering and consulting on how many bugs I might allow. My instinct is to say that if any of these tests fail, you've failed your TDD driving test. A solution of maybe 1,000 lines of code should have no bugs in it if the goal is to achieve a defect density of < 0.1/KLOC. I am, of course, from the "code should be of high integrity" school of development. We'll see how that pans out after I trial the driving test.

So, we have two bars that your solution would have to clear so far: acceptance tests, and exhaustive testing.

Provided you successfully jump those hurdles, your code would then be inspected or analysed for key aspects of maintainability: readability, simplicity, and lack of duplication. (The other 3 goals of Simple Design, basically.)

As an indicator, I'd also measure your code coverage (probably using mutation testing). If you really did TDD it rigorously, I'd expect the level of test assurance to be very high. Again, a trial will help set a realistic quality bar for this, but I'm guessing it will be about 90%, depending on which mutation testing tool I use and which mutations are switched on/off.
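To make the mutation testing idea concrete, here's an illustrative sketch (the discount example is invented; PIT is the kind of Java tool I have in mind):

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class DiscountTest {

    // Hypothetical production code: orders of 100.0 or more qualify.
    static boolean qualifiesForDiscount(double orderTotal) {
        return orderTotal >= 100.0;
    }

    // A mutation testing tool silently rewrites ">=" to ">" and
    // re-runs the suite. This test alone would let that mutant
    // survive, because 150.0 qualifies either way.
    @Test
    public void bigOrderQualifies() {
        assertTrue(qualifiesForDiscount(150.0));
    }

    // This boundary case kills the mutant: the mutated code returns
    // false for exactly 100.0, and the test fails as it should. The
    // mutation score measures how many mutants your tests kill.
    @Test
    public void boundaryCases() {
        assertTrue(qualifiesForDiscount(100.0));
        assertFalse(qualifiesForDiscount(99.99));
    }
}
```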

Finally, I'd be interested in the "testability" of your design. That's usually a euphemism for whether or not dependencies between your modules are easily swappable (by dependency injection). The problem would also be designed to require the use of some test doubles, and I'd check that they were used appropriately.
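As a sketch of the kind of swappability I mean - hypothetical names, not the actual test problem - consider:

```java
// The collaborator is expressed as an interface the rest of the
// code owns, so a test can swap in a double for the real thing.
interface PaymentGateway {
    boolean charge(String cardNumber, double amount);
}

class Checkout {
    private final PaymentGateway gateway;

    // The dependency is injected through the constructor - this is
    // the swappability a "testability" check would be looking for.
    Checkout(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    boolean settle(String cardNumber, double amount) {
        return gateway.charge(cardNumber, amount);
    }
}

// In a test, a hand-rolled stub stands in for the real gateway -
// no network, no card processor, just a canned answer.
class StubPaymentGateway implements PaymentGateway {
    public boolean charge(String cardNumber, double amount) {
        return true;
    }
}
```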

So, you'd have to pass the acceptance tests to complete the test. Then your solution would be exhaustively tested to see if any bugs slipped through. If no bugs are found, the code will be inspected for basic cleanliness. I may also check the execution time of the tests and set an upper limit for that.

First and foremost, TDD is about getting shit done - and getting it done right. Any certification that doesn't test this is not worth the paper it's printed on.

And last, but not least, someone - initially me, probably - will pair with you remotely for half an hour at some random time during the test to:

1. Confirm that it really is you who's doing it, and...

2. See if you apply good TDD habits, of which you'd have been given a list well in advance to help you practice. If you've been on a Codemanship TDD course, or seen lists of "good TDD habits" in conference talks and blog posts (most of which originated from Codemanship, BTW), then you'll already know what many of these habits are.

During that half hour of pairing, your insights into TDD will also be randomly tested. Do you understand why you're running the test to see it fail first? Do you know the difference between a mock, a stub and a dummy?
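For the record, here's a hand-rolled illustration of that distinction (all the names are invented for the example):

```java
// A query the object under test asks - the kind of thing a stub fakes.
interface CustomerDirectory {
    String emailOf(String customerId);
}

// A side-effect the object under test triggers - what a mock verifies.
interface Mailer {
    void send(String to, String body);
}

// Stub: returns canned answers so the test can control its inputs.
class StubDirectory implements CustomerDirectory {
    public String emailOf(String customerId) {
        return "test@example.com";
    }
}

// Mock: records the interaction so the test can verify it happened.
class MockMailer implements Mailer {
    String lastRecipient;

    public void send(String to, String body) {
        lastRecipient = to;
    }

    void verifySentTo(String expected) {
        if (!expected.equals(lastRecipient)) {
            throw new AssertionError("expected mail to " + expected);
        }
    }
}

// Dummy: passed only to satisfy a parameter list; never actually used.
class DummyMailer implements Mailer {
    public void send(String to, String body) {
        throw new AssertionError("a dummy should never be called");
    }
}
```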

Naturally, people will complain that "this isn't how we do TDD", and that's fair comment. But you could argue the same thing in a real driving test: "that's not how I'm gonna drive."

The Codemanship TDD driving test would be aimed at people who've been on a Codemanship TDD workshop in the last 8 years and have learned to do TDD the Codemanship way. It would demonstrate not only that you attended the workshop, but that you understood it, and then went away and practiced until you could apply the ideas on something resembling a real-world problem.

Based on experience, I'd expect developers to need 4-6 months of regular practice at TDD after a training workshop before they'd be ready to take the driving test.

Still much thinking and work to be done. Will keep you posted.




May 30, 2017

Learn TDD with Codemanship

Do You Write Automated Tests When You Spike?

So, I've been running this little poll on Twitter asking devs if they write automated tests when they're knocking up a prototype (or a "spike", as Extreme Programmers call it).




The responses so far have been interesting, if not entirely unexpected. About two thirds of respondents say they rarely or never write automated tests for a spike.

Behind this is the ongoing debate about the limits of usefulness of such tests (and of TDD, if we take that a step further). Some devs believe that when a problem is small, or when they expect to throw away the code afterwards, automated tests add no value and just slow us down.

My own experience has been a slow but sure transition from not bothering with unit tests for spikes 15 years ago, to almost always writing some unit tests even on small experiments. Why? Because I've found - and I've measured myself doing it, so it's not just a feeling - I get my spike done faster when I have a bit of test scaffolding holding it up.

For sure, I'm not as rigorous about it as when I'm working on production code. The tests tend to be at a higher level, and there are fewer of them. I may break a few of my own TDD rules and have tests that ask more than one question, or I may not refactor the test code quite as fastidiously. But the tests are there, nevertheless. And I'm usually really grateful that I wrote some, as the experiment grows and maybe takes some unexpected twists and turns.

And if - as can happen - the experiment becomes part of the production code, I'm confident that what I've produced is just about good enough to be released and maintained. I'm not in the business of producing legacy code... not even by accident.

An example of one of my spikes, for a utility that combines arrays of test data for use with parameterised tests, gives you an idea of the level of discipline I might usually apply. Not quite production quality, but not that far off.
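I can't reproduce the whole spike here, but a simplified sketch of the idea - not the actual utility - might look something like this:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TestDataCombiner {

    // Combines two arrays of test values into every possible pairing,
    // in the shape a parameterised test runner consumes: one Object[]
    // row per test case.
    public static List<Object[]> combine(Object[] first, Object[] second) {
        List<Object[]> rows = new ArrayList<>();
        for (Object a : first) {
            for (Object b : second) {
                rows.add(new Object[] { a, b });
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        // 3 amounts x 2 currencies = 6 parameterised test cases.
        List<Object[]> cases = combine(
                new Object[] { 0, 100, -50 },
                new Object[] { "GBP", "USD" });
        for (Object[] row : cases) {
            System.out.println(Arrays.toString(row));
        }
    }
}
```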

The spike took - in total - maybe a couple of days, and I was really grateful for the tests by the second day. In timed experiments, I've seen myself tackle much smaller problems faster when I wrote automated tests for them as I went along. Which is why, for me, that seems to be the way to go. I get done sooner, with something that could potentially be released. It leaves the door open.

Other developers may find that they get done sooner without writing automated tests. With TDD, I'm very much in my comfort zone. They may be outside it. In those instances, they probably need to be especially disciplined about throwing that code away to remove the temptation of releasing unreliable, unmaintainable code.

They could rehabilitate it, writing tests after the fact and refactoring the code to give it a production sparkle. Some people refer to this process as "spike & stabilise". But, to me, it does rather sound like "code and fix". Because, technically, that's exactly what it is. And experience - not just mine, but a mountain of hard data going back decades - strongly suggests that code and fix is the slow route to delivery.

So I'm a little skeptical, to say the least.




May 29, 2017

Learn TDD with Codemanship

Software Craftsmanship 2017

It's early days, but planning is underway for this year's Software Craftsmanship conference.



Since it returned last year, SC20xx has evolved from a conference where talks were banned, to a conference with no fixed sessions at all. We stripped it down to the bare essentials of what folk said made previous SC conferences fun and worthwhile. Basically, we want a chance to meet likeminded code crafters, socialise, exchange ideas, and - most of all - code.

Instead of scheduled sessions, we'll have the space and the resources needed to tackle interesting and challenging programming projects. Work in pairs, work in groups, work by yourself (although, really, why come all that way to work alone?) Build a bot. Write a tool. Start a tech business. Put your Clean Code skills to the test. It's all good.

SC2017 will be on Saturday, Sept 16th. Last year we had events hosted in London, Manchester, Bristol, Munich and Atlanta, GA, and this year we're hoping for even more hosted events all around the world. It doesn't require much organisation, so drop me a line if you're interested in running one where you are.

Unofficially, the London event is already open for registration (there's a small fee to cover costs - SC2017 is a non-profit event), and more details will be posted soon.

Follow @softcraftconf for updates.





May 22, 2017

Learn TDD with Codemanship

20 Dev Metrics - 19. Progress

Some folk have - quite rightly - asked "Why bother with a series on metrics?" Hopefully, I've vindicated myself with a few metrics you haven't seen before. And number 19 in the series of 20 Dev Metrics is something that I have only ever seen used on teams I've led.

When I reveal this metric, you'll roll your eyes and say "Well, duh!" and then go back to your daily routine and forget all about it, just like every other developer always has. Which is ironic, because - out of all the things we could possibly measure - it's indisputably the most important.



The one thing that dev teams don't measure is actual progress towards a customer goal. The Agile manifesto claimed that working software is the primary measure of progress. This is incorrect. The real measure of progress is vaguely alluded to with the word "value". We deliver "value" to customers, and that has somehow become confused with working software.

Agile consultants talk of the "flow of value", when what they really mean is the flow of working software. But let's not confuse buying lottery tickets with winning jackpots. What has value is not the software itself, but what can be achieved using the software. All good software development starts there.

If an app to monitor blood pressure doesn't help patients to lower their blood pressure, then what's the point? If a website that matches singles doesn't help people to find love, then why bother? If a credit scoring algorithm doesn't reduce financial risk, it's pointless.

At the heart of IT's biggest problems lies this failure of almost all development teams to address customers' end goals. We ask the customer "What software would you like us to build?", and that's the wrong question. We effectively make them responsible for designing a solution to their problem, and then - at best - we deliver those features to order. (Although, let's face it, most teams don't even do that.)

At the foundations of Agile Software Development, there's this idea of iterating rapidly towards a goal. Going back as far as the mid-1970s, with the germ of Rapid Development, and the late '80s, with Tom Gilb's ideas of an evolutionary approach to software design driven by testable goals, the message was always there. But it got lost under a pile of daily stand-ups and burndown charts and weekly show-and-tells.

So, number 19 in my series is simply Progress. Find out what it is your customer is trying to achieve. Figure out some way of regularly testing to what extent you've achieved it. And iterate directly towards each goal. Ditch the backlog, and stop measuring progress by tasks completed or features delivered. It's meaningless.

Unless, of course, you want the value of what you create to be measured by the yard.




May 19, 2017

Learn TDD with Codemanship

A Clean Code Language?

Triggered by a tweet about the designers of Python boasting that functions can now accept 255 parameters - I mean, why, really? - my magpie mind has been buzzing with the notion of how language designs (and compilers) could enforce clean code.



For example, what if the maximum number of parameters was just three? Need more data for your method than that? Maybe a parameter object is required. Or maybe that method does too much, and that's why it has so many parameters. You would have to fix the underlying problem.
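A sketch of what the compiler would be nudging you towards (an invented booking example):

```java
import java.time.LocalDate;

// Before: void book(String name, String email, LocalDate checkIn,
//                   LocalDate checkOut, int guests) - five loose
// arguments that always travel together.

// After: the data becomes a parameter object with a name of its own.
class BookingRequest {
    final String guestName;
    final String guestEmail;
    final LocalDate checkIn;
    final LocalDate checkOut;
    final int guests;

    BookingRequest(String guestName, String guestEmail,
                   LocalDate checkIn, LocalDate checkOut, int guests) {
        this.guestName = guestName;
        this.guestEmail = guestEmail;
        this.checkIn = checkIn;
        this.checkOut = checkOut;
        this.guests = guests;
    }
}

class HotelDesk {
    // One coherent argument instead of five.
    void book(BookingRequest request) {
        // ...
    }
}
```

Though note the value object's own constructor still takes five arguments - a hint at how carefully such a compiler rule would need to be worded.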

And what if the maximum number of branches or loops in a method was one? Need another branch? You'd have to create another method for it and compose your conditionals that way. Or maybe replace conditionals with a polymorphic solution.
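That second rule is essentially the compiler mandating Replace Conditional With Polymorphism (again, an invented example):

```java
// Before: one method, multiple branches.
// double postage(Order order) {
//     if (order.isDigital()) return 0.0;
//     if (order.isOverseas()) return 10.0;
//     return 2.50;
// }

// After: each branch becomes a polymorphic variant, chosen once
// when the order is created rather than re-tested on every call.
interface Postage {
    double cost();
}

class DigitalPostage implements Postage {
    public double cost() { return 0.0; }
}

class OverseasPostage implements Postage {
    public double cost() { return 10.0; }
}

class DomesticPostage implements Postage {
    public double cost() { return 2.50; }
}
```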

And what if objects that aren't at the top of the call stack weren't allowed to instantiate other objects? You'd have to pass its collaborators in through a constructor or other method.

And what if untested code caused a compile error?

And so on. Hopefully, you get the picture. Now, I'm just thinking out loud, but I think this could be at the very least a valuable thought experiment. By articulating design rules in ways that a compiler (or pre-compiler) might be able to enforce, I'm clarifying in my own mind what those rules really are.

What rules would your Clean Code language have?




Learn TDD with Codemanship

20 Dev Metrics - 18. External Dependencies

18th in my series 20 Dev Metrics is External Dependencies.



If our code relies too much on other people's APIs, we can end up wasting a lot of time fixing things that are broken when the contracts change. (Anyone who's written code that consumes the Facebook API will probably know exactly what I mean.)

In an ideal world, APIs would remain backwards-compatible. But in the real world, where 3rd-party developers aren't as disciplined as we are, they change all the time. So our code has to keep changing to continue to work.

I would argue that, with the way our tools have evolved, it's too easy these days to add external dependencies to our software.

It helps to be aware of the burden we're creating as we suck in each new library or web service, lest we fall prey to the error of buying the whole Mercedes just for the cigarette lighter.

The simplest metric is just to count the number of dependencies. The more there are, the more unstable our code will become.

It's also worth knowing how much of our code has direct dependencies on external APIs. Maybe we only depend on JDBC, but if 50% of our code directly references JDBC interfaces, we still have a problem.

You should aim to have as little of your code as possible directly depend on 3rd-party APIs, and to rely on as few different APIs as you can while still building the software you need.
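Sticking with the JDBC example, the usual remedy is to hide the third-party API behind an interface your own code owns, so that only one module ever touches it directly (the customer store here is invented for illustration):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.sql.DataSource;

// An interface our own code owns; the rest of the codebase depends
// on this, never on JDBC directly.
interface CustomerStore {
    Customer findById(String id);
}

class Customer {
    final String id;
    final String name;

    Customer(String id, String name) {
        this.id = id;
        this.name = name;
    }
}

// The only class that touches the third-party API. When the API or
// driver changes, the blast radius is this one module.
class JdbcCustomerStore implements CustomerStore {
    private final DataSource dataSource;

    JdbcCustomerStore(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public Customer findById(String id) {
        String sql = "SELECT name FROM customers WHERE id = ?";
        try (Connection conn = dataSource.getConnection();
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, id);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? new Customer(id, rs.getString("name")) : null;
            }
        } catch (SQLException e) {
            throw new IllegalStateException("customer lookup failed", e);
        }
    }
}
```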


(And, yes, I'm including GUI frameworks, etc., in my definition of "external dependencies".)



May 18, 2017

Learn TDD with Codemanship

20 Dev Metrics - 17. Test Execution Time

The 17th in my 20 Dev Metrics series can have a profound effect on our ability to sustain the pace of development - Test Execution Time.



When it takes too long to get feedback from tests, we have to test less often, which means more changes to the code in between test runs. The economics of defect removal are stark: the longer a problem goes undetected, the more expensive it is to fix - exponentially so. If we break the code and discover it minutes later, then fixing the problem is quick and easy. If we break the code and discover it hours later, that cost goes up. Days later and we're into code-and-fix territory.

So it's in our interest to make the tests run as fast as possible. Teams who strive for a testing pyramid - where the base of the pyramid, the bulk of the tests, is made up of fast-running unit tests - can usually get good test feedback in minutes or even seconds. Teams whose testing pyramid is upside-down, with the bulk of their tests being slow-running system or integration tests, tend to find test execution a barrier to progress.

Teams should be putting continual effort into performance engineering their test suites as they grow from dozens to hundreds to thousands of tests. Be aware of how long test execution takes, and when it's too long, optimise the test architecture or execution environment. My 101 TDD Tips e-book contains a tip about optimising test performance that you might find useful.
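One common optimisation - sketched here with JUnit 4's Categories, using invented test names - is to tag the slow tests so that the suite you run on every change excludes them:

```java
import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.ExcludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

// Marker for tests that hit a database, network, file system, etc.
// (In practice, each of these types would live in its own file.)
interface SlowTests {}

public class PricingTest {

    @Test
    public void discountApplied() {
        // fast, in-memory unit test
    }

    @Category(SlowTests.class)
    @Test
    public void priceListLoadsFromDatabase() {
        // slow integration test
    }
}

// The suite developers run on every change: everything except the
// slow tests, keeping feedback down to seconds. The full suite,
// slow tests included, still runs on the build server.
@RunWith(Categories.class)
@ExcludeCategory(SlowTests.class)
@SuiteClasses(PricingTest.class)
class CommitStageSuite {}
```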

Basically, the more often you want to run a test suite, the faster it needs to run. Simples.


May 17, 2017

Learn TDD with Codemanship

20 Dev Metrics - 16. Dev Pay Market Percentile

The 16th metric in my series 20 Dev Metrics is a simple but powerful one, which can determine how easy (or hard) it could be for you to hire and retain the developers you need.

As someone who gets asked to help clients find developers, I can attest that Dev Pay Market Percentile is an accurate predictor of how long you'll have to search to find the right person.

Online sources of advertised salaries and contract rates, like itjobswatch.co.uk, can show you how much your competitors are offering for specific skills like Java and TDD, in specific locations and specific industries (e.g., London, banking).



You'd be amazed how many employers scratch their heads wondering why they can't find the ace-whizzo developer they need for their dev team, whilst offering an average (or below average) salary.

Want good developers? Aim for the upper quartile on pay. Want great developers who'll stay? Aim for the upper tenth percentile.

I've lost count of the times the "skill shortage" mysteriously disappeared when the employer upped the offer. And I've also lost count of the times that demand for a skill quietly ramped up, leaving employers wondering why all of their long-serving devs suddenly upped and left.

And, of course, if you are a developer, it's worth knowing where you sit in the pay range. If your bosses are poor payers, they may be relying on you not knowing.