July 18, 2017

Learn TDD with Codemanship

Why Don't We Eat Our Own Dog Food?

You may already know I'm a big advocate of us developers eating our own dog food.

Using the software we create as real end users, in the real world, provides insights that no amount of meetings or documentation can provide.

A quick straw poll I ran on Twitter suggests 2 out of 3 of us haven't actually used the software we're working on for real.



I've been banging this drum for years, for a largely reluctant audience. Managers say "Our users aren't happy with the latest release" and ask me what can be done. And I say "Walk a mile in your customer's shoes". And they say "Thanks for that. But no."

It's such a simple thing to do, and yet somehow so very hard. There's something psychological going on, I reckon. The same reason most hotel chain employees have no idea what it's like to be a customer in their own hotels...

I urge you to try it. Now.



July 16, 2017

Learn TDD with Codemanship

Developers & Employers Prefer to Connect Through Networks

I ran Twitter polls on developers' preferences for finding work, and employers' preferences for finding developers.






There's significant alignment. Both devs and employers prefer to connect through their networks - employers apparently more so.

This makes a lot of sense. There's the benefit of "it takes one to know one". And hiring devs who already know your devs may mean a better team fit.

It raises the question of how people who aren't in those networks find opportunities. This has implications for diversity. Hiring entirely through networks could lead to an effectively closed shop.

We should think about how to extend our networks to include people who might not normally be included. For that, we probably need outreach.

I also noted just how little devs and employers think of recruitment agencies. That comes as no surprise when you've been in the industry as long as I have and heard so many complaints from both sides.

Handing responsibility for hiring highly skilled workers to someone who has no understanding of the work has obvious drawbacks. You may as well ask your greengrocer to do it.

How do IT recruitment agencies survive - and thrive - in a market that sees so little value in them? It's a mystery for sure. There are just so many better routes now.

I think it boils down to the interpersonal. Sales people in these agencies build relationships with people working in client organisations. It's that simple.

As rational as we like to paint ourselves, we are - in essence - just shaved apes. "Yes, there may be better ways of hiring. But Phil from Computer Bastards Ltd bought me dinner. So they get the job."

That may well change, though. Social media's making it easier for these communities to find each other. As with so many other markets, the Web could end up cutting out the middle man.



July 15, 2017

Learn TDD with Codemanship

Finding Load-Bearing Code - Thoughts On Implementation

I've been unable to shake this idea about identifying the load-bearing code in our software.

My very rough idea was to instrument the code, then run all our system or customer tests and record how many times each method is executed. The more times a method gets used (and reused), the more critical it may be, and the more of our attention it may need to make sure it isn't wrong.

This could be weighted by estimates for each test scenario of how big the impact of failure could be. But in my first pass at a tool, I'm thinking method call counts would be a simple start.

So, the plan is to inject this code into the beginning of the body of every method in the code under test (C# example), using something like Roslyn or Reflection.Emit:
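For example - just a sketch with made-up names, assuming a static MethodCallCounter class along the lines of the one below - an instrumented method would gain a single extra call as its first statement:

public class Flight
{
    public void ReserveSeat(string row, int seatNumber)
    {
        // Injected by the instrumentation: record that this method was entered
        MethodCallCounter.Increment("FlightBooking.Flight.ReserveSeat");

        // ...original method body continues as before...
    }
}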



The MethodCallCounter could be something as simple as a wrapper to a dictionary:
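Something along these lines, say. It's a minimal sketch - not thread-safe, so a ConcurrentDictionary would be wiser if tests run in parallel:

using System.Collections.Generic;

public static class MethodCallCounter
{
    // Maps a fully-qualified method name to the number of times it was entered
    private static readonly Dictionary<string, int> callCounts =
        new Dictionary<string, int>();

    public static void Increment(string methodName)
    {
        if (!callCounts.ContainsKey(methodName))
            callCounts[methodName] = 0;
        callCounts[methodName]++;
    }

    public static IReadOnlyDictionary<string, int> CallCounts
    {
        get { return callCounts; }
    }
}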



And this code, too, could be injected into the assembly we're instrumenting, or a reference added to a teeny tiny Codemanship.LoadBearing DLL.

Then a smidgen of code to write the results to a file (e.g., a spreadsheet) for further analysis.
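For example - again only a sketch, and the LoadBearingReport name is just for illustration - writing the counts out as CSV so they open straight into a spreadsheet:

using System.IO;
using System.Linq;

public static class LoadBearingReport
{
    public static void WriteCsv(string path)
    {
        // One row per method, most heavily used first
        var rows = MethodCallCounter.CallCounts
            .OrderByDescending(entry => entry.Value)
            .Select(entry => entry.Key + "," + entry.Value);
        File.WriteAllLines(path, new[] { "Method,CallCount" }.Concat(rows));
    }
}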

The next step would be to create a test context that knows how critical the current scenario is, using the customer's estimate of the potential impact of failure, so that instead of just incrementing the method call count, we add that number. Methods that get called in high-risk scenarios are then shown as bearing a bigger load.
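As a sketch - a revision of the counter above, with names that are mine rather than a finished design - the counter might grow a FailureImpact property that the runner sets before each scenario:

public static class MethodCallCounter
{
    // Set by the test runner/context before each scenario executes;
    // defaults to 1, so a plain run degrades to simple call counting
    public static int FailureImpact = 1;

    public static void Increment(string methodName)
    {
        if (!callCounts.ContainsKey(methodName))
            callCounts[methodName] = 0;
        // Add the scenario's estimated cost of failure instead of just 1
        callCounts[methodName] += FailureImpact;
    }

    // ...rest of the class as before...
}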



External to this would be a specific kind of runner (e.g., an NUnit runner, FitNesse, SpecFlow, etc.) that executes the tests, setting the FailureImpact value from information tagged in each customer test somehow.

Thoughts?



(PS. This is also kind of how I'd add logging to a system, in case you were wondering.)




July 13, 2017

Learn TDD with Codemanship

Do You Know Where Your Load-Bearing Code Is?

Do you know where your load-bearing code is?

In any system, there's some code that - if it were to fail - would be a big deal. Identifying that code helps us target our testing effort to where it's really needed.

But how do we find our load-bearing code? I'm going to propose a technique for measuring the "load-beariness" of individual methods. Let's call it criticality.

Working with your customer, identify the potential impact of failure for specific usage scenarios. It's a bit like estimating the relative value of features, only this time we're not asking "what's it worth?" We're asking "what's the potential cost of failure?" Applying the brakes in an ABS system, for example, would have a relatively very high cost of failure. Changing the font on a business report would have a relatively low cost of failure - though a feature that's low-risk by itself, but used millions of times every day, can have its risk greatly amplified.

Execute a system test case. See which methods were invoked end-to-end to pass the test. For each of those methods, assign the estimated cost of failure.

Now rinse and repeat with other key system test cases, adding the cost of failure to every method each scenario hits.

A method that's heavily reused in many low-risk scenarios could turn out to be cumulatively very critical. A method that's only executed once in a single very high-risk scenario could also be very critical.

As you play through each test case, you'll build a "heat map" of criticality in your code. Some areas will be safe and blue, some areas will be risky and red, and a few little patches of code may be white hot.

That is your load-bearing code. Target more exhaustive testing at it: random, data-driven, combinatorial, whatever makes sense. Test it more frequently. Inspect it carefully, many times with many pairs of eyes. Mathematically prove it's correct if you really need to. And, of course, do whatever you can to simplify it. Because simpler code is less likely to fail.

And you don't need code to make a start. You could calculate method criticality from, say, sequence diagrams, or from CRC cards, to give you a heads-up on how rigorous you may need to be when implementing the design.





July 10, 2017

Learn TDD with Codemanship

Codemanship Bite-Sized - 2-Hour Training Workshops for Busy Teams



One thing that clients mention often is just how difficult it is to make time for team training. A 2 or 3-day course takes your team out of action for a big chunk of time, during which nothing's getting delivered.

For those teams that struggle to find time for training, I've created a spiffing menu of action-packed 2-hour code craft workshops that can be delivered any time from 8am to 8pm.

Choose from:

  • Test-Driven Development workshops

    • Introduction to TDD

    • Specification By Example/BDD

    • Stubs, Mocks & Dummies

    • Outside-In TDD


  • Refactoring workshops

    • Refactoring 101

    • Refactoring To Patterns


  • Design Principles workshops

    • Simple Design & Tell, Don’t Ask

    • S.O.L.I.D.

    • Clean Code Metrics




To find out more, visit http://www.codemanship.co.uk/bitesized.html



July 9, 2017

Learn TDD with Codemanship

Why You Should Put Learning Opportunities Front-And-Centre In Dev Recruitment



A little Twitter poll I ran under the Codemanship account seems to confirm something that many of us have been saying for years: 41% of those polled said they could be lured from their current job by greater opportunities to learn.

Software development is a career that involves lifelong learning, and lots of it. That's how we progress. So it doesn't come as a surprise that it was ranked significantly higher than "More money".

It's surprising, then, that learning opportunities don't figure higher in dev recruitment campaigns. I've banged this drum with clients many, many times. Want to attract and retain great developers? Make learning - mentoring, conferences, training, time to read, time to share - a greater part of the job. No, scratch that. Accept that learning is the job, and build your team culture around that inescapable fact.

When you're hiring, don't just look for what candidates know now. Look for their potential to learn. And their potential to teach (e.g., by example) what they know, so others can learn from them. And clear the way for that to happen. A lot.

Sadly, such employers are too few and far between. The unreasonable and unrealistic attitude that developers should arrive knowing everything they need to know, and that no learning should happen on company time, is a leading cause of developer attrition.

It's also the reason why you've been searching in vain these last 6 months for a fluent Mandarin and Dutch-speaking full-stack JS/Node/Java/Clojure/Ruby/NoSQL/SQL/Docker/COBOL/Eiffel/Vim/Linux/Windows developer who has an HGV license and is licensed to practice medicine (salary: market rate).

Software developers tend to be highly educated, but the most important thing we learn is how to learn, and that's one of the most valuable skills your money can buy. In return, one of the most valuable perks you can offer developers is more opportunities to learn.

As a professional trainer and mentor, I am of course biased. I tell devs what I do, and they say "Wow, you must be really busy!" and I say "You'd think so, wouldn't you?" But the reality is that the majority of employers don't offer their devs any training at all, let alone time to, say, read a book.

The kind of bosses I run training for are unfortunately very much in the minority. Although, interestingly, they seem to have a lot less trouble hiring good developers.

Funny, that...


July 6, 2017

Learn TDD with Codemanship

Conceptual Correlation - Source Code + How To Build Your Own

Although it's only rough and ready, I've published the source code for my Conceptual Correlation calculator so you can get a feel for how it works and how you might implement your own in whichever language you're interested in.

It's actually only about 100 lines of code (not including tests), and if I put my brain in gear, it could well be significantly less. It's a pretty simple process (there's a rough code sketch after the list):

1. Parse the code (or the IL code, in this case) using a parser, compiler, decompiler - whatever will get you the names used in the code

2. Tokenize those code names into individual words (e.g., thisMethodName becomes "this" "method" "name")

3. Tokenize the contents of a requirements text file

4. Filter stop words (basically, noise - "the", "at", "we", "I", etc.) from these sets of words. You can find freely available lists of stop words online for many languages

5. Lemmatize the word sets - that is, boil different inflections of the same word ("report", "reports", "reporting") down to a single dictionary root

6. Optionally - just for jolly - count the occurrences of each word

7. Calculate what % of the set of code words are also contained in the requirements words

8. Output the results in a usable format (e.g., console)
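To give a flavour of steps 2, 4 and 7, here's the bare-bones sketch promised above - not the actual tool's source; parsing (step 1) and lemmatizing (step 5, which the real tool does with LemmaSharp) are left out:

using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

public static class ConceptualCorrelation
{
    // Step 2: split a PascalCase/camelCase identifier into individual words
    public static IEnumerable<string> Tokenize(string identifier)
    {
        return Regex.Matches(identifier, "[A-Z]?[a-z]+|[A-Z]+(?![a-z])|[0-9]+")
                    .Cast<Match>()
                    .Select(m => m.Value.ToLowerInvariant());
    }

    // Step 4: filter out noise words
    public static IEnumerable<string> RemoveStopWords(
        IEnumerable<string> words, ISet<string> stopWords)
    {
        return words.Where(w => !stopWords.Contains(w));
    }

    // Step 7: what % of the distinct code words also appear in the requirements?
    public static double Calculate(
        IEnumerable<string> codeWords, IEnumerable<string> requirementsWords)
    {
        var code = new HashSet<string>(codeWords);
        var requirements = new HashSet<string>(requirementsWords);
        if (code.Count == 0) return 0.0;
        return 100.0 * code.Count(requirements.Contains) / code.Count;
    }
}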

No doubt someone will show us how it can be done in a single line of F#... ;)


July 5, 2017

Learn TDD with Codemanship

A Little Test for My Conceptual Correlation Metric

Here's a little test for my prototype .NET command line tool for calculating Conceptual Correlation. Imagine we have a use case for booking seats on flights for passengers.


The passenger selects the flight they want to reserve a seat on. They choose the seat by row and seat number (e.g., row A, seat 1) and reserve it. We create a reservation for that passenger in that seat.


We write two implementations: one very domain-driven...
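Something in this spirit, say - a sketch rather than the actual project, but the point is that the vocabulary (Passenger, Flight, Seat, Reservation, Reserve) comes straight out of the use case:

using System.Collections.Generic;
using System.Linq;

public class Passenger { }

public class Reservation
{
    public readonly Passenger Passenger;
    public readonly Seat Seat;

    public Reservation(Passenger passenger, Seat seat)
    {
        Passenger = passenger;
        Seat = seat;
    }
}

public class Seat
{
    public readonly string Row;
    public readonly int Number;

    public Seat(string row, int number) { Row = row; Number = number; }

    public Reservation Reserve(Passenger passenger)
    {
        return new Reservation(passenger, this);
    }
}

public class Flight
{
    private readonly List<Seat> seats = new List<Seat>();

    public Seat SelectSeat(string row, int number)
    {
        return seats.First(s => s.Row == row && s.Number == number);
    }
}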



And one... not so much.
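Again a made-up sketch, but this time broadly the same behaviour hides behind names that share no vocabulary with the use case:

using System.Collections.Generic;

public class Manager
{
    private readonly List<Dictionary<string, object>> records =
        new List<Dictionary<string, object>>();

    // Functionally similar, but "Manager", "Process", "r" and "n" tell the
    // reader nothing about flights, seats or passengers
    public Dictionary<string, object> Process(int id, string key1, int key2)
    {
        var record = new Dictionary<string, object>
        {
            { "user", id }, { "r", key1 }, { "n", key2 }
        };
        records.Add(record);
        return record;
    }
}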



We run Conceptual.exe over our first project's binary to compare against the use case text, and get a good correlation.



Then we run it over the second project's output and get zero correlation.



QED :)

You can download the prototype here. What will it say about your code?




Learn TDD with Codemanship

Conceptual Correlation - Prototype Tool for .NET

With a few hours spare time over the last couple of days, I've had a chance to throw together a simple rough prototype of a tool that calculates the Conceptual Correlation between a .NET assembly (with a .pdb file in the same directory, very important that!) and a .txt file containing requirements descriptions. (e.g., text copied and pasted from your acceptance tests, or use case documents).

You can download it as a ZIP file, and to use it, just unzip the contents to a folder, and run the command-line Conceptual.exe with exactly 2 arguments: the first is the file name of the .NET assembly, the second is the file name of the requirements .txt.

e.g.,

Conceptual.exe "C:\MyProjects\FlightBooking\bin\debug\FlightBooking.dll" "C:\MyProjects\FlightBooking\usecases.txt"


I've been using it as an external tool in Visual Studio, with a convention-over-configuration argument of $(BinDir)\$(TargetName)$(TargetExt) $(ProjectDir)\requirements.txt



I've tried it on some fair-sized assemblies (e.g., Mono.Cecil.dll), and on some humungous text files (the entire text of my 200-page TDD book - all 30,000 words), and it's been pretty speedy on my laptop; the results have been interesting and look plausible.

Assumes code names are in PascalCase and/or camelCase.

Sure, it's no Mercedes. At this stage, I just want to see what kind of results folk are getting from their own code and their own requirements. Provided with no warranty with no technical support, use at own risk, your home is at risk if you do not keep up repayments, mind the gap, etc etc. You know the drill :)

Conceptual.exe uses Mono.Cecil to pull out code names, and LemmaSharp to lemmatize words (e.g., "reporting" and "reports" become "report"). Both are available via NuGet.
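If you're curious what the Mono.Cecil part looks like, it's roughly this - a sketch, not the tool's actual source, and property and nested type names are skipped for brevity:

using System.Collections.Generic;
using Mono.Cecil;

public static class CodeNames
{
    // Pulls the type and method names out of a compiled .NET assembly
    public static IEnumerable<string> In(string assemblyPath)
    {
        var assembly = AssemblyDefinition.ReadAssembly(assemblyPath);
        foreach (var type in assembly.MainModule.Types)
        {
            yield return type.Name;
            foreach (var method in type.Methods)
                yield return method.Name;
        }
    }
}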

Have fun!






July 4, 2017

Learn TDD with Codemanship

Are We Only Pretending To Care About Cost of Change?

Wise folk have occasionally told me - when I've claimed that "I really wanted X" - that if X was within my control, then I couldn't have wanted it badly enough, or I'd already have X.

You know, like when someone says "I really wish I knew Spanish"? Obviously, they really don't. Or they'd know Spanish.

Likewise when development teams say "I really wish we understood our end users better". Evidently not. Or we'd understand our end users better.

And, talking about it today with colleagues, there's a nice little list of things development teams are only pretending to care about. If they did, they'd have done something about it.

Take the cost of changing code. Is your team tracking that? Do you know how much it cost to add, change or delete a line of code for your last release? Do you know how the cost of changing is, well, changing?

The vast majority of teams don't keep those kinds of records, even though the information is almost always available to figure it out. Got version control? You can get a graph of code churn. Got project management or accounts? Then you know how much money was spent during those same periods. Just divide the latter by the former, and - bazinga! - cost of changing a line of code.
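To put made-up numbers on it: if version control says 20,000 lines were added, changed or deleted during a release (git log --numstat will give you the raw data), and the accounts say that release cost £150,000, then the cost of change was £150,000 / 20,000 = £7.50 per line of code. Track that release by release and you've got your trend.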

The fact that most of us don't have that number to hand strongly suggests that, despite our loudest protestations, we don't really care about it.

And what's very interesting is that it's no different within the software craftsmanship community. We talk about cost of change a great deal, but I've yet to meet a self-identifying software craftsperson who knows the cost of changing their own code.

This seems, to me, to be like a club for really serious golf enthusiasts in which nobody knows what their handicap is. At the very least, should we not be building a good-sized body of data to back up our claims that code craft really does reduce the cost of change? It's been nearly a decade since the software craftsmanship manifesto. What have we been doing with our time that's more important than verifying its central premise?