March 17, 2014

Learn TDD with Codemanship

Blind Pair Programming Interviews - Early Experience Report

A week ago, I undertook an experiment for a client. We'd both been discussing the problem of interviewer bias in the recruitment process, and wanted to see what would happen if the interviewer had nothing to go on except for the candidate's code.

Blind interviews are a challenge to undertake. It's very difficult to learn enough about a candidate's employable qualities without personal details filtering in that might reveal, directly or indirectly, their probable age, their gender, their ethnicity, and so on.

Certainly, upon seeing the candidate, all sorts of personal prejudices may come into play. It's very hard in a face-to-face meeting to disguise the fact that you are, say, black or that you are a woman, or that you are middle-aged.

So, in a blind interview, we do not meet the candidate face-to-face.

The voice, too, can come loaded with information that plays to our prejudices. An accent may reveal that they are not from around here. It may reveal that they are possibly of a certain "class". It would almost certainly reveal their gender.

So, in a blind interview, we do not hear the candidate's voice.

This leaves us with the written word. In this modern age, it's possible to hold a conversation in real time using just text. Instant messaging opens a door that allows interviewers to find out more about a candidate without seeing their face or hearing their voice. But even through that medium, personal details and clues about a candidate might be inadvertently revealed. Use of certain colloquialisms might place them in a particular part of the world. A casual mention of a particular movie they like, or a pop record they listen to, might place them in the typical demographic for those products.

This is also true of CVs. Even if a CV has been redacted to remove the candidate's name, age, address and so on, it can still reveal roughly how long they've been working in the industry, and therefore their probable age.

CVs also invite other biases, like whether or not the candidate went to university (possibly one of the poorest indicators of ability as a software developer, we've found.)

The whole point of a blind interview is to know none of these things: not who they are, where they come from, what colour their skin is, which god or gods they pray to, what their favourite Pixar movie is... none of it. We are attempting to remove reasons not to favour a candidate other than how good they are at writing software.

It's unfair to reject a candidate because they're too young or too old. But it's entirely fair to reject them because they used public setters to initialise an object instead of a constructor, or because they never run the tests after they've refactored (or because they never refactor unless you prompt them to.)
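To illustrate the kind of code-level signal I mean - this is a generic sketch with invented names, not code from the actual sessions - compare initialising an object through public setters with initialising it through a constructor:

```java
// Fragile: the object can exist in a half-initialised state,
// and nothing stops a caller from forgetting to call a setter.
class MutableInvoice {
    private String customer;
    private double amount;

    public void setCustomer(String customer) { this.customer = customer; }
    public void setAmount(double amount) { this.amount = amount; }
}

// Safer: the constructor guarantees every field is set before
// the object can be used, and the fields can be made final.
class Invoice {
    private final String customer;
    private final double amount;

    public Invoice(String customer, double amount) {
        this.customer = customer;
        this.amount = amount;
    }

    public String customer() { return customer; }
    public double amount() { return amount; }
}
```

The second version makes an invalid `Invoice` unrepresentable, which is exactly the kind of design instinct a pairing session surfaces quickly.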

It's also entirely fair if they fail to comprehend requirements, provided you are sure those requirements are expressed clearly enough (which I advise you to test beforehand.) Or if they're argumentative and always find reasons not to do what you're asking them to do. Or if they won't accept feedback, provided it's constructive (e.g., "do you think we should write a test for this?")

And so we set sail on strange waters with a round of 6 pair programming sessions with 6 candidates about whom I knew absolutely nothing, except for what programming languages the candidates claimed to know. (Although even that can be revealing - if they say "FORTRAN, C and Lisp", they may be my age or older.)

Luckily, in this case, they all had to know Java, and I didn't need to know any more than that.

Each pairing session started exactly the same way. They IM'd me on Skype, through an account we'd set up for these interviews, to indicate readiness: "Ready".

I then asked them for details to log into a TeamViewer session hosted on their desktop, so I could see their screen.

I copied and pasted (gasp!) a set preamble to introduce the exercise we were about to do. In this experiment, I chose the Codemanship Refactoring Assault Course, as I tend to find refactoring a good indicator of general programming ability.

I asked them to find and refactor away as many code smells as they could in the assault course code within the next 30 minutes. That's really all the time I need to get the measure of them as refactorers of code.

In the experiment, 2 of our candidates went offline at this point and I didn't hear from them again. This is to be expected. In typical interviews, a sizeable proportion of developers who claim refactoring experience on their CV don't even know what the word means.

I asked the remaining 4 to download the Java code and import it into their development environment.

Of the four remaining, 2 ran the tests as soon as they'd imported the code - an early indicator that accurately predicted what was to follow.

The rest of the pairing session was pretty normal. I kept the conversation completely focused on the code in front of us, and why the candidates were doing what they were doing.

One candidate seemed unaware of "code smells" and proceeded to add copious comments to make the code more "readable". They also inlined a few methods, because they felt there were "too many methods". At no point did they run the tests. Or even look at the tests.

Two candidates did pretty well; one of them in particular did very well and even tackled the switch statement successfully.
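The assault course code itself isn't reproduced here, but the classic move for a switch smell - Replace Conditional with Polymorphism - looks something like this (illustrative names, not the actual exercise):

```java
// Before: a switch on a type code. Add a new employee type and
// this method - and every method like it - has to change.
class Payroll {
    static double pay(String type, double hours) {
        switch (type) {
            case "REGULAR": return hours * 10.0;
            case "MANAGER": return hours * 20.0;
            default: throw new IllegalArgumentException(type);
        }
    }
}

// After: each type knows its own rate, so adding a new type
// means adding a new class rather than editing a switch.
interface Employee {
    double pay(double hours);
}

class Regular implements Employee {
    public double pay(double hours) { return hours * 10.0; }
}

class Manager implements Employee {
    public double pay(double hours) { return hours * 20.0; }
}
```

Completing a refactoring like this without breaking the tests, inside 30 minutes, is a strong positive signal.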

In the end, I wanted to keep things as objective as I could. So, instead of writing up my feelings about each candidate, I simply offered a summary of the code smells they found, whether or not they applied meaningful refactorings to them, and in what ways the code was improved. I also scored them on how frequently they ran the tests, on checking the code was tested before attempting a refactoring (e.g., with a code coverage tool, or simply by editing a line of code and seeing if any tests failed - none of them did that), and on their apparent fluency with tools they claim to use every day.
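That "edit a line and see if a test fails" check is essentially manual mutation testing. A sketch of the idea, with invented code rather than the assault course itself:

```java
class OrderLine {
    // Before refactoring this method, deliberately break it -
    // e.g. change * to + - and run the tests. If nothing fails,
    // the method isn't adequately covered and refactoring it
    // without a safety net is risky.
    static double total(double price, int quantity) {
        return price * quantity;
    }
}
```

A test that pins down `total(2.5, 4) == 10.0` will catch the mutation; a suite that merely calls the method without asserting on the result will not.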

From these sessions, I recommended two of the candidates - one more highly than the other by a country mile, I have to say - and suggested the other four be rejected.

I have no idea who these candidates were. But the client, of course, does. It will be interesting to see what happens now.

I've yet to find out who got hired, if any of them did. But I'm keen to learn whether my recommendation - based purely on how they performed in the pairing sessions - was followed through.

Posted on March 17, 2014