March 20, 2018

Why I Won't Take Automated "Hacker Tests" To Get Job Interviews

I'm back on the contract market - give me a shout if you're in the London area (or looking for remote workers) and could use a very experienced Java and/or C# bod - and it's been a looong time since I looked for regular work.

Much seems to be the same as it was when I was last contracting: the junior recruiters randomly filtering out candidates because their CV doesn't specifically mention that version of Spring, the dispiriting job ads that effectively say "It's a shitstorm here, but you get foosball and free breakfast!", the banks who make us wait 6 weeks for an interview date, the ever-growing lists of languages, tools and frameworks we're expected to have 500 years' experience of. Yep. It's all as I left it.

But there's something new among all this. More and more of us are apparently being asked to take some kind of automated online coding test before the employer will even consider speaking to us. I was asked this week to take a hackerrank test that lasted 90 minutes. The recruiter said the client was "very positive" about my CV. But, it turns out, this step in the recruitment process was non-negotiable.

I have no problem with being asked to demonstrate technical competence. I kind of do it for a living. I code in front of other developers on training workshops, at conferences, on YouTube, and via my Github account. I'm not hiding anything. If you want to pair program with me on a problem to see the cut of my jib, I'm okay with that.

But I draw the line at these online timed tests. The focus on them is necessarily very narrow, for a start. Maths puzzles and algorithms and "stuff about programming languages". That sort of thing. Is this the new "whiteboard interview"? (Flashbacks to interviews where someone wrote some Java on a board and asked "Will that compile?" I'm sorry, I wasn't aware we'd be compiling this software in our heads.)

I think the focus has to be narrow, because there's a limit to what can be scored automatically. Basically, "this is what we know how to measure". I understand that a lot of these tests focus on algorithms. You're asked to solve a problem, and then scored on passing acceptance tests (easy to automate) and execution time (again, easy to automate).

While I agree that passing acceptance tests is kind of important, I worry about the next biggest factor being Big O-style algorithmic efficiency. Maybe my solution is slower, but easier to understand, for example. And it's sneaky, too. If there are performance criteria, we should be told what they are up-front. I'm not in the business of making code faster than it needs to be just as a matter of course.
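To make that trade-off concrete, here's a minimal, made-up sketch - the problem and names are mine, not from any actual test - of two solutions to the same exercise (maximum sum of a contiguous sub-list). The brute-force version is O(n²) but reads like the problem statement; Kadane's algorithm is O(n) and would score better on execution time, but its logic takes a moment longer to see:

    import java.util.List;

    // Hypothetical illustration only: readability vs. algorithmic efficiency.
    public class MaxSubSum {

        // Straightforward version: try every start/end pair. O(n^2),
        // but the intent is obvious at a glance.
        static int maxSubSumClear(List<Integer> xs) {
            int best = Integer.MIN_VALUE;
            for (int i = 0; i < xs.size(); i++) {
                int running = 0;
                for (int j = i; j < xs.size(); j++) {
                    running += xs.get(j);
                    best = Math.max(best, running);
                }
            }
            return best;
        }

        // Kadane's algorithm: O(n), faster on big inputs, but the invariant
        // ("best sum of a sub-list ending at this element") is less obvious.
        static int maxSubSumFast(List<Integer> xs) {
            int best = Integer.MIN_VALUE;
            int endingHere = 0;
            for (int x : xs) {
                endingHere = Math.max(x, endingHere + x);
                best = Math.max(best, endingHere);
            }
            return best;
        }

        public static void main(String[] args) {
            List<Integer> sample = List.of(2, -5, 3, 4, -1, 2);
            System.out.println(maxSubSumClear(sample)); // prints 8
            System.out.println(maxSubSumFast(sample));  // prints 8
        }
    }

Both pass the same acceptance tests. An automated scorer that also weighs execution time rewards the second; a human reviewer might reasonably prefer the first for code that isn't on a hot path.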

I also worry about the competition element of some of these tests, especially given the narrow focus. I do not rank "hackers" by their ability to create efficient algorithms alone, or by their in-depth knowledge of Java syntax. Let's measure something else to illustrate what I mean: in my Team Dojo, developers have to work together to solve a set of non-trivial problems. They also score points for passing acceptance tests. And what I've learned from watching hundreds of teams take this test is that individual technical ability is a poor predictor of team performance. Teams of coding ninjas are routinely outperformed by teams of average devs who just work together better. It's quite inspiring to watch.

My other objection to taking these tests is the time candidates are expected to invest speculatively, just to be considered for interview. If you're on the market for work, you may be making multiple applications every week. What if they all ask you to take one of these tests, just to be considered? This creates a big overhead for candidates. If you're coming to the end of a contract, have young kids at home, are caring for a relative, or have other time commitments, where are you going to find 90 minutes in your day just to prove that you know LINQ? Every. Single. Time. You. Apply.

I would be in favour of a website where devs can once-and-for-all demonstrate their competence in something. Not every time an employer says "dance for me!" I thought this site was called "Github", but that shows what I know.

But I'm not in favour of this cookie-cutter, one-size-fits-all approach to filtering. I guess my real gripe about being asked by a well-known Agile consultancy to take a hackerrank test is not that they asked, but that there was simply no other way of demonstrating my coding chops that they'd consider.

In discussing this with other developers, it seems as if there's a "horses for courses" situation here. Not everyone codes in their spare time, not everyone has a portfolio of stuff (e.g., on Github) they can point to. Not everyone shines when they're put on the spot. Not everyone likes to take stuff away and work alone. There's no one single way that will give every developer a chance to show us what they can really do.

Perhaps what I'm saying here is that we should let the candidate choose how they demonstrate technical competence. I might say "take a look at my screencasts" or "let me fire up Zoom.us and pair with one of your devs" or "how about I come in and run a little hands-on workshop?" Someone else might want to do the hackerrank test, perhaps because they lack job experience and need to demonstrate some raw ability, or maybe they just get nervous with new people. Someone else might want to do a - gulp - whiteboard interview because they worry they'll mess up coding in front of other people, but can demonstrate how much they've learned.

The point is that I can tell shit from shinola any of these ways. If you suck, and you have a portfolio, I'll know it from looking at that. If you suck and we pair, I'll know soon enough. If you suck and take a hackerrank test... I'll still want to see the code. But eventually, I'll know. (So we might as well look at your Github.) And if you suck and we get around a whiteboard, I'll get it from that, too.

It seems to me that these automated coding tests are an attempt to remove the "it takes one to know one" element from filtering candidates. My contention is that you can't. That kind of machine intelligence is still decades away. Meanwhile, we're stuck with people assessing other people. And it helps enormously if those people know what they're looking at (and what they're looking for). That's what needs fixing here.

There's no economy of scale in software development. Why would we believe there's economy of scale in software developer recruitment? That's the problem these online tests claim to solve, but - evidently - they haven't solved it. They just filter out experienced candidates like the many developers I've spoken to.

So we might as well let candidates put their best foot forward and let them decide which foot that is.

Otherwise the end result is you filter out a lot of good people who'd be great additions to your team, but who just don't fit in your recruiting process.

Perhaps we need a Dev Recruitment Manifesto?
