February 25, 2014


Why Code Inspections Need To Be Egalitarian

The debate rages on about Uncle Bob's blog post advocating what he calls a "foreman" on software development teams: someone who takes responsibility for the quality of commits made by team members.

Rob Bowley of 7digital goes on to suggest that he no longer needs to inspect code quality, choosing instead to measure the effects of code quality on the reliability, frequency and sustainability of releases. This echoes a discussion I had with Rob a few years ago, when he showed me the suite of metrics he'd published - a brave move - on 7digital's software development performance. Would that other software organisations were prepared to be so transparent.

That assertion goes back to my "Software Craftsmanship Imperative" keynote, which was doing the rounds in 2010-2011.



The point is this: your customer or boss is not going to care about "clean code", no matter how much you try to persuade him or her. I've always maintained that craftsmanship is not an end in itself. There's a reason why code quality is important, and I've learned from bitter experience that you need to focus on that reason with people who aren't writing the code.

Having said that, smart developers who understand the causal link will inspect their code continuously and be ever-vigilant for things that might hinder progress later. So it's quite common to find practices like pair programming, code inspections, static analysis and all that malarkey going on in high-functioning teams.

But let's be clear about who the audience is for these two distinct but closely related pictures: code metrics and the like are for the people writing the code, providing early warning about maintainability "bugs" and serving as a tool for learning to write more maintainable code. Release statistics are for everyone (developers included) who cares about the sustainability of innovation.
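To make that concrete - and this is purely an illustrative sketch of my own devising, not anything 7digital or anyone else actually ran - an "early warning" metric can be as crude as flagging functions that have grown suspiciously long. The 20-line threshold is an arbitrary assumption for the sake of the example:

    import ast
    import sys

    # Illustrative only: flag functions longer than an arbitrary
    # threshold, as a crude proxy for maintainability trouble.
    # Requires Python 3.8+ (for end_lineno on AST nodes).
    MAX_LINES = 20  # assumed threshold, not a recommendation

    def long_functions(source):
        """Yield (name, length in lines) for each function definition."""
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.FunctionDef):
                yield node.name, node.end_lineno - node.lineno + 1

    if __name__ == "__main__":
        with open(sys.argv[1]) as f:
            for name, length in long_functions(f.read()):
                if length > MAX_LINES:
                    print(f"{name}: {length} lines - might be worth a look")

The point isn't the metric itself; it's who consumes the output. A report like this belongs on the developers' own screens, as a prompt to look closer, not on a manager's dashboard as a stick to beat the team with.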

To use another broken metaphor, code quality is data about the workings of your engine, whereas release statistics are about the progress of your journey. High-functioning teams build a picture that can show how tinkering with the engine improves progress on a long journey, which most software development turns out to be (even when the boss insists it's just going to be a short trip to the shops).

My own experiences of being asked by managers to impose the wrong kind of picture on the wrong kind of audience have made me extremely wary of doing that, especially in the last 4-5 years.

Most importantly, I've learned that - when it comes to inspections and code quality - you can lead a horse to water, but you can't make it report untested code. The developers have got to want the information, because they believe it will help them, and have got to seek it for themselves. This is perhaps exemplified by the experimental TDD "apprenticeship" scheme we ran at the BBC in their TV Platforms team.

The same applies if one person on the team (call him/her a "foreman" if you like) tries to impose such a regime on the other - probably less willing - members. It just doesn't work.

Not only are the team likely to resent having their code's pants pulled down in such a manner for all to see, but - if they've not been paying attention to code quality as much as they should - the picture revealed is likely to dishearten them and dent team morale.

Once you've handed their code its arse, what then? So now they know it sucks. What are they going to do about it? Do they want to do anything about it? Can they do anything about it? Would they know what to do about it?

Much as I'd like to believe I have the power as a coach to make developers who don't care about code quality care about code quality, the reality is that the best I can do is to make them aware of its existence as a thing that some developers care about. And then we're back to the horse and the water and the drinking.

As a software development coach in the same organisation where Rob Bowley and I met, I tried it both ways. I made the mistake of imposing code quality metrics on some teams. But I also discovered something that has completely changed my outlook on what it is I do for software organisations.

I made a deliberate choice right at the start of my time in that organisation to focus more on developer culture. I immediately instigated internal events - totally voluntary - aimed at developers, roped in some inspiring names to come in and rally the troops, and gradually encouraged the developers there to see themselves as a community. More importantly, as a community that cares about software development.

I was told unequivocally by the people who hired me that there was no point. These people were not motivated. They didn't care, and couldn't be made to care. And they were right. From above, or from the outside, you cannot make people care. But you can build a culture in which it's easier to care than it is not to care.

From that wellspring came much nascent talent that had been languishing in a command-and-control culture. Some of those developers have become software development coaches themselves in the intervening years; others now lead successful development organisations. So much for "don't care, won't care".

Ultimately, my point is this: as with all technical decisions that can be made for a development team, it works best when the team makes it. You can't force people, con people, bribe people or blackmail them into caring. And if they don't care, you can point out their code quality shortcomings as much as you like - they're not going to fix them.







