November 22, 2014

Continuous Inspection I - Why Do We Need It?

This is the first of a series of posts about Continuous Inspection. My goal here is to give you something to think about, rather than to present a complete hands-on guide. The range (and maturity) of tools and techniques we can apply to Continuous Inspection (I'll call it CInsp from now on to save a few keystrokes) is such that I could write 1,000 blog posts and still not cover it all. So here I'll just focus on general CInsp principles and illustrate with cherry-picked examples.

In this first post, I want to summarise what I mean by "Continuous Inspection" and argue that there's a real need for it on most software development teams.

Continuous Inspection is the practice of - and stop me if I'm getting too technical here - continuously inspecting your code to detect non-functional issues in the software.

CInsp is just another kind of Continuous Testing, which is a cornerstone of Continuous Delivery. To have our software always in a shippable state, we must take steps to assure ourselves that the software is always working.

If we follow the thinking behind continuous testing (and re-testing) of our software to check that it still works, the benefit is that we never stray more than a few minutes from having something we could ship if the business wanted us to.

To date, the only practical way we've found to achieve Continuous Testing is to automate those tests as much as possible, so they can be run quickly and economically. If it takes you 2 weeks to re-test your software, then after each change you make to the code, you are at least 2 weeks away from knowing if the software still works. Manual testing makes Continuous Delivery impractical.

In recent years, automated testing - and especially automated unit testing - has grown in popularity, and the effects can be seen in teams delivering more reliably and more sustainably as a result.

But only to a point.

What I've observed across hundreds of teams over the last decade or more is that, even with high levels of automated testing, the pace of delivery still slows to unacceptable levels.

In order to sustain the pace of change, the code itself needs to remain open to change. Being able to quickly regression test our software is a boon in this respect, no doubt. But it doesn't address the whole picture.

There are other things that can hamper change in our code. If the code's complicated, for example, it will be more likely to break when we change it. If there's duplication in our code - if we've been a bit trigger-happy with Copy+Paste - then that can multiply the cost of making a change. If we've not paid attention to the dependencies in our code, small changes can cause big ripples through the code and amplify the cost.

As we make progress in delivering functionality we tend also to make a mess inside the software, and that mess can get in our way and impede future progress. To maintain the pace of innovation over months and years and get the most out of our investment over the lifetime of a software product, we need to keep our code clean.

Experienced developers view design issues that impede progress in their code as bugs, and they can be every bit as serious as bugs in the functionality of the software.

And, just like functional bugs, these code quality bugs (often referred to as "code smells", because they're indicative of your code "rotting" as it grows) have a tendency to get harder and more expensive to fix the longer we leave them.

Duplication has a tendency to grow, as does complexity. We build more dependencies on top of our dependencies. Switch statements get longer. Long parameter lists get longer. Big classes get bigger. And so on.

Here's what I've discovered from examining hundreds of code bases over the years: code smells that get committed into the code are very likely to remain for the lifetime of the software.

There seems to be a line that, once we've crossed it, means our mistakes are likely to live forever (and impede us forever). From observation, I've found that this line is the point at which we move on.

In the Test-driven Development cycle, for example, I've seen that when developers move on to the next failing test, any code smells they leave behind will likely not get addressed later. In programming, "later" is a distant and alien land where all our little TO-DO's never get done. "Later" might as well be "Narnia".

Even more so, when developers commit their code to a shared repository, at that point code smells "petrify", and remain forever trapped in the amber of all the other code that surrounds them. 90% of code smells introduced in committed code never get fixed.

This is partly because most teams have no processes for identifying and addressing code quality problems. But even the ones who do tend to find that their approach, while better than nothing, is not up to the task of keeping the code as clean as it needs to be to maintain the pace of change the customer needs.

Why? Well, let's look at the kinds of techniques teams use these days:

1. Code Reviews

There's a joke that goes something like this: "Ask a developer what's wrong with a line of code, and she'll give you a list. Ask her what's wrong with 500 lines of code, and she'll tell you it's fine."

Code reviews have a tendency to store up large amounts of code - potentially containing large numbers of issues - for consideration. The problem here is seeing the wood for the trees. A lot of issues get overlooked in the confusion.

But even if code reviews identified all of the code quality issues, the economics of fixing those issues is working against us. Fixing bugs - functional or non-functional - tends to get exponentially more expensive the longer we leave them in the code, and for precisely the same reasons (longer feedback cycles).

In practice, while rigorous code reviews would be a step forward for many teams who don't do them at all, they are still very much shutting the stable door after the horse has bolted.

2. Pair Programming

In theory, pair programming is a continuous code review where the "navigator" is being especially vigilant to code quality issues and points them out as soon as they spot them. In some cases, this is pretty much how it works. But, sad to say, in the majority of pairs, code quality issues are not high on anyone's agenda.

This is for two good reasons: firstly, most developers are not all that aware of code smells. They don't figure high in our list of priorities. Code quality isn't sexy, and doesn't get you hired at IronicBeards.com.

Secondly, with the best will in the world, people have limitations. When Codemanship does pairing to assess a developer's skill level in certain practices, the level of focus required on what the other person's doing is really quite intense. You don't take your eye off the screen in case you miss something. But there are dozens of code smells we need to be vigilant for, and even with all my experience and know-how, I can't catch them all. My mind will have to skip between lots of competing concerns, and when my remaining brain cells are tied up trying to remember how to do something with Swing, I'm likely to take my eye off the code quality ball. It's also very difficult to maintain that level of focus hour after hour, day-in and day-out. It hurts my brain.

Pair programming, as an approach to guarding against code smells, is good when it's done well. But it's not so good that we can be assured code written in this way will be maintainable enough.

3. Design Authorities

By far the least effective route to ensuring code quality is to make it someone else's job.

Hiring architects or "technical design authorities" suffers from all the shortcomings of code reviews and pair programming, and then adds a big bunch of new shortcomings.

Putting aside the fact that almost every architect or TDA I've ever met has been mostly focused on "the big picture", and that I've seen 1,000-line switch statements waved through the quality gate by people obsessing over whether classes implement certain interfaces they've prescribed, turning design authorities into design quality testers never seems to end well. Who wants to spend their day scouring other people's code for examples of Feature Envy?

I'll say no more, except to summarise by observing that the code I've seen produced by teams with dedicated design authorities counts amongst the worst for code quality.

4. Coding Standards

In theory, a team's coding standards are a codification of what we all agree we mean by "good code".

Typically, these are written down in documents that nobody ever reads, and suffer from the same practical drawbacks as architecture documents and company mission statements. They're aspirational affirmations at best. But, in practice, everybody just ignores them.

Even on those more disciplined teams that try to adhere to coding standards, they still have major drawbacks, all relating back to things we've already discussed.

Firstly, coding standards are a list of "stuff" we need to be thinking about along with all the other "stuff" we have to think about. So they tend to take a lower priority and often get overlooked.

Secondly, speaking as someone who's studied a lot of coding standards documents (and what joy they bring!), I can tell you they have a tendency to be both arbitrary and by no means universally agreed upon. Often they've been written by some kind of design or development authority, usually with little or no input from the team they're being imposed on. It's rare for issues that affect maintainability to be addressed in a coding standards document. Programmers are a funny bunch: we care deeply about some weird stuff while Elephants In The Room creep in without being questioned and sit on us. The naming conventions these documents obsess over have little relation to how easy the code will be to read and understand. And it's rare to see duplication, dependencies, complexity and so on even being hinted at. As long as all your instances have names beginning with obj and all your private member variables begin with "m_", the gods of code goodness will be appeased.

And then there's the question of how and when we enforce coding standards. And we're back to the hard physics of software development - time and money. Knowing what we should be looking for is only the tip of the code quality iceberg.

What's needed is the ability to do code reviews so frequently, and do them in a way that's so effective, that we never stray more than a few minutes from clean code. For this, we need code reviewers who miss very little, who are constantly looking at the code, and who never get tired or distracted.

For that, thankfully, we have computers.

Program code is like any other domain model; we can write programs to reason about the design of other programs, expressed in terms of the structure of code itself.

Code quality rules are just like any other computable business rules. If the rule is that a block of code in one class should not make copious references to features in another class ("Feature Envy"), it's possible to write an automated test that reads code and looks at those references to determine if that block of code is in the right place.

Let's illustrate with a technology example. Imagine we're working in Java in, say, Eclipse. We could write code for a plug-in that, whenever we make a change to the code document we're working on, reads the code's Abstract Syntax Tree (basically, a code DOM) and calculates the ratio of internal to external dependencies in the Java method we just changed. If the ratio is too low, it could flag it up as a warning while we're writing the code.
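
Here's a rough sketch of what such a rule might look like as executable code. It's purely illustrative: in a real tool the reference counts would come from a parser or the IDE's syntax tree, and all the names below are invented for the example.

    public class FeatureEnvyCheck {

        // In a real tool these counts would come from a parser or the IDE's
        // Abstract Syntax Tree; here they're just supplied as plain data.
        static class MethodReferences {
            final String methodName;
            final int ownClassReferences;   // references to features of the method's own class
            final int otherClassReferences; // references to features of other classes

            MethodReferences(String methodName, int own, int other) {
                this.methodName = methodName;
                this.ownClassReferences = own;
                this.otherClassReferences = other;
            }
        }

        // The rule: a block of code that refers to features of another class
        // more than to features of its own class is a Feature Envy suspect.
        static boolean hasFeatureEnvy(MethodReferences method) {
            return method.otherClassReferences > method.ownClassReferences;
        }

        public static void main(String[] args) {
            MethodReferences suspect = new MethodReferences("calculateInvoiceTotal", 1, 7);
            if (hasFeatureEnvy(suspect)) {
                System.out.println("WARNING: possible Feature Envy in " + suspect.methodName);
            }
        }
    }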

The computational power of computers is such today that this sort of continuous background code reviewing is practically possible, and there have already been some early attempts to create just such plug-ins.

In the article I wrote a few years ago for Visual Studio Journal, Ever-decreasing Cycles, I speculate about the impact such short code quality feedback loops might have on the economics of development.

It's my belief that, just as continuous automated unit testing has had a profound effect on the "bottom line" of software development for many teams and businesses, so too would Continuous Inspection.

In the next blog post, I'll talk about the CInsp process and look at practical ways of managing CInsp requirements, test automation and how we action the code quality problems it can throw up.




November 19, 2014

In 2015, I Are Be Mostly Talking About... Continuous Inspection

Just a quick FYI, for event organisers: after focusing this year on software apprenticeships, in 2015 I'll be focusing on Continuous Inspection.

A critically overlooked aspect of Continuous Delivery is the need to maintain the internal quality of our software to enable us to sustain the pace of innovation. Experience teaches us that Continuous Delivery is not sustainable without Clean Code.

Traditional and Agile approaches to maintaining code quality, like code reviews and Pair Programming, have shown themselves to fall short of the level of rigour teams need to apply. While we place great emphasis on automated testing to ensure functional quality, we fall back on ad hoc and highly subjective approaches for non-functional quality, with predictable results.

Just as with functional bugs, code quality "bugs" are best caught early, and for this we find we need some kind of Continuous Testing approach to raise the alarm as soon after code smells are introduced as possible.

Continuous Inspection is the missing discipline in Continuous Delivery. It is essentially continuous non-functional testing of our code to ensure that we will be able to change it later.

In my conference tutorials, participants will learn how to implement Continuous Inspection using readily available off-the-shelf tools like Checkstyle, Simian, Emma, Java/NDepend and Sonar, as well as rigging up our own bespoke code quality tests using more advanced techniques with reflection and parser generators like ANTLR.

They will also learn about key Continuous Inspection practices that can be used to more effectively manage the process and deliver more valuable results, like Non-functional Stories, Clean Code Check-ins, Build Inspections and Rising Tides (a practice that can be applied to incrementally improving the maintainability of legacy code.)

If you think your audience might find this interesting, drop me a line. I think this is an important and undervalued practice, and want to reach as many developers as possible in 2015.




November 6, 2014

Personal Craftsmanship Coaching Sessions To Raise Money To Feed People In Need This Winter

As part of an Indiegogo campaign to raise £1,000 for hot meals for people in need this winter, I'm offering 10x 2-hour personal coaching sessions for developers in TDD, refactoring, SOLID and other aspects of software craftsmanship.

These would be online pairing sessions, and we can work in any language you like on any problem you like, just as long as the emphasis is on craftsmanship.

The money raised from one coaching session would pay for 100 hot meals for people struggling with poverty this winter.

So if you fancy a bit of one-on-one coaching to help you with any questions or obstacles you're wrestling with in your quest to become a better code crafter, and would like to help people in real need at the same time, please take a look.






October 24, 2014

Refactoring Is Dead (Hard)

Jason Yip tweets jokingly that he's waiting for someone to write "Refactoring Is Dead".

Well, here it is.

The "TDD is Dead" meme started with a blog post with that clickbaity title, arguing that folk are doing TDD wrong. So it seems in that same spirit to argue that "refactoring is dead" for a similar reason.

But that would be inaccurate. It would be more accurate to say that refactoring was never really alive in the first place.

As a practice, refactoring has never really taken off. It's one of those things that many claim to have done on their CVs, but few actually ever have, safe in the knowledge that the person interviewing them is probably lying about their refactoring experience, too.

Pair with 100 Agile developers, and if you're lucky you may find a handful who even understand what refactoring really is. Having paired with thousands of developers over the years, I know that perhaps 1% of us can actually do it for real.

And the reason for this is simple. Refactoring, as a skill and a discipline, takes a lot of time and study and practice to get competent at. It's like swimming - it exercises all of the muscles developers use. There have to be unit tests. You have to have an eye for code quality problems ("code smells"). You have to understand the mechanics of each refactoring, which requires the ability to reason about code in a way that's more rigorous than most of us are used to. And, once you get past the basics to the good and most useful stuff, it requires the ability to think several refactorings ahead - like a chess player - so that you can restructure and manipulate complex code into the desired forms. If TDD is riding a bicycle, then refactoring is flying a helicopter.

In summary, refactoring is hard. Dead hard.


UPDATE: Yes, I know that TDD involves refactoring. But I also know that a majority of developers who say they do TDD do little or no refactoring as part of it. And the ones who do are doing very simple refactorings. Legacy code is a whole different ball of wax.





October 23, 2014

Software Craftsmanship 2015 - Registration Page Preview

So, SC2015 preparations are gathering pace. We should be opening soon for registration, and here's a preview of the blurb that will be on the registration page, to whet your appetite:



Software Craftsmanship 2015 is the sixth international software craftsmanship conference, organised by Codemanship.

http://www.softwarecraftsmanship.co.uk

All of the profits from SC2015 will be put towards programming and maths education projects at Bletchley Park and The National Museum of Computing.

People who came to previous SC20xx events told us that what they enjoyed best was exploring and getting hands-on with programming challenges, talking to and pair programming with lots of different people.

We want to maximise that experience at SC2015.

So, at SC2015, not only will talks still be banned, but we're doing away with the whole idea of sessions and schedules altogether.

Want to spend all day trying different programming challenges?
Want to spend all day tackling one programming challenge 8 times with 8 different pairing partners?
Want to do fun programming stuff in the morning, and then spend the afternoon shooting the breeze with interesting people who care about the things you do?
Want to take your laptop down to the lake and do live blogging all day?

At SC2015, you can!

Our goal is to make SC2015 as loose as possible. Your day is your own.

What we'll provide is a unique venue with an amazing history, a decent Wi-Fi connection, power sockets for your laptops, refreshments, and 100 of the most passionate and open-minded software developers for you to bounce ideas off, learn from and socialise with.

We'll be asking people to submit their ideas for programming challenges and mini-projects which will hopefully fire the creative juices on the day, and you'll be able to pick these up and try any that take your fancy.

These could be things like "Build your own code metrics tool", or "Write a program that plays another program at poker and start a poker tournament at SC2015" or "Write a program that reverses time and visit the dinosaurs", or whatever flight of fancy our contributors dare to take. As long as it has an element of craftsmanship to it, and exercises the code craft muscles, we're interested in hearing about it. Use the organiser's email address at the bottom to get in touch if you have an idea (or several). A proper grown-up RFP will go out soon.

The creators of these challenges will be around on the day to answer questions and give tips and advice based on their own experiences tackling the project. Some might take about an hour. Some might take you all day. Estimated completion times will be clearly signposted to help you plan your day.

If none of them take your fancy, do your own thing. Or take a wander around beautiful and historic Bletchley Park, including the amazing National Museum of Computing, and see if the muse grabs you.

This year, we'll also be promoting the theme of Developer Culture, Ethics & Diversity in the informal discussions going on throughout the day. We'll be asking a group of volunteers to prepare "Conversation Cards" - yes, just like in Monty Python & The Meaning of Life - with interesting statistics, thought-provoking facts and (hopefully) conversation-inspiring questions to inspire discussion, brainstorming and debate. And, yes, if you just want to spend the entire day discussing issues with your peers, at SC2015 you can.

Doors will open at 9am, where you can register and pick up your information pack, grab a tea or a coffee, and then relax for a bit, or jump straight in. It's entirely up to you.

If you decide to stay with us for the evening, there will be sausages! Oh yes! (And veggie sausages, too.) And booze, soft drinks and entertaining diversions. None of which are compulsory, of course. If you want to sit outside and gaze at the stars instead, at SC2015 you can.

Of course, it can't be complete anarchy. We do have a very simple schedule:

09:00 Doors open for registration (with Tea & Coffee)
09:30 Welcome by organiser Jason Gorman
09:35 Your Time
11:00 Tea & Coffee served
12:30 Lunch
15:00 Tea & Coffee served
16:30 Bar opens
18:00 Mansion closed briefly for evening changeover
18:30 Dinner served
20:45 Bar closes
21:00 Mansion closes

Won't Someone Please Think Of The Software?!

One hope I harbour for putting proper computing - especially programming - back into the British school curriculum is that, having had a go at making computers do stuff by typing text into them, today's kids will grow up to be the kind of responsible, enlightened adults who know from experience that it takes time and costs money to write software (and many times more to change software that's already been written.)

Because I've noticed that, right now, with a generation of adults most of whom have no practical experience of programming, that thought barely registers.

Exhibit A is the commentator insisting that scrapping Greenwich Mean Time in the UK (our clocks go back an hour in the winter) would be an easy win for the government as it would be "a very popular move that would cost nothing". I beg to differ. Time and time zones and daylight savings and GMT and all that paraphernalia take up a lot of source code; code that would need to be changed. Changing code is very expensive.

This is quite typical of how computer muggles think - or, more accurately, don't think - when it comes to matters involving software. More often than not, it just never crosses their minds.

Nowhere is this more visible than in politics; parties gleefully dream up policies like the now-very-very-late-and-getting-later Universal Credit benefits system or a national database of this-that-and-the-other or an online system for geo-tagging naughty donkeys, or whatever the hare-brained policy is, without giving any thought to how much the IT would cost, or if it's even feasible. (TIME SAVING TIP: If it involves IT and government, it's usually not feasible. "Hello, World" could cost billions and still not work after a decade, if the right - i.e., wrong - suppliers got involved. And they always do get involved.)

But this is the Information Age, and that ought to be a clue that anything any government, business, charity or other organisation wants to do on a large scale is going to involve processing large amounts of information, and that means it's going to involve computers and very probably new software or changes to existing software.

Even after all this hoo-hah about the Year Of Code and computing in schools and TechCityUK and all that palaver, politicians still haven't seen the light. The fact that political parties don't have a dedicated spokesperson for government computing says it all. They don't think about this stuff until they absolutely have to, by which time the bill has passed, the policy is set in stone and necks are on the chopping block. And once everyone's committed, it seems no amount of money is too much to flush down the toilet of bad IT strategies. Essentially, we have to wait until that government gets voted out and a new broom can come in, acknowledge its predecessor's mistakes and scrap the programme, by which time hundreds of millions, or even billions, of public funds have been poured down the drain.

My appeal to them: won't someone please think of the software?!


October 16, 2014

Dear Aunty Jason, I'm An Architect Who Wants To be Relevant Again...

"Dear Aunty Jason,

I am a software architect. You know, like in the 90's.

For years, my life was great. I was the CTO's champion in the boardroom, fighting great battles against the firebreathing dragons of Ad Hoc Design and the evil wizards of Commercial Off-The-Shelf Software. The money was great, and all the ordinary people on the development teams would bow down to me and call me 'Sir'.

Then, about a decade ago, everything changed.

A man called Sir Kent of Beck wrote a book telling the ordinary people that dragons and wizards don't actually exist, and that the songs I'd been singing of my bravery in battle had no basis in reality. 'You', Sir Kent told them, 'are the ones who fight the real battles.'

And so it was that the ordinary people started coming up with their own songs of bravery, and doing their own software designs. If a damsel in distress needed saving, they would just save her, and not even wait for me - their brave knight - to at the very least sit astride my white horse and look handsome while they did it.

I felt dejected and rejected; they didn't need me any more. Ever since then, I have wandered the land, sobbing quietly to myself in my increasingly rusty armour, looking for a kingdom that needs a brave knight who can sings songs about fighting dragons and who looks good on a horse.

Do you know of such a place?

Yours sincerely,

Sir Rational of Rose."

I can sympathise with your story, Sir Rational. I, too, was once a celebrated knight of the UML Realm about whom many songs were sung of battles to the death with the Devils of Enterprise Resource Planning. And, I, too, found myself marginalised and ignored when the ordinary folk started praying at the altar of Agile.

And, I'm sorry to say, there are lots of kingdoms these days where little thought is given to design or to architecture. You can usually spot them from a distance by their higgledy-piggledy, ramshackle rooftops, and the fact that they dug the moat inside the city walls.

But, take heart; there are still some kingdoms where design matters and where architecture is still a "thing". Be warned, though, that many of them, while they may look impressive from the outside, are in fact uninhabited. Such cities can often be distinguished by gleaming spires and high white walls, beautiful piazzas and glistening fountains, and the fact that nobody wants to live there because they built it in the wrong place.

You don't want to go to either of those kinds of kingdom.

Though very rare, there are a handful of kingdoms where design matters and it matters that people want to live in that design. And in these places, there is a role for you. You can be relevant once again. Once again, people will sing songs of your bravery. But you won't be fighting dragons or wizards in the boardroom. You'll be a different kind of knight.

You'll be fighting real battles, embedded in the ranks of real soldiers. Indeed, you'll be a soldier yourself. And you'll be singing songs about them.




October 15, 2014

Software Craftsmanship 2015. Yes, You Heard Right.

After much discussion and beer and brainstorming and beer and then more beer, I've decided that there will be a Software Craftsmanship 2015.

It's going to be a little different to the previous 4 SC conferences, in some key ways:

Firstly, not only will there be no presentations (as usual), there will be no sessions at all. How is this possible? Well, the fun in SC20xx was always in the doing, so SC2015 will be 100% doing. Instead of proposing sessions, I'll be asking people to devise problems, games, challenges, projects - fun stuff that folk can pick up and pair with other folk on for as much of the day as they want. We'll provide the space, the Wi-Fi and the sausages (yes, there will be sausages). You provide the laptop and your brains and enthusiasm.

Each mini-project will have a point to it. This won't be just some random hackathon. They'll be devised to exercise some of your code crafting muscles, hopefully in entertaining and interesting ways. There'll be a handout that explains it, links to useful resources, and the person/people who devised the challenge will be nearby and easily identifiable, ready to answer questions and give feedback. They will, of course, have completed the challenge themselves beforehand. SC20xx has never been home to people who only talk the talk.

So, with a good wind behind us, you will learn something. But in your own time and at your own pace. How you structure your day will be entirely up to you. We'll open the doors, say "howdy" to y'all and then point you to the rooms/spaces where different kinds of challenges can be found, and off you go. There'll be scheduled coffee breaks and lunch and wotnot, naturally, but your day is your own. Want to grab a challenge and then go sit outside with a chum and do it on the lawn? Dandy.

You can spend an hour on each challenge you like the look of, or all day doing the same challenge, or do a 1-hour challenge several times, with different people, wearing different disguises. It's entirely up to you. The estimated time needed will be clearly sign-posted, so you don't pick an 8-hour project thinking it's going to take half an hour (or vice-versa.)

In the evening, as is the tradition for SC20xx, there'll be diversions, and sausages. Or sausages and diversions, depending on your priorities. Or diverting sausages, if that's your sort of thing.

Of course, there will be discussions. As well as the usual technical stuff we like to natter about, I'm going to provide "Conversation Cards" - yes, exactly like in Monty Python's The Meaning Of Life - on the theme of Developer Culture, Ethics & Diversity. If you want to find a space and have a debate, feel free. If you want to write a blog post based on one of the cards, fill your boots. If you just want to chat about it over coffee, all good. If you don't want to talk about it at all, no pressure. Nothing's compulsory.

Blog posts, GitHub wotsits, photos, videos and all manner of outputs from the day will be aggregated, collated, assigned to pastel-coloured categories and then fed to the huddled masses who are starving for a bit of quality.

It's likely to happen in the Spring, when the weather's warming up (I'd put my money on May). We'll be fielding proposals for challenges within the next few weeks, and all materials will need to be finalised a month before the event. I shan't call it a "conference", because that's the one thing it's not going to be.

So, there you have it: Software Craftsmanship 2015 (#sc2015), May-ish 2015, very probably at Bletchley Park, very probably all profits from ticket sales will fund programming clubs or something along those lines.

Any questions?



October 8, 2014

Programming Well (Part II)

This is a short series of blog posts aimed at teachers and students who are interested in progressing on to more challenging programming projects. Here we explore the disciplines programmers need to apply to make it easier to add new features and change existing features in our programs, which ultimately makes it easier for us to incorporate our learning and make our programs better.

------------------------------------------------------------------------------------------------------------

In my previous post, we looked at why it can be very helpful to be able to test (and re-test) our programs very frequently, and how automating those tests can enable this.

We also looked at why it's very important to write code that other programmers can easily understand.

In this post, we'll talk about how we break down our programs into smaller units, and how we can organise those units to make changing our programs easier.

3. Compose programs out of small, simple pieces that do one job each

Most modern programming languages give us mechanisms for breaking down our code into smaller, simpler and more manageable "chunks". This enables us to tackle more complex programming problems without having to consider the whole problem at once, and it also enables us to make chunks of our code reusable, so we don't have to keep repeating ourselves, and ultimately we write less code.

This principle is applicable at multiple levels of code organisation:
* Blocks of code can be broken down into functions that do a specific job
* Groups of related functions - functions that are used together to do a job - can be split out into their own modules
* Groups of related modules - modules that are used together for a purpose - can be packaged together into a unit of release (e.g., an executable program file, or a code library)

If our goal is to make change easier, breaking the code down into cohesive units like this - units of code that are used together, and are likely to change together - helps to localise the impact of changes.

We balance the need for cohesion in functions, modules and packages with our need to not repeat ourselves, reusing code whenever we get the chance to minimise how much code we have to write.

And it's important that each unit of code has only one reason to change. If I write a function that does two distinct jobs, I lose the ability to change how one of those jobs is done without affecting the other. I also lose the ability to reuse the code that does each of those jobs independently of the other.

Let's take a look at a couple of specific examples, one at the function level, another at the module level:
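
Imagine a Java function along these lines (the details here - file format, names and so on - are invented purely for illustration):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    public class RobotScoreboard {

        public int totalScore(String scoresFilePath) throws IOException {
            // read the scores from the file, one score per line
            List<String> lines = Files.readAllLines(Paths.get(scoresFilePath));
            // add up the total
            int total = 0;
            for (String line : lines) {
                total += Integer.parseInt(line.trim());
            }
            return total;
        }
    }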



In this example, the Java function has two distinct jobs: it reads the scores from a file, and then it calculates the total score.

This presents us with several potential barriers to change.

Firstly, this is harder to test. If I write an automated test for this function, it could fail for a whole bunch of reasons (wrong file path, data in file not formatted correctly, total calculated incorrectly, and so on.) So if anything breaks, I'm probably going to end up in the debugger trying to figure out what actually went wrong.

It also prevents me from changing how either of these jobs is to be done without potentially affecting the other. It's all in the same function, and functions either work as a whole or they don't. Break one part, you break all of it. Ideally, functions should only have one reason to fail.

It's also a barrier to reuse. Maybe when I want to calculate a total, I don't always want scores to be read from a file. Maybe when I want scores read from a file, it's not always because I want to calculate their total. I might want to reuse either of these chunks of code independently of the other.

ON DUPLICATION OF CODE: Reuse is how programmers remove duplication from their code. Instead of, say, copying and pasting a block of code when we want new code to do something similar, we can create a function that accepts parameters and reuse that instead, and we can group functions together into reusable modules, and modules into reusable packages, and so on. The benefit of doing this is that when we need to make a change to duplicated logic, we only need to make that change in one place. If we copied and pasted, we'd have to change it everywhere we pasted it - effectively multiplying the effort of making that change.

When I refactor this code so that each function has a distinct responsibility - only one reason to fail - we can see how the code becomes not only easier to test, but easier to change independently and to reuse independently, which makes accommodating change even easier going forward.
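
Refactored so that each job lives in its own function, it might look something like this (again, the specifics are only illustrative):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.List;

    public class RobotScoreboard {

        public int totalScore(String scoresFilePath) throws IOException {
            return total(readScores(scoresFilePath));
        }

        // Reads the scores from a file, one score per line
        private List<Integer> readScores(String scoresFilePath) throws IOException {
            List<Integer> scores = new ArrayList<>();
            for (String line : Files.readAllLines(Paths.get(scoresFilePath))) {
                scores.add(Integer.parseInt(line.trim()));
            }
            return scores;
        }

        // Adds up the total of a list of scores
        private int total(List<Integer> scores) {
            int result = 0;
            for (int score : scores) {
                result += score;
            }
            return result;
        }
    }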



In this refactored version of the code, these two distinct jobs of reading the scores from a file and adding up the total are delegated by the original function to two new functions, each with a distinct responsibility.

The function totalScore() is now just composed of calls to these other functions that do the actual work. totalScore doesn't know how the work is done, and reading scores and totalling scores know nothing about each other. As we'll see, composing our program in such a way that all the different bits - all playing their particular role - know as little as possible about how each other does their work becomes very important in making change easier.

Also notice that the two comments in the original function have disappeared? I took the opportunity to incorporate the useful information in the comments into the new functions I created. The original function now pretty much just tells the story that the comments told us. This is an example of how breaking our program down can also help us to make it easier to read and understand.

Which brings me to our second example, and...

4. Hide the details of how work is done, and make it so we can swap those details easily

In our refactored example above, the details of how scores are read from a file, and how those scores are totalled, are hidden from the totalScore() function. This gives us a degree of flexibility, allowing us to compose our program in different ways (e.g., we can total scores that weren't read from a file, or read scores for some other purpose.)

This flexibility - the ability to compose programs in different ways while making only small changes to the code - can become a very powerful tool for making change easier. The more ways there are to compose the working units of our program, the more things we can make our program do with less effort.

In object oriented programming, using languages like Java and Python, we can go a step further in achieving greater flexibility.

In our example, even though we now have reading scores and totalling scores neatly separated, there is still rigidity in our code. What if I want to read scores from different sources? Let's say, for example, that our program is intended to be used in different settings - sometimes a desktop application that stores the scores in a file on the hard drive, sometimes a web application that stores the scores in a database like MySQL or Microsoft Access. How could we organise our code so that we can actually choose our strategy for storing scores without having to change a lot of the code?
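
Here's a sketch of one way we might reorganise it. The names ScoresReader, Totaller and RobotScoreboard are the ones used in this example; the implementation details are assumed purely for illustration:

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.List;

    // The interfaces are all RobotScoreboard will know about
    interface ScoresReader {
        List<Integer> readScores();
    }

    interface Totaller {
        int total(List<Integer> scores);
    }

    // One implementation of ScoresReader: reads the scores from a file on disk
    class FileScoresReader implements ScoresReader {
        private final String scoresFilePath;

        FileScoresReader(String scoresFilePath) {
            this.scoresFilePath = scoresFilePath;
        }

        public List<Integer> readScores() {
            try {
                List<Integer> scores = new ArrayList<>();
                for (String line : Files.readAllLines(Paths.get(scoresFilePath))) {
                    scores.add(Integer.parseInt(line.trim()));
                }
                return scores;
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }

    public class RobotScoreboard {

        private final ScoresReader scoresReader;
        private final Totaller totaller;

        // The constructor is passed instances of the reader and the totaller,
        // referenced only by their interfaces - RobotScoreboard has no idea
        // exactly what kind of objects will be doing the work
        public RobotScoreboard(ScoresReader scoresReader, Totaller totaller) {
            this.scoresReader = scoresReader;
            this.totaller = totaller;
        }

        public int totalScore() {
            return totaller.total(scoresReader.readScores());
        }
    }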



Refactoring the code further, I can move the function that reads scores into its own class (an object oriented module from which we create instances - "objects" - that do the work), and the function that totals the scores into another class. Each class has a distinct job - a single responsibility.

Then - and this is the key to this trick - I set up an interface for each of those classes, and make it so that our RobotScoreboard only knows about the interfaces. I do this by passing in instances of the reader and the totaller in the constructor - a special function that we call when an object is created to properly initialise it before we can use it - referencing only the interfaces when I do. So RobotScoreboard has no knowledge of exactly what kind of object will be doing the work. All it knows is what functions it will be able to call on these objects, which is all the interface defines.

If I wanted to use my RobotScoreboard with scores stored in a MySQL database, I write a new class that implements the same ScoresReader interface, and pass an instance of that into the constructor instead.

This gives me the ability to substitute different implementations of ScoresReader and Totaller without having to make any changes to RobotScoreboard, and to compose the program in many different ways.

This is something that's very difficult to do in programming languages that don't have a built-in mechanism for substituting one implementation of a function or a module with another without changing the code that uses them. Which is why we very highly recommend that on more challenging programming projects, you use languages that support what we call polymorphism (that is, the ability for functions or modules to look the same from the outside but work differently on the inside).

DESIGNING FOR TESTABILITY: Another thing worth noting in this example is that, by introducing this extra level of flexibility, it now potentially becomes easier to write unit tests for our RobotScoreboard. Recall from the previous blog post how important it can be that we test our program as frequently as possible. For this to be possible, our tests need to run quickly. Reading data from files, or from databases, takes considerable computing resources, and tests that read files or depend on databases tend to run slowly. In our refactored version of the code, it now becomes possible for me to write a fake implementation of the ScoresReader - one that just builds a list of scores in program code, rather than reading it from a file - purely for the purposes of our tests. Making our programs easier and quicker to test automatically is another great benefit of organising our code in this sort of way.
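
For instance, a fake reader might look something like this sketch, building on the interfaces above (JUnit 4 style; SimpleTotaller here is just a hypothetical, straightforward implementation of Totaller):

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;
    import java.util.List;

    import org.junit.Test;

    public class RobotScoreboardTest {

        // A fake ScoresReader purely for testing: no files, no database,
        // just a list of scores built in program code
        static class FakeScoresReader implements ScoresReader {
            private final List<Integer> cannedScores;

            FakeScoresReader(Integer... scores) {
                this.cannedScores = Arrays.asList(scores);
            }

            public List<Integer> readScores() {
                return cannedScores;
            }
        }

        // A plain, straightforward Totaller implementation
        static class SimpleTotaller implements Totaller {
            public int total(List<Integer> scores) {
                int result = 0;
                for (int score : scores) {
                    result += score;
                }
                return result;
            }
        }

        @Test
        public void totalsTheScoresItReads() {
            RobotScoreboard scoreboard =
                    new RobotScoreboard(new FakeScoresReader(10, 20, 30), new SimpleTotaller());

            assertEquals(60, scoreboard.totalScore());
        }
    }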

Inexperienced programmers often have a problem with organising their programs into small, reusable, substitutable chunks like this. They see lots of new little functions and new modules and say "It's more complicated". But if you look closely, at the individual "chunks", you see that they are each simpler. When we lump all our code together into big functions that do lots of things, and big modules with many, many functions, we lose a great deal of flexibility.

The key is to keep the individual parts as simple as we can, and to compose our programs out of them in such a way that we can more easily swap parts with new or different parts with less impact on the rest of the code.


In the next blog post, we'll look at how we manage our code and how we can more effectively work with other programmers on the same programs.





October 7, 2014

Programming Well Part I - A Basic Guide For Students & Teachers

This is the first part of a series of blog posts aimed at new programmers who have gotten the hang of the basics and want to progress on to more challenging projects, as well as teachers and mentors who may be helping people to progress as programmers. I'll add more instalments as time allows.

-----------------------------------------------------------------------------------------------------------------

If you're a teacher working with the new computing curriculum, then you probably have a lot on your plate.

But there are some things I would like you to think about on the subject of programming that, as far as I'm aware, aren't covered in the curriculum. I believe they should be, because they're actually very important things every programmer should know.

Once a programmer has learned enough to write simple programs that successfully compile and run, and kind-of, sort-of work, they are ready to move on to tackling more complex problems and more ambitious projects.

This is where it all can get, well, complicated. Programming is like Jenga. Programs are easy to start, but can be very difficult to keep going.

As the code gets more complicated, we run into difficulties: in particular, it gets harder and harder to make any additions or changes to the code without breaking it. You may be surprised to learn that changing code without breaking it is the predominant focus of most programmers.

And if there's one thing we know about software, it's that it is going to change. That's why we call it "software". We write a program that we think solves our problem. Then we try to use it, and inevitably we learn how we could do it better. Then we go back and change the code to incorporate what we learned. It's one of the main reasons why we often wait for "Version 2.0" of a program before we consider adopting it. When change is difficult, software doesn't improve.

And you might also be surprised how quickly you can run into difficulties. I've run workshops where the code attendees wrote in the morning was holding them back in the afternoon.

For these reasons, professional programmers understand only too well the importance of programming well, even on what might seem like trivial projects.

What is "programming well"?

Let's break it down into a handful of simple disciplines (simple to understand, not so simple to master, as it turns out):

1. Test Continuously

By far the most empowering tool a programmer can have is the ability to find out if a change they just made broke their program pretty much straight away. If we break it and don't find out until later, it can take much more time to debug. For starters, we're no longer working on that part of the code, so it's not fresh in our minds. We've got to go back, read the code and understand it again - a bit like loading it into main memory - before we can fix it. And if we've made a bunch of changes since then, then it can be much harder to pinpoint what went wrong.

For this reason, we find it's a good idea to test the program as frequently as we can - ideally we'd want to re-test it after making a single change to the code.

Doing this by hand over and over again can be very laborious, though. So we automate those tests. Basically, we write programs that test our programs. This practice is very highly recommended. And it can be done in pretty much any programming language - Visual Basic, Python, C#, Java, FORTRAN, C++, you name it. If we can call functions we've written in our program, we can write code that drives our program - using those functions - and asks questions about the outcomes.

There are special frameworks that can help us to do this, and they're available for most programming languages (e.g., PyUnit for Python is a "unit testing" tool that lets us automatically test discrete chunks of our software).

Automated tests, at the end of the day, are just programs. If the kids can write programs, they can write automated tests for those programs. Simples.

Here's an example of an automated unit test for a Java program:
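
The sketch below uses JUnit 4; the DartsPlayer class and its method names are assumed purely for illustration.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class DartsPlayerTest {

        @Test
        public void threeTrebleTwentiesAreSubtractedFromTheStartingScore() {
            // Set-up: a player starting on 501
            DartsPlayer player = new DartsPlayer(501);

            // Action: take 3 throws, each scoring treble 20 (60 points)
            player.throwDart(60);
            player.throwDart(60);
            player.throwDart(60);

            // Assertion: the current score should now be 501 - 180 = 321
            assertEquals(321, player.currentScore());
        }
    }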



Tests have three components:

* A set-up - that initialises the test data or objects we intend to use for the particular test case we're interested in. In this example, we set up a DartsPlayer object with an initial score of 501, then take 3 throws, each scoring treble 20.

* An action - this is the code that we want to test. In this case, we're testing that throwing darts correctly changes the player's current score

* An assertion - we ask a question about what the output or outcome of the action should be in that particular set-up

Try it. I suspect you may find it makes a huge difference to how easy the going is, especially on more ambitious projects.

2. Write code other programmers can understand

Here's a fun fact for you: what do programmers spend most of their time doing? Reading code. It's estimated we spend between 50% and 80% of our time just reading programs and trying to understand them. Understanding programs is the bulk of the work in programming.

And I'm not just talking about other people's code either. I've spent many a joyful hour looking at code I wrote - months ago, weeks ago, yesterday, before lunch - and wondering what the hell I was thinking.

Most of the programming we do is on programs that already exist. We're adding new features, changing existing features, fixing bugs, making it run faster or use less memory or disk space... There are all sorts of reasons why we spend most of our time wrestling with existing code. Add other programmers (hell is "other programmers"), and that effect magnifies to epic proportions. Working on projects with our friends, or collaborating with programmers out there on Teh Internets, makes readable code a very high priority. We're not just communicating with the computer, we're communicating with each other.

The best kind of code clearly tells the story of what the code does.

Look at this very simple example:
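
For instance (an invented stand-in, purely for illustration):

    public class Util {

        public static double calc(double a, double b) {
            return a + (a * b / 100);
        }
    }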



At first glance, it may not be entirely obvious what this code does. Programs give us an opportunity to make them more self-explanatory by choosing more meaningful names for the things in our code - functions, variables, constants, modules and so on. If I refactor this code to make it more readable ("refactoring" is the art of improving our code to, for example, make it more readable, without changing what it does), I might end up with something like:
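
Continuing the invented example, the refactored version might read:

    public class PriceCalculator {

        public static double priceIncludingTax(double netPrice, double taxRatePercent) {
            return netPrice + (netPrice * taxRatePercent / 100);
        }
    }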



Some people - silly people - will try to convince you that the way to make programs easier to understand is through the use of comments. In practice, we find comments are often less than helpful. Firstly, they clutter up the code, which can make it even harder to read. And, like all documentation, comments can get out of step with the code being described. Comments can be an indicator that the code is difficult to read, and we've learned that it's better to try and tackle that issue directly rather than rely on comments or other kinds of documentation in all but the most extreme cases.

In the next blog post, we'll look at how we break our programs down, and a couple of simple principles for organising our programs that can make them easier to change.