July 16, 2016


Taking Agile To The Next Level

As with all of my Codemanship training workshops, there's a little twist in the tail of the Agile Software Development course.

Teams learn all about the Agile principles, the Agile manifesto, Extreme Programming, and Scrum, as you'd expect from an Agile Software Development course.

But they also learn why all of that ain't worth a hill of beans in reality. The problem with Agile, in its most popular incarnations, is that teams iterate towards the wrong thing.

Software doesn't exist in a vacuum, but XP, Scrum, Lean and so forth barely pay lip-service to that fact. What's missing from the manifesto, and from the implementations of the manifesto, is end goals.

In his 1988 book, Principles of Software Engineering Management, Tom Gilb introduced us to the notion of an evolutionary approach to development that iterates towards testable goals.

On the course, I ask teams to define their goals last, after they've designed and started building a solution. Invariably, more than 50% of them discover they're building the wrong thing.

Gilb's book had a big influence on me, and I devoted a lot of time in the late 90s and early 00s to exploring and refining these ideas.

Going beyond the essential idea that software should have testable goals - based on my own experiences trying to do that - I soon learned that not all goals are created equal. It became very clear that, when it comes to designing goals and ways of testing them (measures), we need to be careful what we wish for.

Today, the state of the art in this area - still relatively unexplored in our industry - is a rather naïve and one-dimensional view of defining goals and associated tests.

Typically, goals are just financial, and a wider set of perspectives isn't taken into account (e.g., we can reduce the cost of manufacture, but will that impact product quality or customer satisfaction?)

Typically, goals are not caveated by obligations on the stakeholder that benefits (e.g., the solution should reduce the cost of sales, but only if every sales person gets adequate training in the software).

Typically, the tests ask the wrong questions (e.g., the airline that measured speed of baggage handling without noticing the increase in lost or damaged property and insurance claims, and then mandated that every baggage handling team at every airport copy how the original team hit their targets).

Now, don't get me wrong: a development team with testable goals is a big improvement on the vast majority of teams who still work without any goals other than "build this".

But that's just a foundation on which we have to build. Setting the wrong goals, implemented unrealistically and tested misleadingly, can do just as much damage as having no goals at all. Ask any developer who's worked under a regime of management metrics.

Going beyond Gilb's books, I explored the current thinking from business management on goals and measures.

Balancing Goals

First, we need to identify goals from multiple stakeholder perspectives. It's not just what the bean counters care about. How often have we seen companies ruined by an exclusive focus on financial numbers, at the expense of retaining the best employees, keeping customers happy, being kind to the environment, and so on? We're really bad at considering wider perspectives. The law of unintended consequences can be greatly magnified by the unparalleled scalability of software, and there may always be side-effects. But we could at least try to envisage some of the most obvious ones.

Conceptual tools like the Balanced Scorecard and the Performance Prism can help us to do this.
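
As a rough illustration (in Python, with made-up perspectives and measures, not any official scorecard schema), a balanced set of goals might be captured as data rather than as a single financial target:

# Hypothetical goal set: each stakeholder perspective gets its own goals
# and measures, so no single number can dominate.
balanced_goals = {
    "financial":   [("Reduce cost of sales", "cost per completed sale")],
    "customer":    [("Improve customer satisfaction", "repeat purchase rate")],
    "employee":    [("Retain experienced staff", "voluntary turnover per year")],
    "environment": [("Cut energy use", "kWh per transaction")],
}

def is_balanced(goals):
    # A crude check: more than one perspective actually has goals attached.
    return sum(1 for gs in goals.values() if gs) > 1

assert is_balanced(balanced_goals)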

Back in the early 00s, I worked with people like Mike Bourne, Professor of Business Performance Innovation, to explore how these ideas could be applied to software development. The two fields proved highly compatible, but the results were still - more than a decade later - evidently ahead of their time.

Pre-Conditions

If business goals are post-conditions, then we - above all others - should recognise that many of them will have pre-conditions that constrain the situations in which our strategy or solution will work. A distributed patient record solution for hospitals cannot reduce treatment errors (e.g., giving penicillin to an unconscious patient who is allergic) if the computers they're using can't run our software.

For every goal, we must consider "when would this not be possible?" and clearly caveat for that. Otherwise we can easily end up with unworkable solutions.
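
One way to keep that caveat visible is to treat the pre-condition as a guard on the measurement itself, so the goal is only tested in situations where it could possibly hold. A minimal sketch, using the hospital example above (all names and data invented):

# Hypothetical sites for the patient record system.
hospitals = [
    {"name": "Site A", "can_run_software": True,  "treatment_errors": 12},
    {"name": "Site B", "can_run_software": False, "treatment_errors": 30},
]

# Pre-condition: only hospitals whose machines can run the software count
# towards the "reduce treatment errors" post-condition. The rest tell us
# where the solution is unworkable, not that it failed.
eligible = [h for h in hospitals if h["can_run_software"]]
avg_errors = sum(h["treatment_errors"] for h in eligible) / len(eligible)
print(f"Average treatment errors (eligible sites only): {avg_errors}")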

Designing Tests

Just as with software or system acceptance tests, to completely clarify what is meant by a goal (e.g., improve customer satisfaction) we need to use examples, which can be worked into executable performance tests. Precise English (or French, or Chinese, etc.) just isn't precise enough.

Let's run with my example of "improve customer satisfaction"; what does that mean, exactly? How can we know that customer satisfaction has improved?

Imagine we're running a chain of restaurants. Perhaps we could ask customers to leave reviews, and grade their dining experience out of 10, with 1 being "very poor" and 10 being "perfect".

Such things exist, of course. Diners can go online and leave reviews for restaurants they've eaten at. As can the people who own the restaurant. As can online "reputation management" firms who employ armies of paid reviewers to make sure you get a great average rating. So you could be a very highly rated restaurant with very low customer satisfaction, and the illusion of meeting your goal is thus created.

Relying solely on online reviews could actively hurt your business if they invite a false sense of achievement. Why would service improve if it's already "great"?

If diners were genuinely satisfied, what would the real signs be? They'd come back. Often. They'd recommend you to friends and family. They'd leave good tips. They'd eat all the food you served them.
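
Turned into an executable performance test, those behavioural signs might look something like this sketch (the signals, weights and baseline threshold are all invented for illustration):

# Hypothetical record of one diner's visits.
visits = [
    {"returned_within_30_days": True, "plate_cleared": True, "tip_pct": 15, "referred_a_friend": False},
    {"returned_within_30_days": True, "plate_cleared": True, "tip_pct": 12, "referred_a_friend": True},
]

def satisfaction_score(history):
    # Score what diners do, not what online reviews say.
    n = len(history)
    signals = [
        sum(v["returned_within_30_days"] for v in history) / n,
        sum(v["plate_cleared"] for v in history) / n,
        sum(v["tip_pct"] >= 10 for v in history) / n,
        sum(v["referred_a_friend"] for v in history) / n,
    ]
    return sum(signals) / len(signals)

# The executable goal test: has behaviour improved on last quarter's
# (made-up) baseline?
assert satisfaction_score(visits) > 0.55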

What has all this got to do with software? Let's imagine a tech example: an online new music discovery platform. The goal is to amplify good bands posting good music, giving them more exposure. Let's call it "Soundclown", just for jolly.

On Soundclown, listeners can give tracks a thumbs-up if they like them, and a thumbs-down if they really don't. Tracks with more Likes get promoted higher in the site's "billboards" for each genre of music.

But here's the question: just because a track gets more Likes, does that mean more listeners really liked it? Not necessarily. As a user of many such sites, I see how mechanisms for users interacting with music and musicians get "gamed" for various purposes.

First and foremost, most sites tell the musician which listener Liked their track. This becomes a conduit for unsolicited advertising. Your track may have a lot of Likes, but that could just be because a lot of users would like to sell you promotional services. In many cases, it's evident that they haven't even listened to the track they're Liking (when you notice it has more Likes than it's had plays).

If I were designing Soundclown, I'd want to be sure that the music being promoted was genuinely liked. So I might measure how many times a listener plays a track all the way through, for example. The musical equivalent of "but did they eat it all?" and "did they come back for more?"
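
As a rough sketch (nothing here is a real Soundclown API; names and weights are invented), a promotion score based on listening behaviour rather than raw Likes might look like this:

# Hypothetical listening events and Likes for one track.
plays = [
    {"listener": "alice", "completed": True},
    {"listener": "alice", "completed": True},   # came back for more
    {"listener": "bob",   "completed": False},  # skipped halfway through
]
likes = ["promo_account_1", "promo_account_2", "alice"]

def promotion_score(plays, likes):
    listeners = {p["listener"] for p in plays}
    # Ignore Likes from accounts that never actually played the track.
    genuine_likes = [who for who in likes if who in listeners]
    complete_plays = sum(p["completed"] for p in plays)
    repeat_listeners = sum(
        1 for who in listeners
        if sum(p["listener"] == who and p["completed"] for p in plays) > 1
    )
    # "Did they eat it all?" and "did they come back for more?" count for
    # more than a thumbs-up.
    return complete_plays + 2 * repeat_listeners + len(genuine_likes)

print(promotion_score(plays, likes))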

We might also ask for a bit more proof than clicking a thumbs-up icon. One website I use keeps a list of my "fans", but are they really fanatical about my music? Judging by the tumbleweed when I alert my "fans" to new music, the answer is evidently "nope". Again, we could learn from the restaurant, and allow listeners to "tip" artists, conveying some kind of reward that actually costs the listener something.

Finally, we might consider removing all unintended reverse marketing channels. Liking my music shouldn't be an opportunity for you to try and sell me something. That very much lies at the heart of the corruption of most social networks: "I'm really interested in you. Now buy my stuff!"

This is Design. So, ITERATE!

The lesson I learned early is that, no matter how smart we think we've been about setting goals and defining tests, we always need to revisit them - probably many times. This is a design process, and it should be approached the same way we design software.

We can make good headway using a workshop format I designed many years ago.

Maintaining The Essence

Finally, and most important of all, our goals need to be expressed in a way that doesn't commit us to any particular solution. If our goal is to promote the best musicians, then that's our goal. We must always keep our eyes on that prize. It takes a lot of hard work and diligence not to lose sight of our end goals when we're bogged down in technical solution details. Most teams fail in that respect, and let the technical details become the goal.






