March 6, 2017

Learn TDD with Codemanship

Start With A Goal. The Backlog Will Follow.

The little pairing project I'm doing with my 'apprentice' Will at the moment started with a useful reminder of just how powerful it can be to start development with goals, instead of asking for a list of features.

As usual, it was going to be some kind of community video library (it's always a community video library, when will you learn!!!), and - with my customer role-playing hat on - I envisaged the usual video library-ish features: borrowing videos, returning videos, donating videos, and so on.

But, at this point in my mentoring, I'm keen for Will to get some experience of working from a wider perspective, so I insisted we start with a business goal.

I stipulated that the aim of the video library was to enable cash-strapped movie lovers to watch a different movie every day for a total cost of less than £100 a year. We fired up Excel and ran some numbers, and figured out that - in a group with similar tastes (e.g., sci-fi, romantic comedies, etc.) - you might need only 40 people to club together to achieve this: 40 members at £100 each gives a £4,000 annual budget, enough to buy a different title for every day of the year at £10 or so a disc.

This reframed the whole exercise. A movie club with 40 members could run their library out of a garden shed, using pencil and paper to keep basic records of who has what on loan. They could run an online poll to decide what titles to buy each month. They didn't really need software tools for managing their library.

The hard part, it seemed to us, would be finding people in your local area with similar tastes in movies. So the focus of our project shifted from managing a collection of DVDs to connecting with other movie lovers to form local clubs.

Out of that goal, a small feature list almost wrote itself. This is how planning should work; not with backlogs of feature requests, but with customers and developers closely collaborating to achieve goals.

It's similar in many ways to how TDD should work - in fact, arguably, it is TDD (except we start with business tests). When I'm showing teams how to do TDD, I advise them not to think of a design and then start writing unit tests for all the classes and methods and getters and setters they think they're going to need. Start with a customer test, and drive your internal design directly from that. Classes and methods and getters and setters will naturally emerge as and when they're needed.
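To make that concrete, here's a minimal sketch (illustrative Java, not code from the actual project): the customer test is written first, and the Library class and its methods are pulled into existence by making it pass, not designed up front.

    import java.util.HashSet;
    import java.util.Set;

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    // Illustrative only: the customer test came first, and Library's
    // donate() and isAvailable() emerged from making it pass.
    public class DonateMovieTest {

        static class Library {
            private final Set<String> titles = new HashSet<>();
            void donate(String title) { titles.add(title); }
            boolean isAvailable(String title) { return titles.contains(title); }
        }

        @Test
        public void donatedMovieIsAvailableToMembers() {
            Library library = new Library();
            library.donate("Alien");
            assertTrue(library.isAvailable("Alien"));
        }
    }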

When I run the Codemanship Agile Software Development workshop, we do it backwards to illustrate this point. Teams are tasked with coming up with internal designs, and then I ask them to write some customer tests afterwards. Inevitably, they realise their internal design isn't what they really needed. Stepping further back, I ask them to describe the business goals of their software, and at least half the teams realise they're building the wrong thing.

So, my advice... Ditch the backlog and start with a goal. The rest will follow.


December 28, 2016

Learn TDD with Codemanship

The Best Software Developers Know It's Not All 'Hacking'

A social media debate that appears to have been raging over the Xmas break was triggered by some tech CEO claiming that the "best developers" would be hacking over the holidays.

Putting aside just how laden with cultural assumptions that tweet was (and, to be fair, many of the angry responses to it), there is a wider question of what makes a software developer more effective.

Consider the same tweet but in a different context: the "best screenwriters" will spend their holiday "hacking" screenplays. There's an assumption there that writing is all there is to it. Look at what happens, though, when screenwriters become very successful. Their life experiences change. They tend to socialise with other movie people. They move out of their poky little downtown apartments and move into more luxurious surroundings. They exchange their beat-up old Nissan for a brand new Mercedes. They become totally immersed in the world of the movie business. And then we wonder why there are so many movies about screenwriters...

The best screenwriters start with great stories, and tell their stories in interesting and authentic voices. To write a compelling movie about firefighters, they need to spend a lot of time with firefighters, listening to their stories and internalising their way of telling them.

First and foremost, great software developers solve real problems. They spend time with people, listen to their stories, and create working solutions to their problems told in the end user's authentic voice.

What happens when developers withdraw from the outside world, and devote all of their time to "hacking" solutions, is the equivalent of the slew of unoriginal and unimaginative blockbuster special effects movies coming out of Hollywood in recent years. They're not really about anything except making money. They're cinema for cinema's sake. And rehashes of movies we've already seen, in one form or another, because the people who make them are immersed in their own world: movie making.

Ironically, if a screenwriter actually switched off their laptop and devoted Xmas to spending time with the folks, helping out with Xmas dinner, taking Grandma for a walk in her wheelchair, etc, they would probably get more good material out of not writing for a few days.

Being immersed in the real world of people and their problems is essential for a software developer. It's where the ideas come from, and if there's one thing that's in short supply in our industry, it's good ideas.

I've worked with VCs and sat through pitches, and they all have a Decline Of The Hollywood Machine feel to them. "It's like Uber meets Moonpig" is our equivalent of the kind of "It's like Iron-Man meets When Harry Met Sally" pitches studio executives have to endure.

As a coach, when I meet organisations and see how they create software, as well as giving the usual technical advice about shortening feedback cycles and automating builds and deployment, I increasingly find myself urging teams to spend much more time with end users, seeing how the business works, listening to their stories, and internalising their voices.

As a result, many teams realise they've been solving the wrong problems and ultimately building the wrong software. The effect can be profound.

I'll end with a quick illustration: my apprentice and I are working on a simple exercise to build a community video library. As soon as we saw those words "video library", we immediately envisaged features for borrowing and returning videos, and that sort of library-esque thing.

But hang on a moment! When I articulated the goal of the video library - to allow people to watch a movie a day for just 25p per movie - and we broke that down into a business model where 40+ people in the same local area who like the same genres of movies club together - it became apparent that what was really needed was a way for these people to find each other and form clubs.

Our business model (25p to watch one movie) precluded the possibility of using the mail to send and return DVDs. So these people would have to be local to wherever the movies were kept. That could just be a shed in someone's garden. No real need for a sophisticated computer system to manage titles. They would just need some shelves, and maybe a book to log loans and returns.

So the features we finally came up with had nothing to do with lending and returning DVDs, but were instead about forming local movie clubs.

I doubt we'd have come to that realisation without first articulating the problem. It's not rocket science. But still, too few teams approach development in this way.

So, instead of the "It's like GitHub, but for cats" ideas that start from a solution, take some time out to live in the real world, and keep your eyes and ears open for when someone says "You know what really annoys me?", because an idea for the Next Big Thing might follow.





November 27, 2016

Learn TDD with Codemanship

Software Craftsmanship is a Requirements Discipline

After the smoke and thunder of the late noughties software craftsmanship movement had cleared, with all its talk of masters and apprentices and "beautiful code", we got to see what code craft was really all about.

At its heart, crafting high quality code is about leaving it open to change. In essence, software craftsmanship is a requirements discipline; specifically, enabling us to keep responding to new requirements, so that customers can learn from using our software and the product can be continually improved.

Try as we might to build the right thing first time, by far the most valuable thing we can do for our customers is allow them to change their minds. Iterating is the ultimate requirements discipline. So much value lies in empirical feedback, as opposed to the untested hypotheses of requirements specifications.

Crafting code to minimise barriers to change helps us keep feedback cycles short, which maximises customer learning. And it helps us to maintain the pace of innovation for longer, effectively giving the customer more "throws of the dice" at the same price before the game is over.

It just so happens that things that make code harder to change also tend to make it less reliable (easier to break) - code that's harder to understand, code that's more complex, code that's full of duplication, code that's highly interdependent, code that can't be re-tested quickly and cheaply, etc.

And it just so happens that writing code that's easy to change - to a point (that most teams never reach) - is also typically quicker and cheaper.

Software craftsmanship can create a virtuous circle: code that's more open to change, which also makes it more reliable, and helps us deliver value sooner and for longer. It's a choice that sounds like a no-brainer. And, couched in those terms, it should be an easy sell. Why would a team choose to deliver buggier software, later, at a higher cost, with less opportunity to improve it in the next release?

The answer is: skills and investment. But that's a whole other blog post!




November 13, 2016

Learn TDD with Codemanship

Think About Workflows Before You Think About Features

Sunday morning's often my time for doing admin and logistical-type stuff, like booking flights or trains or hotels.

Today I searched for a hotel near a client's offices, then booked through the chain's website. After that I copied and pasted the hotel's address to search on Google Maps for a decent restaurant nearby, and then pasted the restaurant's name into Trip Advisor to see reviews. Satisfied, I then pasted the hotel's postcode into an app for booking a taxi to the client site on the first morning because it was 2 miles away (and I find walking to strange places in strange towns a bit risky on the first day).

Finally, I searched on Trainline and booked reserved seats, because it's way cheaper to do in advance.

In all, I used 6 different websites (one of them twice), across 7 steps:

1. Google Maps (to find a hotel near client offices)
2. Hotel website to book a room
3. Google Maps again to find a restaurant nearby
4. Trip Advisor to see reviews of the restaurant
5. Restaurant website to reserve a table for dinner
6. Taxi booking site
7. Trainline to find and reserve seats for a return journey

And it occurred to me that there were a lot of browser tabs and a lot of copying and pasting involved, creating the impression of some kind of workflow.

This workflow sits above the use cases of Google Maps, the hotel's website, Trip Advisor, the restaurant's website, the taxi booking site, and Trainline.

We can visualise how this workflow maps on to the 6 different software systems involved, using a simple technique borrowed from the book Business Modeling with UML (and even as I type this, I'm copying and pasting the URL to the book, part of yet another higher-level workflow):



The reason I'm mentioning all of this is three-fold: firstly, if I were designing these systems, I would find this kind of context very useful. Right now, to move from one step in the workflow to the next, I have to open a new browser tab, go to the right web page, and paste in data from a previous step. Clunky! Studying these real-world workflows can help us to smooth out those kinks. For example, if we found that 50% of people who book hotel rooms on our site then go on to search for nearby restaurants, we could add a link that takes them straight to Google Maps with the data already filled in.
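By way of illustration, that "smoothing" could be as simple as generating a link that carries the hotel's address into the next step. (A sketch, not anyone's production code; the URL format is Google's documented Maps search URL, everything else here is made up.)

    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;

    // After the booking confirmation, offer the user their likely next step:
    // a nearby-restaurant search seeded with the hotel's address.
    public class NextStepLink {

        static String restaurantsNear(String hotelAddress) {
            String query = URLEncoder.encode("restaurants near " + hotelAddress,
                    StandardCharsets.UTF_8);
            return "https://www.google.com/maps/search/?api=1&query=" + query;
        }

        public static void main(String[] args) {
            // Hypothetical address, purely for demonstration
            System.out.println(restaurantsNear("1 High Street, Anytown AN1 1AA"));
        }
    }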

Secondly, and more importantly, when software design's done right, we start by thinking about business goals and about business workflows, not system features. This technique helps us to identify system use cases from the outside in, starting with the business context. That's as it should be.

And thirdly, it makes good business sense, when we're designing applications, to ask ourselves "where are the users coming from?" Have they just booked a hotel? Did they just book a flight? Work your way backwards and build relationships with the developers of software that might - indirectly via search engines, perhaps - be sending users our way, so we can work with third parties to smooth that path. And, likewise, smooth the user's path to downstream activities in their workflow. What are they likely to want to do next? In sales and marketing, it's essential to understand what triggers buyers' needs, so you can pitch your tent right outside that triggering event, so to speak. There's a reason why late-night petrol stations sell flowers and firelighters, and it has absolutely nothing to do with selling petrol.

Arguably, this is how the web should work now. Not links to content, but controls for triggering actions in other people's systems - an event-based (rather than content-based) model. But it doesn't. So we'll have to hand-wire that sort of thing ourselves.

And this isn't just relevant to web-based workflows, of course. An end-to-end sales scenario in your company may involve multiple workflows, enacted using multiple systems. Work to understand those workflows, and smooth the user's path from one step to the next. See them copying and pasting data from one system to another? That's a hint that there's more automation to be done.


November 6, 2016

Learn TDD with Codemanship

Real Immersion is the Key to Understanding Your Customer's Needs



I've long been a believer that the key to producing good solutions is understanding the problem.

This is something that developers will nod their heads in agreement at. But I'm not sure they fully grasp what I mean by "understanding the problem".

I have a background in model-driven approaches to requirements analysis and solution design, and am no stranger to the meetings, documents and diagrams dev teams use to describe problem domains. I would even go so far as to claim to be something of a dab hand (in a previous life).

And that's how I know that traditional approaches to domain analysis are usually entirely insufficient to the task. To illustrate what I mean, imagine a non-developer sat and listened to you explain how you do software development. They write copious notes. They draw copious diagrams with boxes and arrows on them. And at the end of all that, do they really understand how you create software? Is an explanation sufficient?

And if they came back and said "we've come up with a better way of developing software", would you find that credible?

Now let's turn the tables back the usual way. You're asked to write some software for, say, an estate agent. You sit in meeting rooms while they explain estate agency to you. You take copious notes. You draw copious diagrams. And then you come back to them and say "we have solved your problem". Credible?

What gets missed in all those meetings and all those documents and diagrams is the complexity and the nuance. Submerged beneath the explanations is an iceberg of tacit knowledge. That's the iceberg that sinks many projects.

My own experience has taught me that teams have come up with much better solutions after they've seen things for themselves, and - even better - tried things for themselves. If an estate agent could write software, they'd probably write much better solutions for estate agents than we could.

What developers need to do is become the end user, even if it's just for a while. Shadow domain experts and watch them do their work. Try key tasks for yourself, if that's possible. Books on software analysis and design talk about "immersing" ourselves in the problem domain. But I do not think that word means what they think it means. To me, it means becoming the customer - walking a mile or three in their shoes.

Some domains, of course, are very complicated. Walking a mile in a brain surgeon's shoes is not practical. It would take years to understand things as well as a brain surgeon does. In these specialisms, we should probably approach things from the other direction: some brain surgeons should learn to write software.

But most problem domains - most jobs - aren't as complex as that. It maybe takes a day or two to learn how to operate a supermarket checkout. It maybe takes a week or two to learn how to schedule classes in a school.

In the majority of cases, it's practical for developers to spend time on the "shop floor", observing what goes on in the business activities the software will be used in, and learning how to execute those jobs themselves to at least a basic level.

Not only can it clear up a lot of potential misunderstandings - I've seen months of business analysis blown away by just a few hours spent on the shop floor - but it can also help us to empathise with our end users, as well as build relationships with them that will help us when we need their input and feedback. It may also make us think twice about creating software that will affect these people adversely.

The meetings and the diagrams happen as a consequence of this real immersion. Discussions about what can be improved happen a lot faster when every dog has seen the rabbit.

Of course, there'll be lots of devs still nodding in agreement. It is good form to care about the end users and about solving real problems that make their lives easier.

But, although the results of real immersion can be spectacular, you may have to drag teams kicking and screaming to experience it. We're highly educated professionals, not supermarket checkout operators, goddamit!

In XP, I've found that this is the first step to an on-team customer: making yourself the customer, if just for a while. Climbing inside their heads requires us to start by experiencing first-hand what it's like to be them. For anything but the most rudimentary of tasks - and, as programmers who automate stuff, we should know by now that there's no such thing as a "rudimentary task" - we can no more build a true understanding of the problem through explanations than we could build a true understanding of the colour orange from reading books.

Now, I get on this hobby horse every once in a while, and developers nod their heads and mutter words of support, and nothing changes. I can't stress this enough, though: this is some powerful voodoo. It makes an enormous difference.

You will no doubt agree in principle, but will always find an excuse why this is not possible or desirable in your team. The hidden objection, once you strip away all the excuses is simply: "we don't want to do this".

Try it. I dare you.



October 11, 2016

Learn TDD with Codemanship

Code Quality is a Requirements Issue

The most fundamental aspect of Agile Software Development is responding to change. Whatever software we deliver today, the real value is in what we can learn from that, so we can deliver something even better tomorrow.

It's nature's search algorithm: evolutionary design. So it should come as no surprise that the cost of changing software is pivotal to our ability to succeed. The more change costs, the less change we can accommodate. The less change we can accommodate, the slower we learn. Simples.

In this respect, keeping the cost of changing software down can be thought of as a requirements discipline - every bit as much as user stories and Specification By Example. Indeed, if we're being genuinely Agile, iterating is the requirements discipline, and everything else is about tweaking our seed values to make the search a little more efficient.

As so many have commented in recent years, failing on code quality means failing on agility. You can Lean ScrumBan all you like. But there's more to Agile than turning up to planning meetings.



July 23, 2016

Learn TDD with Codemanship

On The Compromises of Acceptance Test-Driven Development

I'm currently writing a book on Test-Driven Development to accompany the redesigned training workshop. Having thought very hard about TDD for many years, the first 140 pages were very easy to get out.

But things have - predictably - slowed down now that I'm on the chapter on end-to-end TDD and driving internal designs from customer tests.

The issue is that the ways we currently tackle this are all compromises, and there are many gods that need appeasing, just as there are many ways that folk do it.

- Some developers will write, say, a failing FitNesse test and come up with an implementation to pass that test.
- Some will write a failing automated customer test and then drive an internal design using unit tests and "classic TDD".
- Some will write a failing automated customer test that makes all the assertions about desired outcomes (e.g., "the donated DVD should be in the library"), and rely entirely on interaction tests to drive out the internal design using mock objects.
- Some will use test doubles only for external dependencies, ensuring their automated customer test runs faster.
- Some will include external dependencies and use their automated customer test to do integration testing as well.
- Some will drive the UI with their automated customer tests, effectively making them complete end-to-end system tests.
- Some will drive the application through controllers or services, excluding the UI as well as external back-end dependencies, so they can concentrate on the internal design.

And, of course, some won't automate their customer tests at all, relying entirely on their own developer tests for design and regression testing, and favouring manual by-eye confirmation of delivery by the customer herself.

And many will use a combination of some or all of these approaches, as required.

In my own approach, I observe that:

a. You cannot automate customer acceptance. The most important part of ATDD is agreeing the test examples and getting the customer's test data. Making those tests executable through automation helps to eliminate ambiguity, but really we're only doing it because we know we'll be running those tests many times, and automating will save us time and money. We still have to let the dog see the rabbit to get confirmation of acceptance. The customer has to step through the tests with working software and see it for themselves at least once.

b. Non-executable customer tests can be ambiguous, and manually reconciling customer-provided data with unit test parameters can be hit-and-miss.

c. The customer rarely, if ever, gets involved with writing "customer tests" using the available tools like FitNesse and Cucumber. We're probably kidding ourselves that we even need a special set of tools distinct from the xUnit frameworks we would use for other kinds of tests, because - chances are - we're going to be writing those tests ourselves anyway.

d. Customer tests executed using these tools tend to run slow, even when external dependencies are excluded.

e. Relying entirely on top-level tests to check that the work got done right can - and usually does - lead to problems with maintainability later. We might identify a class that could be split off into a component to be reused in other applications, but where are its functional tests? Imagine we could only test a car radio when it's installed in a Ford Mondeo. This is especially pertinent for teams thinking about breaking down monolithic architectures into component-based or service-based designs.

f. When you exclude the UI and external dependencies, you are still a long way from "done" after your customer test has passed. There's many a slip twixt cup and lip.

g. Once we've established a design that passes the customer's test, the main purpose of having automated tests is to catch regressions as the code evolves. For this, we want to be able to test as much of our code as quickly and cheaply as possible. Over-reliance on slower-running customer tests can be at odds with this goal.

With all this in mind, and revisiting the original goal of driving designs directly from the customer's examples, it's difficult to craft a workable single narrative about how we might approach this.

I tend to automate a "happy path" test at the entry point to the domain model, drive an internal design mostly through "classic" TDD, and use test doubles (stubs, mocks and dummies) to exclude external dependencies (as well as fake complex components I don't want to get into yet - "fake it 'til you make it"). A lot of edge cases get dealt with only in unit tests and with by-eye customer testing. I will work to pass one customer test assertion at a time, running the FitNesse test to get feedback before moving on to the next assertion.
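To give a flavour of the test doubles involved, a hand-rolled stub for an external dependency might look like this (names hypothetical; the shape is the point, not the specifics):

    // The domain model depends on an abstraction; the stub stands in for
    // the real payment gateway so the happy-path customer test stays fast
    // and deterministic.
    interface PaymentGateway {
        boolean charge(String memberId, int pence);
    }

    class StubPaymentGateway implements PaymentGateway {
        @Override
        public boolean charge(String memberId, int pence) {
            return true; // always succeeds - all the happy path needs
        }
    }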

This does lead to three issues:

1. It's not a system test, so there's still more TDD to do after passing the customer's test

2. It produces some duplication of test code, as the customer test will usually ask some of the same questions as the unit tests I write for specific behaviours

3. Even excluding the UI and external dependencies, they still run much slower than a unit test

I solve issue #3 by adapting my FitNesse fixtures to also be JUnit tests that can be run by me as part of continuous regression testing (see an example at https://gist.github.com/jasongorman/74f6a0a049e03b7030ab46e8b01128e7 ). That test is absolutely necessary, because it's typically the only place that checks that we get all of the desired outcomes from a user action. It's the customer test that drives me to wire the objects doing the work together. I prefer to drive the collaborations this way rather than use mock objects, because I have found over the years that an over-reliance on mocks can lead to maintainability issues. I want as few tests as possible that rely on the internal design.
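The gist linked above shows the real thing; the rough shape is this (a hypothetical fixture for the video library example, assuming a Library class like the one sketched earlier):

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    // A FitNesse-style fixture that doubles as a JUnit test: FitNesse
    // populates the public field and calls the query method, while JUnit
    // runs the @Test method, exercising exactly the same wiring.
    public class DonateDvdFixture {

        public String title;

        public boolean dvdIsInLibrary() {
            Library library = new Library();
            library.donate(title);
            return library.isAvailable(title);
        }

        @Test
        public void donatedDvdShouldBeInLibrary() {
            title = "The Abyss";
            assertTrue(dvdIsInLibrary());
        }
    }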

Being honest, I don't know how to easily solve issue #2. It would require the ability to compose tests so that we can apply the same assertions to different set-ups and actions. I did experiment with an Assertion interface with a check() method, but ending up with every assertion having its own implementation just got kerrrazy. I think what's actually needed is a DSL of some kind that hides all of that complexity.
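The idea, roughly, was this (a sketch with hypothetical types, not the actual experiment's code):

    import java.util.List;

    // One implementation per assertion is what got out of hand; composition
    // was the part worth keeping, since it lets the same checks be applied
    // to different set-ups and actions.
    interface Assertion<T> {
        void check(T outcome);

        static <T> Assertion<T> allOf(List<Assertion<T>> assertions) {
            return outcome -> assertions.forEach(a -> a.check(outcome));
        }
    }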

On issue #1, I've long understood that passing an automated customer test does not mean that we're finished. But there is a strong need to separate the concerns of our application's core logic from its user interface and from external dependencies. Most UIs can actually be unit tested, and if you implement an abstraction for the UI logic, the amount of actual code that directly depends on the UI framework tends to be minimal. All you're really doing is checking that logical views are rendered correctly, and that user actions map correctly onto their logical event handlers. The small sliver of GUI code that remains can be driven by integration tests, usually.
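A minimal sketch of that separation (all names hypothetical): the presenter owns the UI logic and can be unit tested with a fake view, leaving only a thin sliver of real GUI code behind the interface.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class LoanPresenterTest {

        // The real GUI class implementing LoanView stays a thin,
        // integration-tested sliver; everything interesting lives
        // in the presenter.
        interface LoanView {
            void showOverdueWarning(String title);
        }

        static class LoanPresenter {
            private final LoanView view;
            LoanPresenter(LoanView view) { this.view = view; }
            void loanInspected(String title, boolean overdue) {
                if (overdue) view.showOverdueWarning(title);
            }
        }

        @Test
        public void warnsWhenLoanIsOverdue() {
            StringBuilder shown = new StringBuilder();
            LoanPresenter presenter = new LoanPresenter(shown::append);
            presenter.loanInspected("Alien", true);
            assertEquals("Alien", shown.toString());
        }
    }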

I don't write system tests to test logic of any kind. The few that I will write - complicated and cumbersome as they usually are - really just check that, once the car is assembled, when you turn the key in the ignition, it starts. A dozen or more "smoke tests" tend to suffice to check that the thing works when everything's plugged in.

So I continue to iterate this chapter, refining the narrative down to "this is how I would do it", but I suspect I will still be dissatisfied with the result until there's a workable solution to the duplication issue.


April 30, 2016

Learn TDD with Codemanship

Goals vs. Constraints

A classic source of tension and dysfunction in software teams - well, probably all kinds of teams, really - is the relationship between goals and constraints.

Teams often mistake constraints for goals. A common example is when teams treat a design specification as a goal, and lose sight of where that design came from in the first place.

A software design is a constraint. There may be countless ways of solving a problem, but we chose this one. That's the very definition of constraining.

On a larger scale, I've seen many tech start-ups lose sight of why they're doing what they're doing, and degenerate into focusing 100% on raising or making the money to keep doing whatever it is they're doing. This is pretty common. Think of those charities that started out with a clear aim to "save the cat" or whatever, but fast-forward a few years and most - if not all - of the charity's efforts end up being dedicated to raising the funds to pay everybody and keep the charity going.

Now, you could argue that a business's goal is to make money, and that they make money in exchange for helping customers to satisfy their goals. A restaurant's goal is to make money. A diner's goal is to be fed. I give you money. You stop me from being hungry.

Which is why - if your organisation's whole raison d'être is to make a profit - it's vitally important to have a good, deep understanding of your customer's goals or needs.

That's quite a 19th century view of business, though. But even back then, some more progressive industrialists saw aims above and beyond just making a profit. At their best, businesses can provide meaning and purpose for employees, enrich their lives, enrich communities and generally add to the overall spiffiness of life in their vicinity.

But I digress. Where was I? Oh yes. Goals vs. constraints.

Imagine you're planning a trip from your home in Los Angeles to San Francisco. Your goal is to visit SF. A constraint might be that, if you're going to drive, you'll need enough gasoline for the journey.

So you set out raising money for gas. You start a lemonade stall in your front yard. It goes well. People like your lemonade, and thanks to the convenient location of your home, there are lots of passers-by with thirsts that need quenching. Soon you have more than enough money for gas. But things are going so well on your lemonade stall that you've been too busy thinking about that, and not about San Francisco. You make plans to branch out into freshly squeezed orange juice, and even smoothies. You get a bigger table. You hire an assistant, because there's just so much to be done. You buy a bigger house on the same street, with a bigger yard and more storage space. Then you start delivering your drinks to local restaurants, where they go down a storm with diners. 10 years later, you own a chain of lemonade stalls spanning the entire city.

Meanwhile, you have never been to San Francisco. In fact, you're so busy now, you may never go.

Now, if you're a hard-headed capitalist, you may argue "so what?" Surely your lemonade business is ample compensation for missing out on that trip?

Well, maybe it is, and maybe it isn't. As I get older, I find myself more and more questioning "Why am I doing this?" I know too many people who got distracted by "success" and never took those trips, never tried those experiences, never built that home recording studio, never learned that foreign language, and all the other things that were on their list.

For most of us - individuals and businesses alike - earning money is a means to an end. It's a constraint that can enable or prevent us from achieving our goals.

As teams, too, we can too easily get bogged down in the details and lose sight of why we're creating the software and systems that we do in the first place.

So, I think, a balance needs to be struck here. We have to take care of the constraints to achieve our goals, but losing sight of those goals potentially makes all our efforts meaningless.

Getting bogged down in constraints can also make it less likely that we'll achieve our goals at all.

Constraints constrain. That's sort of how that works. If we constrain ourselves to a specific route from LA to San Francisco, for example, and then discover halfway that the road is out, we need other options to reach the destination.

Countless times, I've watched teams bang their heads against the brick wall trying to deliver on a spec that can't - for whatever reason - be done. It's powerful voodoo to be able to step back and remind ourselves of where we're really headed, and ask "is there another way?" I've seen $multi-million projects fail because there was no other way - deliver to the spec, or fail. It had to be Oracle. It had to be a web service. It had to be Java.

No. No it didn't. Most constraints we run into are actually choices that someone made - maybe even choices that we made for ourselves - and then forgot that it was a choice.

Yes, try to make it work. But don't mistake choices for goals.



April 23, 2016

Learn TDD with Codemanship

Does Your Tech Idea Pass The Future Dystopia Test?

One thing that at times fascinates and at times appals me is the social effect that web applications can have on us.

Human beings learn fast, but evolve slowly. Hence we can learn to program a video recorder, but living a life that revolves around video recorders can be toxic to us. For all our high-tech savvy, we are still basically hominids, adapted to run from predators and pick fleas off of each other, but not adapted for Facebook or Instagram or Soundcloud.

But the effects of online socialisation are now felt in the Real World - you know, the one we used to live in? People who, just 3-4 years ago, were confined to expressing their opinions on YouTube are now expressing them on my television and making pots of real money.

Tweets are building (and ending) careers. Soundcloud tracks are selling out tours. Facebook viral posts are winning elections. MySpace users are... well, okay, maybe not MySpace users.

For decades, architects and planners obsessed over the design of the physical spaces we live and work in. The design of a school building, they theorise, can make a difference to the life chances of the students who learn in it. The design of a public park can increase or decrease the chances of being attacked in it. Pedestrianisation of a high street can breathe new life into local shops, and an out-of-town shopping mall can suck the life out of a town centre.

Architects must actively consider the impact of buildings on residents, on surrounding communities, on businesses, on the environment, when they create and test their designs. Be it for a 1-bed starter home, or for a giant office complex, they have to think about these things. It's the law.

What thought, then, do software developers give to the social, economic and environmental impact of their application designs?

With a billion users, a site like Facebook can impact so many lives just by adding a new button or changing their privacy policy.

Having worked on "Web 2.0" sites of all shapes and sizes, I have yet to see teams and management go out of their way to consider such things. Indeed, I've seen many occasions when management have proposed features of such breath-taking insensitivity to wider issues, that it's easy to believe that we don't really think much about it at all. That is, until it all goes wrong, and the media are baying for our blood, and we're forced to change to keep our share price from crashing.

This is about more than reliability (though reliability would be a start).

Half-jokingly, I've suggested that teams put feature requests through a Future Dystopia Test; can we imagine a dark, dystopian, Philip K Dick-style future in which our feature has caused immense harm to society? Indeed, whole start-up premises fail this test sometimes. Just hearing some elevator pitches conjures up Blade Runner-esque and Logan's Run-ish images.

I do think, though, that we might all benefit from devoting a little time to considering the potential negative effects of what we're creating before we create it, as well as closely monitoring those effects once it's out there. Don't wait for that hysterical headline "AcmeChat Ate My Hamster" to appear before asking yourself if the fun hamster-swallowing feature the product owner suggested might not be such a good thing after all.


This blog post is gluten free and was not tested on animals






April 20, 2016

Learn TDD with Codemanship

A* - A Truly Iterative Development Process

Much to my chagrin, despite my having promoted the idea for so many years, software development still hasn't caught on to the idea that what we ought to be doing is iterating towards goals.

NOT working through a queue of tasks. NOT working through a queue of features.

Working towards a goal. A testable goal.

We, as an industry, have many names for working through queues: Agile, Scrum, Kanban, Feature-driven Development, the Unified Process, DSDM... All names for "working through a prioritised list of stuff that needs to be done or delivered". Of course, the list is allowed to change depending on feedback. But the goal is usually missing. Without the goal, what are we iterating towards?

Ironically, working through a queue of items to be delivered isn't iterating - something I always understood to be the whole point of Agile. But, really, iterating means repeating a process, feeding back the results of each cycle, until we reach some goal. Reaching the goal is when we're done.

What name do we give to "iterating towards a testable goal"? So far, we have none. Buzzword Bingo hasn't graced the door of true iterative development yet.

Uncatchy names like goal-driven development and competitive engineering do exist, but haven't caught on. Most teams still don't have even a vague idea of the goals of their project or product. They're just working through a list that somebody - a customer, a product owner, a business analyst - dreamed up. Everyone's assuming that somebody else knows what the goal is. NEWSFLASH: They don't.

The Codemanship way compels us to ditch the list. There is no release plan. Only business/user goals and progress. Features and change requests only come into focus for the very near future. The question that starts every rapid iteration is "where are we today, and what's the least we could do today to get closer to where we need to be?" Think of development as a graph algorithm: we're looking for the shortest path from where we are to some destination. There are many roads we could go down, but we're particularly interested in exploring those that bring us closer to our destination.

Now imagine a shortest-path algorithm that has no concept of destination. It's just a route map, a plan - an arbitrary sequence of directions that some product owner came up with that we hope will take us somewhere good, wherever that might be. Yup. It just wouldn't work, would it? We'd have to be incredibly lucky to end up somewhere good - somewhere of value.
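To make the analogy concrete, here's a minimal, illustrative sketch of the search algorithm itself (plain Java, hypothetical graph). The goal and the heuristic - our sense of whether we're getting warmer or colder - are what steer the search; take them away and all that's left is blindly working through a queue.

    import java.util.*;

    // Minimal A* over a weighted graph. Without 'goal' and 'heuristic',
    // there's nothing to steer by - which is the point of the analogy.
    public class AStarSketch {

        record Edge(String to, double cost) {}

        static List<String> shortestPath(Map<String, List<Edge>> graph,
                                         String start, String goal,
                                         Map<String, Double> heuristic) {
            Map<String, Double> g = new HashMap<>();        // cheapest cost found so far
            Map<String, String> cameFrom = new HashMap<>(); // for path reconstruction
            PriorityQueue<String> open = new PriorityQueue<>(
                    Comparator.comparingDouble(
                            (String n) -> g.get(n) + heuristic.getOrDefault(n, 0.0)));
            g.put(start, 0.0);
            open.add(start);
            while (!open.isEmpty()) {
                String current = open.poll();
                if (current.equals(goal)) {
                    LinkedList<String> path = new LinkedList<>();
                    for (String n = goal; n != null; n = cameFrom.get(n)) path.addFirst(n);
                    return path;
                }
                for (Edge e : graph.getOrDefault(current, List.of())) {
                    double tentative = g.get(current) + e.cost;
                    if (tentative < g.getOrDefault(e.to, Double.POSITIVE_INFINITY)) {
                        g.put(e.to, tentative);          // found a cheaper way in
                        cameFrom.put(e.to, current);
                        open.remove(e.to);               // re-queue with new priority
                        open.add(e.to);
                    }
                }
            }
            return List.of(); // no route to the goal
        }
    }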

And so it is that, in my quest for a one-word name to describe "iteratively seeking the shortest (cheapest) path to a testable goal", I propose simply: A*.

As in:

"What method are we following on this project?"

"A*"

Of course, there are prioritised lists in my A* method: but they are short and only concern themselves with what we're doing next to TRY to bring us closer to our goal. Teams meet every few days (or every day, if you're really keen), assess progress made since last meeting, and come up with a very short plan, the results of which will be assessed at the next meeting. And rinse and repeat.

In A*, the product owner has no vision of the solution, only a vision of the problem, and a clear idea of how we'll know when that problem's been solved. Their primary role is to tell us if we're getting warmer or colder with each short cycle, and to help us identify where to aim next.

They don't describe a software product, they describe the world around that product, and how it will be changed by what we deliver. We ain't done until we see that change.

This puts a whole different spin on software development. We don't set out with a product vision and work our way through a list of features, even if that list is allowed to change. We work towards a destination - accepting that some avenues will turn out to be dead-ends - and all our focus is on finding the cheapest way to get there.

And, on top of all that, we embrace the notion that the destination itself may be a moving target. And that's why we don't waste time and effort mapping out the whole route beyond the near future. Any plan that tries to look beyond a few days ends up being an expensive fiction that we become all too easily wedded to.