April 23, 2015

...Learn TDD with Codemanship

The Big Giant Object Oriented Analysis & Design Blog Post

Having been thwarted in my plan to speak at CraftConf in Budapest this week, I find myself with the luxury of time to blog about a topic that comes up again and again.

Object oriented analysis and design (OOA/D) is not a trendy subject these days. Arguably, it had its heyday in the late 1990s, when UML dinosaurs ruled the Earth. But the fact remains that most software being written today is, at least to some extent, object oriented. And it's therefore necessary and wise to be able to organise our code effectively in an object oriented way. (Read my old, old paper about OOA/D to give yourself the historical context on this.)

Developers, on the whole, seem to really struggle to get from functional requirements (expressed, for example, as acceptance tests - the 21st century equivalent of Use Case scenarios) to a basis for an object oriented design that will satisfy those requirements.

There's a simple thought process to it; so simple that I see a lot of developers struggling mainly to take the leap of faith that it really can be that simple.

The thought process can be best outlined with a series of questions:

1. Who will be using this software, and what will they be using it to do?

2. How will they interact with the software in order to do what they want?

3. What are the outcomes of each user interaction?

4. What knowledge is involved in doing this work?

5. Which objects have the knowledge necessary to do each piece of the work?

6. How will these objects co-ordinate the work between themselves?

In Extreme Programming, we answer the first question with User Stories like the one below.



A video library member wants to donate a DVD to the library. That is who is using the software, and what they will be using it to do.

In XP, we take a test-driven approach to design. So when we sit down with the customer (in this case, the video library member) to flesh out this requirement - remember that a user story isn't a requirements specification, it's just a placeholder to have a conversation about the requirements - we capture their needs explicitly as acceptance tests, like this one:

Given a copy of a DVD title that isn’t in the library,


When a member donates their copy, specifying the name of the DVD title


Then that title is added to the library
AND their copy is registered against that title so that other members can borrow it,
AND an email alert is sent to members who specified an interest in matching titles,
AND the new title is added to the list of new titles for the next member newsletter
AND the member is awarded priority points



This acceptance test, expressed in the trendy Top Of The Pops-style "given...when...then..." format, reveals information about how the user interacts with the software, contained in the when clause. This is the action that the user performs that triggers the software to do the work.


The then clause is the starting point for our OO design. It clearly sets out all of the individual outcomes of performing that action. Outcomes are important. They describe the effect the user action has on the data in the system. In short, they describe what the user can expect to get when they perform that action. From this point on, we'll refer to these outcomes as the work that the software does.

The "ANDs" in this example are significant. They help us to identify 5 unique outcomes - 5 individual pieces of work that the software needs to do in order to pass this test. And whatever design we come up with, first and foremost, it must pass this test. In other words, the first test of a good OO design is that it works.


The essence of OO design is assigning the work we want the software to do to the objects that are best-placed to do that work. By "best-placed", I mean that the object has most, if not all, of the knowledge required to do that work.

Knowledge, in software terms, is data. Let's say we want to calculate a person's age in years; what do we need to know in order to do that calculation? We need to know their date of birth, and we need to know what today's date is, so we can calculate how much time has elapsed since they were born. Who knows a person's date of birth?
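To make that concrete, here's a minimal sketch - Person and age_in_years are illustrative names of my own, not from any real codebase - in which the object that knows the date of birth calculates its own age, given today's date:

```python
from datetime import date

class Person:
    """A Person knows their own date of birth, so they are best-placed
    to calculate their own age, given today's date."""
    def __init__(self, date_of_birth):
        self.date_of_birth = date_of_birth

    def age_in_years(self, today):
        born = self.date_of_birth
        # knock a year off if this year's birthday hasn't happened yet
        had_birthday = (today.month, today.day) >= (born.month, born.day)
        return today.year - born.year - (0 if had_birthday else 1)

joe = Person(date(1989, 8, 9))
print(joe.age_in_years(date(2015, 4, 23)))  # prints 25
```

Note that the caller supplies today's date rather than the object reaching out for it - the only knowledge Person needs to own is the date of birth.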

Now, this is where a lot of developers come unstuck. We seem to have an in-built tendency to separate agency from data. This leads to a style of design where objects that know stuff (data objects) are acted upon by objects that do stuff (services, commands, managers, etc.). In order to do stuff, objects have to get the necessary knowledge from the objects that know stuff.



So we can end up with lots of low-level coupling between the doing objects and the knowing objects, and this drags us into deep waters when it comes to accommodating change later, because of the "ripple effect" that tighter coupling amplifies: a tiny change to one part of the code can ripple out through the dependencies and become a major re-write.
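Here's what that "ask" style looks like in miniature - Member and RewardService are illustrative names I've made up for the sketch:

```python
# "Ask" style: a service object pulls the knowledge out of a data object,
# does the work itself, then pushes the result back in. The service is
# coupled to Member's low-level data, so any change to how Member stores
# its points ripples out into RewardService.
class Member:
    def __init__(self):
        self.priority_points = 0

class RewardService:
    def reward_donation(self, member):
        points = member.priority_points       # ask for the knowledge...
        member.priority_points = points + 10  # ...do the work elsewhere
```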

A key goal of good OO design is to minimise coupling between modules, and we achieve this by - as much as possible - encapsulating both the knowledge and the work in the same place, like this:



This is sometimes referred to as the "Tell, Don't Ask" style of OO design, because - instead of asking for an object's data in order to do some work - we tell the object that has that data to do the work itself. The end result is fewer, higher-level couplings between objects, and smaller ripples when we make changes.
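The same example in the "Tell, Don't Ask" style might look like this (again, the names are illustrative):

```python
# "Tell, Don't Ask" style: the object that knows its own points total
# does the work itself; callers just tell it what happened.
class Member:
    def __init__(self):
        self._priority_points = 0   # the knowledge stays encapsulated

    def award_priority_points(self, points):
        self._priority_points += points

    @property
    def priority_points(self):
        return self._priority_points

member = Member()
member.award_priority_points(10)  # tell; never ask for the data
```

The caller's only coupling is to the high-level method, not to how the points are stored.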

If we're aiming for a loosely coupled design - and we are - then the next step in the OO design process, where we assign responsibility for each piece of work, requires us to put the work where the knowledge is. For that, we need to map out in our heads or on paper what this knowledge is.

Now, I'm very much a test-driven sort of dude, and as such I find that design thinking works best when we work from concrete examples. The acceptance test above isn't concrete enough for my tastes. There are still too many questions: which member, what DVD title, who has registered an interest in matching titles, and so on?

To make an acceptance test executable, we must add concrete examples - i.e., test data. So our hand-wavy test script from above becomes:

Given a copy of The Abyss, which isn’t in the library,

When Joe Peters donates his copy, specifying the name of the title, that it was directed by James Cameron and released in 1989

Then The Abyss is added to the library
AND his copy is registered against that title so that other members can borrow it,
AND an email alert with the subject “New DVD title” is sent to Bill Smith and Jane Jones, who specified an interest in titles matching “the abyss”(non-case-sensitive), stating “Dear , Just to let you know that another member has recently donated a copy of The Abyss (dir: James Cameron, 1989) to the library, and it is now available to borrow.”
AND The Abyss is added to the list of new titles for the next member newsletter
AND Joe Peters receives 10 priority points for making a donation
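As a sketch of how the concrete example might become executable, here's a cut-down version of the first and last outcomes with deliberately naive stubs. All the names (Library, Member, donate, has_title) are my assumptions, and lumping all the work into Library is not the allocation of responsibilities the design process arrives at - it's just enough to make the test run:

```python
class Member:
    def __init__(self, name):
        self.name = name
        self.priority_points = 0

class Library:
    def __init__(self):
        self._titles = {}

    def donate(self, member, title, director, year):
        # naive: all the work done in one place, just to pass the test
        self._titles[title.lower()] = (director, year)
        member.priority_points += 10

    def has_title(self, title):
        return title.lower() in self._titles

def test_member_donates_copy_of_title_not_in_library():
    library = Library()              # Given: The Abyss isn't in the library
    joe = Member("Joe Peters")
    library.donate(joe, "The Abyss", "James Cameron", 1989)   # When
    assert library.has_title("The Abyss")   # Then: the title is added
    assert joe.priority_points == 10        # Then: Joe gets 10 points

test_member_donates_copy_of_title_not_in_library()
```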



Now it's all starting to come into focus. From this, if I feel I need to - and early in development, when my domain knowledge is just beginning to build, I usually find it useful - I can draw a knowledge map based on this example.




It's by no means scientific or exhaustive. It just lays out the objects I think are involved, and what these objects know about. The library, for example, knows about members and titles. (If you're UML literate, you might think this is an object diagram... and you'd be right.)


So, now we know what work needs to be done, and we know what objects might be involved and what these objects know. The next step is to put the work where the knowledge is.


This is actually quite a mechanical exercise; we have all the information we need. My tip - as an old pro - is to start with the outcomes, not the objects. Remember: first and foremost, our design must pass the acceptance test.


So, take the first piece of work that needs to be done:

The Abyss is added to the library


...and look for the object we believe has the knowledge to do this. The library knows about titles, so maybe the library should have responsibility for adding this title to itself.


Work through the outcomes one at a time, and assign responsibility for that work to the object that has the knowledge to do it.


Class Responsibility Collaboration cards are a neat and simple way of modeling this information. Write the name of the type of object doing the work at the top, and on the left-hand side list what it is responsible for knowing and what it is responsible for doing.

(HINT: you shouldn't end up with more CRC cards than outcomes. An outcome may indeed involve a subsystem of objects, but better to hide that detail behind a clean interface like Email Alert, and drill down into it later.)
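One possible reading of the CRC cards as class skeletons - each class owns the knowledge listed on the left of its card, and the work appears as methods. All of these names are my guesses at the cards' contents, not a definitive design:

```python
class Library:
    def __init__(self):
        self._titles = []       # knows: titles, members
        self._new_titles = []   # knows: new titles for the newsletter

    def add_title(self, name, director, year): ...
    def add_to_new_titles(self, title): ...

class Title:
    def __init__(self, name, director, year):
        self._copies = []       # knows: its copies, interested members

    def register_copy(self): ...

class EmailAlert:
    def send(self): ...         # a clean interface hiding a subsystem

class Member:
    def __init__(self, name):
        self._priority_points = 0   # knows: its priority points

    def award_priority_points(self, points): ...
```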





Now we have a design that tells us what objects are involved, and which objects are doing which piece of work. We're almost there. There's only one piece of the OO jigsaw left; we need to decide how these objects collaborate with each other in order to co-ordinate the work. Objects, by themselves, don't do anything unless they're told to. The work is co-ordinated by objects telling each other to do their bit.

If the work happens in the order it's listed in the test, then that pretty much takes care of the collaborations. We start with the library adding the new title to itself. That's our entry point: someone - e.g., a GUI controller, or web service, or unit test - tells the library to add the title.


Once that piece of work is done, we move on to the next piece of work: registering a default copy to that title for members to borrow. Who does that? The title does it. We're thinking here about the thread of control - fans of sequence diagrams will know exactly what this is - where, like a baton in a relay race, control is passed from one object ("worker") to the next by sending a message. The library tells the new title to register a copy against itself. And on to the next piece of work, until all the work's been done, and we've passed the acceptance test.


Again, this is a largely mechanical exercise, with a pinch of good judgement thrown in, based on our understanding of how best to manage dependencies in software. For example, we may choose to avoid circular dependencies that might otherwise naturally fall out of the order in which the work is done. In this case, we don't have Title tell Library to add the title to the list of "new titles" - Library's second piece of work - because that would set up a circular dependency between those two objects that we'd like to avoid. Instead, we allow control to be returned by default to the caller, Library, after Title has done its work.
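A minimal sketch of that thread of control (method and attribute names are my assumptions): Library tells the new Title to do its bit, and when control returns, Library finishes its own second piece of work rather than having Title call back into Library.

```python
class Title:
    def __init__(self, name):
        self.name = name
        self.copies = []

    def register_copy(self):
        self.copies.append(object())   # a default copy members can borrow

class Library:
    def __init__(self):
        self.titles = []
        self.new_titles = []

    def add_title(self, name):
        title = Title(name)
        self.titles.append(title)
        title.register_copy()          # pass the baton to Title...
        self.new_titles.append(title)  # ...then finish Library's own work
        return title
```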


On a CRC card, we capture information about collaborations on the right-hand side.






Note that Member has no collaborators. This means that Member doesn't tell any other objects to do any work. This is how we can tell it's at the bottom of the call stack. Library, on the other hand, has no objects that tell it to do anything, which places it at the top of the call stack: Library is the outermost object; our entry point for this scenario.


Note also that I've got a question mark on the collaborators side of the Email Alert object. This is because I believe there may be a whole can of worms hiding behind its inscrutable interface - potentially a whole subsystem dedicated to sending emails. I have decided to defer thinking about how that will work. For now, it's enough to know that Title tells Email Alert to send itself. We can fake it 'til we make it.


So, in essence, we now have an object oriented design that we believe will pass the acceptance test.


The next step would be to implement it in code.


Again, being a test-driven sort of cat, I would seek to implement it - and drive out all the low-level/code-level detail - in a test-driven way.


There are different ways we can skin this particular rabbit. We could start at the bottom of the call stack and test-drive an implementation of Member to check that it does indeed award itself priority points when we tell it to. Once we've got Member working as desired, we could move up the call stack to Title, and test-drive an implementation of that using our real Member, and a fake Email Alert as a placeholder. Then, when we get Title working, we could finish up by wiring it all together and test-driving an implementation of Library with our real Title, our real Member, and our fake Email Alert. Then we could go away and get to work on designing and implementing the email subsystem.
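Working bottom-up, the first test might be a state-based one - it fails if the work (awarding priority points) isn't done correctly. Member's API here is an assumption based on the CRC design above:

```python
import unittest

class Member:
    def __init__(self):
        self.priority_points = 0

    def award_priority_points(self, points):
        self.priority_points += points

class MemberTest(unittest.TestCase):
    def test_donation_awards_ten_priority_points(self):
        member = Member()
        member.award_priority_points(10)
        self.assertEqual(10, member.priority_points)
```

Run with python -m unittest. This test only checks the work; it says nothing about collaborations.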


Or we could work top-down (or "outside-in", as some prefer it), by test-driving an implementation of Library using mock objects for its collaborators Title and Member, wiring them together by writing tests that will fail if Library doesn't tell those collaborators to do their bit. Once we get Library working, we then move down the stack and test-drive implementations of Title and Member, again with a placeholder (e.g., a mock) for Email Alert so we can defer that part until we know what's involved.
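Working outside-in, the first test might instead be an interaction test - it fails if Library doesn't tell its collaborator to do its bit. Title is mocked, so only the collaboration is under test, not the work; the factory parameter is an assumption of mine, used to slip the mock in:

```python
import unittest
from unittest.mock import Mock

class Library:
    def __init__(self, title_factory):
        self._title_factory = title_factory
        self.titles = []

    def add_title(self, name):
        title = self._title_factory(name)
        self.titles.append(title)
        title.register_copy()   # the collaboration we want to verify
        return title

class LibraryTest(unittest.TestCase):
    def test_tells_new_title_to_register_a_copy(self):
        mock_title = Mock()
        library = Library(title_factory=lambda name: mock_title)
        library.add_title("The Abyss")
        mock_title.register_copy.assert_called_once_with()
```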


A CRC card implies two different kinds of test:


1. Tests that fail because work was not done correctly

2. Tests that fail because an object didn't tell a collaborator to do its part





I tend to find that, in implementing OO designs end-to-end, both kinds of test come in handy. The important thing is to be clear about whether the test you're writing is about the work or about the collaborations. Tests should only have one reason to fail - so that when they do fail, it's easier to pinpoint what went wrong - and so they should never be about both.


Also, sometimes, the test for the work can be implied by an interaction test when parameter values we expect to be passed to a mock object have been calculated by the caller. Tread very carefully here, though. Implicit in this can be two reasons for the test to fail: because the interaction was incorrect (or missing), or because the parameter value was incorrectly calculated. Just as it can be wise not to mix actions with queries, it can also be wise not to mix the types of tests.


Finally, and to reiterate for emphasis, the final arbiter of whether your design works is whether or not it passes the acceptance test. So as you implement it, keep going back to that test. You're not done until it passes.


And there you have it: a test-driven, Agile approach to object oriented analysis and design. Just like in the 90s. Only with fewer boxes and arrows.






