July 17, 2016

Learn TDD with Codemanship

Oodles of Free Legacy UML Tutorials

See how we used to do things back in Olden Times by visiting the legacy UML tutorials section of the Codemanship website (the content from the highly-popular-with-your-granddad-back-in-the-day parlezuml.com).



I maintain that:

a. Visual modeling & UML is still useful and probably due for a comeback, and

b. Visual modelling and Agile Software Development can work well together when applied sparingly and sensibly

Check it out.






January 30, 2015

Learn TDD with Codemanship

It's True: TDD Isn't The Only Game In Town. So What *Are* You Doing Instead?

The artificially induced clickbait debate "Is TDD dead?" continues at developer events and in podcasts and blog posts and commemorative murals across the nations, and the same perfectly valid point gets raised every time: TDD isn't the only game in town.

They're absolutely right. Before the late 1990's, when the discipline now called "Test-driven Development" was beginning to gain traction at conferences and on Teh Internets, some teams were still somehow managing to create reliable, maintainable software and doing it economically.

If they weren't doing TDD, then what were they doing?

The simplest alternative to TDD would be to write the tests after we've written the implementation. But hey, either way it's pretty much the same volume of tests we end up writing. And, for sure, many TDD practitioners go on to write more tests after they've TDD'd a design, to get better assurance.

And when we watch teams who write the tests afterwards, we tend to find that the smart ones don't write them all at once. They iteratively flesh out the implementation, and write the tests for it, one or two scenarios (test cases) at a time. Does that sound at all familiar?

Some of us were using what they call "Formal Methods" (often confused with heavyweight methods like SSADM and the Unified Process, which aren't really the same thing.)

Formal Methods is the application of rigorous mathematical techniques to the design, development and testing of our software. The most common approach was formal specification: teams would write a precise, mathematical and testable specification for their code, write code specifically to satisfy that specification, and then derive tests from the specification to check that the code actually works as required.

We had a range of formal specification languages, with exotic names like Z (and Object Z), VDM, OCL, CSP, RSVP, MMRPG and very probably NASA or some such.

Some of them looked and worked like maths. Z, for example, was founded on formal logic and set theory, and used many of the same symbols (since all programming is set theoretic.)

Programmers without maths or computer science backgrounds found mathematical notations a bit tricky, so people invented formal specification languages that looked and worked much more like the programming languages we were familiar with (e.g., the Object Constraint Language, which lets us write precise rules that apply to UML models.)

Contrary to what you may have heard, the (few) teams using formal specification back in the 1990's were not necessarily doing Big Design Up-Front, and were not necessarily using specialist tools either.

Much of the formal specification that happened was scribbled on whiteboards, adorning simple design models to make key behaviours unambiguous. From that, teams might have written unit tests for a particular feature (that's how I learned to do it), and those tests pretty much became the living specification. Labyrinthine Z or OCL specifications were not necessarily being kept and maintained.

It wasn't, therefore, such a giant leap for teams like the ones I worked on to say "Hey, let's just write the tests and get them passing", and from there to "Hey, let's just write a test, and get that passing".

But it's absolutely true that formal specification is a thing some teams still do - you'll find most of them these days alive and well in the Model-driven Development community (and they do create complete specifications from which all the code is generated, so the specification is the code - so, yes, they are programmers, just in new languages.)

Watch Model-driven Developers work, and you'll see teams - well, the smarter ones - gradually fleshing out executable models one scenario at a time. Sound familiar?

So there's a bunch of folk out there who don't do TDD, but - by jingo! - it sure does look a lot like TDD!

Other developers used to embed their specifications inside the code, in the form of assertions, and then write tests suites (or drive tests in some other way) that would execute the code to see if any of the assertions failed.

So their tests had no assertions. It was sort of like unit testing, but turned inside out. Imagine doing TDD, and then refactoring a group of similar tests into a single test with a general assertion (e.g., instead of assert(balance == 100) and assert(balance == 200), it might be assert(balance == oldBalance + creditAmount)).

Now go one step further, and move that assertion out of the test and into the code being tested (at the end of that code, because it's a post-condition). So you're left with the original test cases to drive the code, but all the questions are being asked in the code itself.

Most programming languages these days include a built-in assertion mechanism that allows us to do this. Many have build flags that allow us to turn assertion checking on or off (on if testing, off if deploying to live.)
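
To make that concrete, here's a minimal sketch in Java of the style being described, using a hypothetical Account class (the names are illustrative, not from any real project). The postcondition lives in the production code as an assert statement; the tests just drive the code through a couple of scenarios and contain no assertions of their own. Run the tests with assertion checking switched on (java -ea), and leave it off for the live deployment.

    import org.junit.Test;

    public class AccountPostconditionExample {

        // Hypothetical class under test, with its postcondition embedded as an assertion.
        static class Account {
            private int balance;

            void credit(int creditAmount) {
                int oldBalance = balance;
                balance += creditAmount;
                // The general postcondition, asked inside the code itself.
                assert balance == oldBalance + creditAmount : "credit did not add the full amount";
            }
        }

        // The tests just exercise the code, scenario by scenario; the checking
        // happens in the assertion above, not here.
        @Test
        public void creditingAnEmptyAccount() {
            new Account().credit(100);
        }

        @Test
        public void creditingAnAccountTwice() {
            Account account = new Account();
            account.credit(100);
            account.credit(200);
        }
    }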

When you watch teams working this way, they don't write all the assertions (and all the test automation code) at once. They tend to write just enough to implement a feature, or a single use case scenario, and flesh out the code (and the assertions in it) scenario by scenario. Sound familiar?

Of course, some teams don't use test automation at all. Some teams rely on inspections, for example. And inspections are a very powerful way to debug our code - more effective than any other technique we know of today.

But they hit a bit of a snag as development progresses, namely that inspecting all of the code that could be broken after a change, over and over again for every single change, is enormously time-consuming. And so, while it's great for discovering the test cases we missed, as a regression testing approach it sucks ass Gangnam Style.

But, and let's be clear about this, these are the techniques that are - strictly speaking - not TDD, and that can (even if only initially, as in the case of inspections) produce reliable, maintainable software. If you're not doing these, then you're very probably doing something very like them.

Unless, of course... you're not producing reliable, maintainable code. Or the code you are producing is so very, very simple that these techniques just aren't necessary. Or if the code you're creating simply doesn't matter and is going to be thrown away.

I've been a software developer for approximately 700 million years (give or take), so I know from my wide and varied experience that code that doesn't matter, code that's only for the very short-term, and code that isn't complicated, are very much the exceptions.

Code that gets used tends to stick around far longer than we planned. Even simple code usually turns out to be complicated enough to be broken. And if it doesn't matter, then why in hell are we doing it? Writing software is very, very expensive. If it's not worth doing well, then it's very probably - almost certainly - not worth doing.

So what is the choice that teams are alluding to when they say "TDD isn't the only game in town"? Do they mean they're using Formal Methods? Or perhaps using assertions in their code? Or do they rely on rigorous inspections to make sure they at least get it right the first time around?

Or are they, perhaps, doing none of these things, and the choice they're alluding to is the choice to create software that's not good enough and won't last?

I suspect I know the answer. But feel free to disagree.


UPDATE:

My apprentice, Will Price, has emailed me with a very good question:

"Why don't we embed our assertions in our code and just have the tests exercise the code?

What benefit did we gain from moving them out into tests?

It seems to me that having assertions inside the code is much nicer than having them in tests because it acts as an additional form of documentation, right there when you're reading the code so you can understand what preconditions and postconditions a particular method has, I should imagine you could even automate collection of these to put them into documentation etc so developers have a good idea of whether they can reuse a method (i.e. have they satisfied the preconditions, are the postconditions sufficient etc)"


My reply to him (copied and pasted):

That's a good question. I think the answer may lie in tradition. Generally, folks writing unit tests weren't using assertions in code, and folks using assertions in code weren't writing automated tests. (For example, many relied on manual testing, or random testing, or other ways of driving the code in test runs).

So, people coming from the unit testing tradition have ended up with assertions in their tests, and folk from the assertions tradition (e.g., Design By Contract) have ended up with no test suites to speak of. Model checking is an example of this: folks using model checkers would often embed assertions in code (or in comments in the code), and the tool would exercise the code using white-box test case generation.

This mismatch is arguably the biggest barrier at the moment to merging these approaches because the tools don't quite match up. I'm hoping this can be fixed.


Thinking a bit more about it, I also believe that asserting correct behaviour inside the code helps with inspections.

July 31, 2014

Learn TDD with Codemanship

My Top 5 Most Under-used Dev Practices

So, due to a last-minute change of plans, I have some time today to fill. I thought I'd spend it writing about those software development practices that come highly recommended by some, but - for whatever reason - almost no teams do.

Let's count down.

5. Mutation Testing - TDD advocates like me always extol the benefits of having a comprehensive suite of tests we can run quickly, so we can discover almost immediately if we've broken our code.

Mutation testing is a technique that enables us to ask the critical question: if our code was broken, would our tests show it?

We deliberately introduce a programming error - a "mutation" - into a line of code (e.g., turn a + into a -, or > into a <) and then run our tests. If a test fails, we say our test suite has "killed the mutant", and we can be more assured that if that particular line of code had an error, our tests would show it. If no tests fail, that potentially highlights a gap in our test suite that we need to fill.
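
As a rough illustration (hypothetical code, not taken from any particular tool's documentation), imagine a trivial method under test and the mutant a tool might generate from it:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class MutationTestingExample {

        // Hypothetical code under test.
        static class Calculator {
            int add(int a, int b) {
                return a + b;   // a mutation tool might flip this '+' to a '-'
            }
        }

        private final Calculator calculator = new Calculator();

        // This test would NOT kill that mutant: 0 + 0 and 0 - 0 are both 0,
        // so it passes either way - a gap in the suite worth filling.
        @Test
        public void addingZeroes() {
            assertEquals(0, calculator.add(0, 0));
        }

        // This test kills the mutant: 2 + 2 is 4 but 2 - 2 is 0,
        // so it fails as soon as the mutated code is run.
        @Test
        public void addingTwoAndTwo() {
            assertEquals(4, calculator.add(2, 2));
        }
    }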

Mutation testing, done well, can bring us to test suites that offer very high assurance - considerably higher than I've seen most teams achieve. And that extra assurance tends to bring us economic benefits in terms of catching more bugs sooner, saving us valuable time later.

So why do so few teams do it? Well, tool support is one issue. The mutation testing tools available today tend to have a significant learning curve. They can be fiddly, and they can throw up false positives, so teams can spend a lot of time chasing ghosts in their test coverage. It takes some getting used to.

In my own experience, though, it's worth working past the pain. The pay-off is often big enough to warrant a learning curve.

So, in summary, reason why nobody does it: LEARNING CURVE.

4. Visualisation - pictures were big in the 90's. Maybe a bit too big. After the excesses of the UML days, when architects roamed the Earth feeding off smaller prey and taking massive steaming dumps on our code, visual modeling has - quite understandably - fallen out of favour. So much so that many teams do almost none at all. "Baby" and "bathwater" spring to mind.

You don't have to use UML, but we find that in collaborative design, which is what we do when we work with customers and work in teams, a picture really does speak a thousand words. I still hold out hope that one day it will be commonplace to see visualisations of software designs, problem domains, user interfaces and all that jazz prominently displayed in the places where development teams work. Today, I mainly just see boards crammed with teeny-weeny itty-bitty index cards and post-it notes, and the occasional wireframe from the UX guy, who more often than not came up with that design without any input at all from the team.

The effect of lack of visualisation on teams can be profound, and is usually manifested in the chaos and confusion of a code base that comprises several architectures and a domain model that duplicates concepts and makes little to no sense. If you say you're doing Domain-driven Design - and many teams do - then where are your shared models?

There's still a lot of mileage in Scott Ambler's "Agile Modeling" book. Building a shared understanding of a complex problem or solution design by sitting around a table and talking, or by staring at a page of code, has proven to be ineffective. Pictures help.

In summary, reason why so few do it: MISPLACED AGILE HUBRIS

3. Model Office - I will often tell people about this mystical practice of creating simulated testing environments for our software that enable us to see how it would perform in real-world scenarios.

NASA's Apollo team definitely understood the benefits of a Model Office. Their lunar module simulator enabled engineers to try out solutions to system failures on the ground before recommending them to the imperilled astronauts on Apollo 13. Tom Hanks was especially grateful, but Bill Paxton went on to star in the Thunderbirds movie, so it wasn't all good.

I first came across the idea while doing a summer stint in the book department of my local W H Smith. Upstairs, they had a couple of fake checkouts and baskets of fake goods with barcodes.

Not only did we train on those simulated checkouts, but they also used them to analyse system issues and to plan IT changes, as well as to test those changes in a range of "this could actually happen" scenarios.

A Model Office is a potentially very powerful tool for understanding problems, for planning solutions and for testing them - way more meaningful than acceptance tests that were agreed among a bunch of people sitting in a room, many of whom have never even seen the working environment in which the software's going to be used, let alone experienced it for themselves.

There really is no substitute for the real thing; but the real thing comes at a cost, and often the real thing is quite busy, actually, thank you very much. I mean, dontcha just hate it when you're at the supermarket and the checkout person is just learning how it all works while you stand in line? And the mistakes that get made are made with real customers and real money.

We can buy ourselves time, control and flexibility by recreating the real thing as faithfully as possible, so we can explore it at our leisure.

Time, because we're under no pressure to return the environment to business use, like we would be if it was a real supermarket checkout, or a real lunar module.

Control, because we can deliberately recreate scenarios - even quite rare and outlandish ones - as often as we like, and make it exactly the same, or vary it, as we wish. One of the key reasons I believe many business systems are not very robust is because they haven't been tested in a wide-enough range of possible circumstances. In real life, we might have to wait weeks for a particular scenario to arise.

Flexibility, because in a simulated environment, we can do stuff that might be difficult or dangerous in the real world. We can try out the most extraordinary situations, we can experiment with solutions when the cost of failure is low, and we can explore the problem and possible solutions in ways we just couldn't or wouldn't dare to if real money, or real lives, or real ponies were at stake.

For this reason, from me, Model Offices come very highly recommended. Which is very probably why nobody uses them.

Reason why nobody does it - NEVER OCCURRED TO THEM

2. Testing by Inspection - This is another of those blind spots teams seem to have about testing. Years of good research have identified reading the code to look for errors as one of the most - if not the most - effective and efficient ways of finding bugs.

Now, a lot of teams do code reviews. It's a ritual humiliation many of us have to go through. But commonly these reviews are about things like coding style, naming conventions, design rules and so forth. It's vanishingly rare to meet a team who get around a computer, check out some code and ask "okay, will this work?"

Testing by inspection is actually quite a straightforward skill, if we want it to be. A practice like guided inspection, for example, simply requires us to pick some interesting test cases, and step through the code, effectively executing it in our heads, asking questions like "what should be true at this point?" and "when might this line of code not work?"

If we want to, we can formalise that process to a very high degree of rigour. But the general pattern is the same: we make assertions about what should be true at key points during the execution of our code, we read the code and dream up interesting test cases that will cause those parts of the code to be executed, and we ask those questions at the appropriate times. When an inspection throws up interesting test cases that our code doesn't handle, we can codify that knowledge as, say, automated unit tests to ensure that the door is closed on that particular bug permanently.
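
Here's a small, made-up example of what that can look like in practice (hypothetical names, not from any real code base). Stepping through the method and asking "when might this line not work?" turns up the case where there are no reviews at all - a division by zero - and that discovery gets codified as a unit test so the door stays closed on that bug:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class GuidedInspectionExample {

        // Hypothetical code being inspected.
        static class Ratings {
            int averageRating(int totalStars, int reviewCount) {
                // Inspection question: "when might this line not work?"
                // Answer: when reviewCount is 0 - so that case is handled explicitly.
                if (reviewCount == 0) {
                    return 0;
                }
                return totalStars / reviewCount;
            }
        }

        // The interesting test case the inspection threw up, codified as a unit test.
        @Test
        public void averageIsZeroWhenThereAreNoReviews() {
            assertEquals(0, new Ratings().averageRating(0, 0));
        }
    }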

Do not underestimate the power of testing by inspection. It's very rare to find teams producing high-integrity software who don't do it. (And, yes, I'm saying it's very rare to find teams producing high-integrity software.)

But, possibly because of associations with the likes of NASA, and safety-critical software engineering in general, it has a reputation for being "rocket science". It can be, if we choose to go that far. But in most cases, it can be straightforward, utilising things we already know about computer programming. Inspections can be very economical, and can reap considerable rewards. And pretty much anyone who can program can do them. Which is why, of course, almost nobody does.

Reason why nobody does it - NASA-PHOBIA

1. Business Goals - Okay, take a deep breath now. Imminent Rant Alert.

Why do we build software?

There seems to be a disconnect between the motivations of developers and their customers. Customers give us money to build software that hopefully solves their problems. But, let's be honest now, a lot of developers simply could not give two hoots about solving the customer's problems.

Which is why, on the vast majority of software teams, when I ask them what the ultimate business goals of what they're doing are, they just don't know.

Software for the sake of software is where our heads are mostly at. We build software to build software.

Given free rein, what kind of software do developers like to build? Look on GitHub. What are most personal software projects about?

We don't build software to improve care co-ordination for cancer sufferers. We don't build software to reduce delivery times for bakeries. We don't build software to make it easier to find a hotel room with fast Wi-Fi at 1am in a strange city.

With our own time and resources, when we work on stuff that interests us, we won't solve a problem in the real world. We'll write another Content Management System. Or an MVC framework. Or another testing tool. Or another refactoring plug-in. Or another VCS.

The problems of patients and bakers and weary travelers are of little interest to us, even though - in real life - we can be all of these things ourselves.

So, while we rail at how crappy and poorly thought-out the software we have to use on a daily basis tends to be ("I mean, have they never stayed in a hotel?!"), our lack of interest in understanding and then solving these problems is very much at the root of that.

We can be so busy dreaming up solutions that we fail to see the real problems. The whole way we do development is often a testament to that: understanding the business problem is treated as an early phase of a project that, really, shouldn't exist until someone has identified the problem and knows at least enough about it to know it's worth writing some software to address it.

Software projects and products that don't have clearly articulated, testable and realistic goals - beyond the creation of software for its own sake - are almost guaranteed to fail; for the exact same reason that blindly firing arrows in random directions with your eyes closed is almost certainly not going to hit a valuable target. But this is what, in reality, most teams are doing.

We're a solution looking for a problem. Which ultimately makes us a problem. Pretty much anyone worth listening to very, very strongly recommends that software development should have clear and testable business goals. So it goes without saying that almost no teams bother.

Reason why so few teams do it - APATHY





July 4, 2013

Learn TDD with Codemanship

A Little Bit Of UML (for Just Enough Design)

This summer marks 10 years since I launched parlezuml.com.

To celebrate that milestone, and to raise more money for programming and maths clubs, I'm running a very special one-off training workshop on Saturday August 17th at Bletchley Park.

Tickets are a snip at £99, and all the proceeds will be put towards our goal of starting a programming club and parent-child maths workshops.

To find out more and book your place, visit http://alittlebitofuml.eventbrite.co.uk/







February 24, 2012

Learn TDD with Codemanship

Agile Design - How A Bit Of Informal Visual Modeling Can Save A Heap Of Heartache

All my courses are, of course, fine holiday fun. But the Agile Design workshop's especially enjoyable, as it brings together a whole range of disciplines while challenging participants to work effectively together in designing and implementing different features of the same simple system.

The group works in pairs (or threes, depending on the overall numbers). After a bit of a crash course in basic UML - use cases, class diagrams and sequence diagrams - each pair is given a user story for a community DVD library, and tasked with iteratively fleshing out an object oriented design to pass an acceptance test agreed with the customer (me).



In a break from the traditional approach, we turn the design process around - arguably the right way round - and spend day #1 telling the story using plain old objects, designing and implementing a functioning domain model that includes all the concepts and functions required to pass the tests.



On day #2, we look at how these concepts and functions should be presented to the end users, designing a graphical user interface and retelling the story, this time through the GUI.

The impetus behind the course is to help teams avoid the design train wreck that can ensue when Agile teams pick up stories and go off into their silos to do the design for their part of the overall system. I've seen very experienced teams end up with duplicated classes, database tables, multiple architectures and disjoints in the same code base.

Using informal visual models in a collaborative design approach can aid us in externalising our thinking, so that other people can see how what they're doing fits in with what everyone else is doing.

Getting the team around the whiteboard to explore shared concepts like the domain model, the screenflow of the user interface or the patterns used in the technical architecture - especially in the earlier stages of development - can draw out misunderstandings and disjoints that might otherwise have only come to light in integration, when these issues can be much more costly to fix (and therefore often never get fixed).



Importantly, teams are soon testing their designs by implementing them in code (test-driven, of course), and important design decisions and changes to the shared vision that happen as a result of making the designs work for real can be visualised and communicated by sketching them out on flipchart paper or on whiteboards and keeping them around the team's work area for everyone to see.

On the course, teams discover just how much active collaboration is needed to coordinate design effectively, and how important it is to take the time to resolve design issues and conflicts at the whiteboard when they can. Pairs need to go out of their way to find out what the other pairs are working on. In real life, we tend to put a wholly inadequate amount of effort into collaborative design, and ad hoc, inconsistent, and sometimes just plain wrong designs can be the end result.

The more visible our work is, the easier it is to bring design issues out into the open early, and the sooner we're able to establish a shared language for meaningfully talking about our designs.

And we're not just talking about developers here, either. Testers and graphic designers can play an active and valuable role in this process, as well as the customer, of course. They should take an active interest in establishing the design of use cases, in designing UI storyboards and screenflows, and in designing good acceptance tests that will effectively constrain our designs to what will meet the customer's real needs.

That's why I love this workshop. You get a buzz and an energy in the room, and a real sense of "stuff happening" and of progress being made. And it incorporates disciplines like continuous integration, TDD and BDD (or, as I know it, "TDD with a B instead of a T"), making it a much closer fit to real-world Agile Software Development.






June 16, 2011

Learn TDD with Codemanship

New Foreword For People Reading My Java OCL Tutorial

I checked the web stats for the recently relocated parlezuml.com site, and was slightly dismayed to discover that the second most popular tutorial - after an introduction to use cases - is not something useful like TDD or refactoring, but (the horror!) the Object Constraint Language. What hast thou wrought!

Let's face it, people only learn OCL to pass exams on OCL. In the real world, we don't use it.

So I've added a little foreword to the Java OCL tutorial (nobody seems that interested in the .NET version, giving even stronger indications that it's being downloaded by university students and academics). Hopefully it will balance things out a little, and assuage some of my guilt.

If you're reading this tutorial, you're probably either studying for or teaching a computing or software engineering academic qualification.

How do I know this?

Simple. In the real world, almost nobody uses OCL. And by "almost nobody", I mean maybe 1 in 100 software professionals may have learned it. And maybe 1 in 100 of them ever uses it. Learning OCL very probably is not going to get you a job as anything other than someone who teaches OCL.

I believe it is a useful skill to have if you want to really get to grips with UML, but I can state categorically, with my hand on my heart, that I have used OCL in anger maybe twice in my entire career.

Knowing OCL will not make you a better software developer, and you are unlikely to work with other software developers who know OCL, rendering it useless as a communication tool.

You may have been told about Model-driven Architecture. Back in 2000, it was going to be the next big thing. It wasn't.

On 99.9% of professional software projects, we still type code in a third-generation language into a text editor. Occasionally we draw UML diagrams on whiteboards when we want to visualise a design or analysis concept. You will find that some UML notations are still in widespread use - especially class and sequence diagrams, and activity diagrams for workflow analysis.

Consider OCL as being a classical language, like Latin or Ancient Greek.

It's useful to know, as it can give you some general background when applying things like Design By Contract, or even for functional programming. But it is, to all intents and purposes, a dead language.

Trust me, almost nobody out here speaks it.

Having said that, I hope you find this tutorial useful in passing your exams. And I look forward to maybe teaching you some useful skills - like test-driven development or refactoring - when you graduate and join the community of professional software developers.

Best wishes,


Jason Gorman






March 30, 2011

Learn TDD with Codemanship

Visual Models Are Useful, Especially In Those Early Stages

I've been thrown back into the world of teaching UML today, running a 2-day workshop in Agile Design for the nice folks at BBC News & Knowledge.

The first point I wanted to make to them was that, although we'd be using some UML notations in the workshop - and to some extent the point of the workshop was to learn these basic notations - we shouldn't fall into any of the bear traps that have been set for the unwary visual modeler.

Trap #1 is that "Design = UML". I've worked with people who claimed that "our developers don't do any design" because they didn't produce evidence in the shape of UML diagrams. This is nonsense. If the developers genuinely did "no design", then their code would just be random characters. The fact that their code compiles, has a shape, and isn't just random ASCII noise suggests that someone, somewhere made some design decisions. Yes, proponents of Intelligent Design will be delighted to learn that code that compiles, works and makes some kind of sense displays compelling evidence of the influence of an intelligent designer.

I'm not a great believer in using models to justify design decisions. In my experience, if a tree falls in the forest but there's no class diagram to document it, the tree does make a sound. Good design is good design, with or without the necessary paperwork.

Trap #2 is the perennial misconception of young and inexperienced Agilistas everywhere: "UML = Big Design Up-Front". This is as misguided as believing that "Ruby = BDD". It's true that the UML user community has earned a reputation for BDUF. But then, the C++ community earned a reputation for premature optimisation. C++ does not require us to optimise early, nor Ruby to write acceptance tests up-front. And UML is just a language, like Ruby and C++. You can be as agile as you like. Do as much or as little up-front design as you want. Go into as much or as little detail as you need. My advice has always been "if a diagram helps, use one".

Trap #3 is kind of related to the first 2 traps, and is potentially the most lethal: "UML is a design process". Meaning that folk learn on a course like this one to follow a certain thought process, and mistake a sequence of UML diagrams for that thought process. So, even if they know what code they're going to write, they draw the diagrams anyway. "Good design is a use case model followed by some sequence diagrams followed by some class diagrams". No, good design is good design. The evidence for good design is in your software. Use a UML diagram (or a Venn diagram, or a Mind map - whatever tells the story best) if and only if you think it'll help you understand something better or communicate something better to other people in your team.

The BBC N&K folk are a smart bunch, and I think they've steered their way through the bear traps quite impressively. I'll be surprised if they go back and say "so when you pick up a user story, draw one of these diagrams and then one of these diagrams and then one of these diagrams before you write any code".

The modus operandi in the workshop, my first using UML in anger for several years, is that everyone works as one team, but split into pairs/threes, and each pair is working on one user story for a community video library application. So one pair is working on "As a video club member, I want to donate a DVD" while another is working on "As a video club member, I want to borrow a DVD". All this is happening in parallel. You know how clumsy and awkward it can be when teams start work on a new application. We tend to trip over each other's feet as we work on the same core classes to implement our part of the functional puzzle. I watched a team of similar size working for a large and reputable Agile development shop end up with several architectures and different versions of the same objects in the code because they didn't collaborate and co-ordinate anywhere near enough over design in those early stages.

The N&K folks each did some modeling to figure out what objects were involved in their scenario (defined as an acceptance test in the popular "given... when... then..." style), what each object was doing and how the objects were collaborating to complete the work - and therefore pass the acceptance test. They only modeled what they needed for that test. And then they automated their acceptance test and wrote the code based on - but not blindly following - their high-level designs.
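
For a flavour of what one of those automated acceptance tests might have looked like (a sketch with made-up class names, not the actual workshop code), here's the donate-a-DVD story expressed as a JUnit test with the given/when/then structure in comments:

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;
    import java.util.ArrayList;
    import java.util.List;

    public class DonateDvdAcceptanceTest {

        // Minimal hypothetical domain classes, just enough to make the scenario run.
        static class Dvd {
            final String title;
            Dvd(String title) { this.title = title; }
        }

        static class Library {
            private final List<Dvd> catalogue = new ArrayList<Dvd>();
            void accept(Dvd dvd) { catalogue.add(dvd); }
            boolean stocks(Dvd dvd) { return catalogue.contains(dvd); }
        }

        static class Member {
            void donate(Dvd dvd, Library library) { library.accept(dvd); }
        }

        @Test
        public void memberDonatesADvd() {
            // Given a video club member and the community library
            Library library = new Library();
            Member member = new Member();
            Dvd dvd = new Dvd("Some Film");

            // When the member donates the DVD
            member.donate(dvd, library);

            // Then the DVD is added to the library's catalogue
            assertTrue(library.stocks(dvd));
        }
    }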

Some of the pairs discovered as they coded that their models were not quite what they needed, and new features emerged from that feedback. At this point, their code was the definitive model, and that model was evolving. And I didn't make them update their diagrams. Shock, horror!

Refactoring also helped the model - in the code - to evolve. For example, I spotted a for loop in one acceptance test (which we wrote in JUnit because we were just working on the "back end" on day #1 - UI tomorrow) which displayed feature envy for one of the model classes. So they extracted that block of code into a method and moved it onto the model class.
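
Something along these lines (a before-and-after sketch with made-up names, not the code the pairs actually wrote): the test was doing a model class's work for it, so the loop was extracted into a method and moved onto that class.

    // Before: the test loops over the member's loans itself - feature envy
    // for the Member class, whose data it keeps reaching into.
    @Test
    public void memberCanBorrowUpToThreeDvds() {
        int onLoan = 0;
        for (Loan loan : member.getLoans()) {
            if (!loan.isReturned()) {
                onLoan++;
            }
        }
        assertTrue(onLoan < 3);
    }

    // After: Extract Method, then Move Method - the counting now lives on
    // Member, next to the data it uses, and the test reads like the requirement.
    class Member {
        private final List<Loan> loans = new ArrayList<Loan>();

        int dvdsOnLoan() {
            int onLoan = 0;
            for (Loan loan : loans) {
                if (!loan.isReturned()) {
                    onLoan++;
                }
            }
            return onLoan;
        }
    }

    @Test
    public void memberCanBorrowUpToThreeDvds() {
        assertTrue(member.dvdsOnLoan() < 3);
    }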

4 pairs co-ordinating around a handful of classes to implement 4 different scenarios, and working code passing their acceptance tests by the end of the first day. I've seen it work well, and I've seen some train wrecks. And I've noticed that visual modeling - be it using UML diagrams or whatever - is a factor in deciding that outcome. Making your thinking visible, especially in those early stages of design when we know the least and the team hasn't established a shared language for the thing they're building, helps. It really does.

I'm looking forward to adding user interfaces to these back ends to see whether, when we join all the MVC dots, we end up with the software we wanted.



December 5, 2010

Learn TDD with Codemanship

Codemanship Presents "Tell, Don't Ask"



Jason Gorman explains the "Tell, Don't Ask" approach to OO design, recently re-popularized in the excellent book Growing Object-Oriented Software, Guided by Tests.
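
In a nutshell (illustrative Java, not taken from the book or the talk): rather than asking an object for its data and making decisions on its behalf, we tell it what we want done and let it apply its own rules.

    // "Ask" style: the calling code pulls data out and makes the decision itself.
    if (account.getBalance() >= amount) {
        account.setBalance(account.getBalance() - amount);
    }

    // "Tell, Don't Ask" style: the calling code simply tells the object what to do.
    account.debit(amount);

    class Account {
        private int balance;

        void debit(int amount) {
            // The rule lives with the data it depends on.
            if (balance < amount) {
                throw new IllegalStateException("Insufficient funds");
            }
            balance -= amount;
        }
    }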

This is a preview of topics covered in the final Test-driven Development master class, which will be in London on Jan 8-9. A quick reminder: the Early Bird offer (all 3 master classes for the price of 2) ends on Dec 11th.




July 12, 2010

Learn TDD with Codemanship

Object Oriented Design Master Class, London Aug 21-22

This summer's run of budget-friendly weekend master classes in software craftsmanship ends with a workshop on object oriented design on August 21-22.


In this very hands-on course, you'll learn about OO design principles (S.O.L.I.D. and more), using real code examples to illustrate how to refactor designs that violate these principles. You'll also get practical experience of using code analysis tools to measure OO design quality and detect problems more easily, even in large code bases. The workshop will also give you a chance to apply simple OO analysis and design techniques in a test-driven approach to development, as well as looking at how up-front design can be balanced with refactoring to produce optimal designs without falling into the shark-infested waters of "Big Design Up-Front" or "no conscious design at all".
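
For a taste of the sort of refactoring the workshop deals with (a made-up example, not actual course material): a method that switches on a type code violates the Open-Closed Principle, because every new kind of customer means editing it; replacing the conditional with polymorphism lets us add new kinds without touching existing code.

    // Before: adding a new customer type means changing this method.
    double discountFor(String customerType) {
        if (customerType.equals("GOLD")) {
            return 0.1;
        }
        return 0.0;
    }

    // After: each customer type answers for itself; new types are added
    // by writing new classes, not by editing old ones.
    interface Customer {
        double discount();
    }

    class StandardCustomer implements Customer {
        public double discount() { return 0.0; }
    }

    class GoldCustomer implements Customer {
        public double discount() { return 0.1; }
    }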


As with all the workshops I'm running this summer, there'll be the minimum of talk from me and the maximum of hands-on practical experience and learning by doing, working in pairs with other attendees.


And - as if the price wasn't good enough already - everyone who attends both the TDD and refactoring courses will get to go on this one absolutely FREE. I must be mad! Book your place now before I change my mind!



June 1, 2010

Learn TDD with Codemanship

Budget-Friendly Weekend Masterclasses in TDD, Refactoring and OO Design for Summer 2010

Times are hard.

They're especially tough for freelancers at the moment. Those of us lucky enough to be working can ill afford to buy expensive training, let alone take time off to attend courses.

But at the same time, competition for work is heating up, and we need to work harder than ever to keep up our knowledge and skills.

So it's no surprise that a trend's emerging for budget-priced training delivered out of office hours.

Over the last decade, I've run premium masterclasses in Test-driven Development, Refactoring, OO design and UML. And this summer, I'll be delivering the same workshops - all very hands-on, practical and based on two decades of real-world experience - at budget-friendly prices and on weekends, so they can easily fit around your day job. The first course is likely to be a TDD masterclass in early July. It'll cost just £199 for 2 packed days, delivered in a purpose-built IT training facility just a brisk riverside stroll from the City and West End.

So if you can make it to London one weekend, and fancy grabbing yourself a bargain, drop me an email and I'll let you know as soon as dates are confirmed.