April 26, 2017

Learn TDD with Codemanship

20 Dev Metrics - 10. Duplication

Number 10 in my series 20 Dev Metrics is another contributor to the higher cost of changing code: duplication.

When we repeat code, we multiply the cost of changing any of that common logic. We also potentially multiply instances of the same bug. (It's not like in your school exams, where repeating an error you already made doesn't cost you any more marks. Repeated bugs hurt just as much.)

For this reason, we seek to minimise code duplication - except when it makes the code easier to understand.

Thankfully, we don't have to go trawling through our projects, comparing every block of code to every other block of code. There are tools that can do this for us, like ConQAT and Simian.

The most interesting thing about duplicate code is the hints it gives us about generalisations and abstractions that would improve our designs - which is why, when refactoring code, duplication is a good thread to pull on.
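
To illustrate the kind of generalisation duplicate code can point us towards, here's a small hypothetical sketch (in Python, for brevity - the names and discount rates are invented for illustration):

```python
# Before: the same discount logic repeated for two customer types.
def student_price(price):
    return price - price * 0.10

def senior_price(price):
    return price - price * 0.15

# After: the duplication hints at a general abstraction - a discount rate.
def discounted_price(price, rate):
    return price - price * rate

STUDENT_RATE = 0.10
SENIOR_RATE = 0.15
```

Now a change to how discounts are calculated happens in one place, and a bug fixed there is fixed everywhere.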

January 20, 2017

TDD 2.0 - London, May 10th

After the success of last week's TDD 2.0 training workshop, I've immediately scheduled another one for the Spring.

It's 3-days jam-packed with hands-on learning and practice, covering everything from TDD basics and customer-driven TDD/BDD, all the way to advanced topics other courses and books don't touch on like mutation testing and non-functional TDD.

And it comes with my new TDD book, exclusive to attendees.

If you fancy a code craft skills boost, twist the boss's arm and join us on May 10th.

January 14, 2017

Codemanship Alumni - LinkedIn Group

Just a quick note to mention that there's a special LinkedIn group for folk who've attended Codemanship training workshops*.

With demand for skills like TDD and refactoring rising rapidly, membership is something you can display proudly for interested hirers.

* You'll need to list details of Codemanship training courses you've attended (what, when, where) on your LinkedIn profile so I can check against our training records.

January 9, 2017

Last Call for TDD 2.0 - London, Wed-Fri

Just a quick note to mention I've got a bit of space available for this week's jam-packed TDD 2.0 training workshop in London.

January can be a quiet period for dev teams, so twist the boss's arm and take advantage of some slow time.


December 19, 2016

Start The New Year With A TDD Skills Boost

Just a quick reminder about the next public TDD training workshop I'm running in London on Jan 11-13. It's the most hands-on and practical workshop I've ever done, and at about half the price of the competition, it's great value. And, of course, you get the exclusive TDD book. Not available in any shops!

November 8, 2016

Business Benefits of Continuous Delivery: We Need Hard Data

Something that's been bugging me for a while is our apparent lack of attention to the proclaimed business benefits of Continuous Delivery.

I'm not going to argue for one second that CD doesn't have business benefits; I'm a firm believer in the practice myself. But that's just it... I'm a believer in the business benefits of Continuous Delivery. And it's a belief based on personal and anecdotal experience, not on a good, solid body of hard evidence.

I had naturally assumed that such evidence existed, given that the primary motivation for CD, mentioned over and over again in the literature, is the reduced lead times on delivering feature and change requests. It is, after all, the main point of CD.

But where is the data that supports reduced lead times? I've looked, but not found it. I've found surveys about adopting CD. I've found proposed metrics, but no data. I've found largely qualitative studies of one or two organisations. But no smoking gun, as yet.

There's a mountain of data that backs up the benefits of defect prevention, but the case for CD currently rests on little more than smoke.

This, I reckon, we need to fix. It's a pillar on which so much of software craftsmanship and Agile rests: delivering working software sooner (and for longer).

Anything that supports the case for Continuous Delivery indirectly supports the case for Continuous Integration, TDD, refactoring, automation, and a bunch of other stuff we believe is good for business. And as such, I think we need that pillar to be unassailably strong.

We need good data - not from surveys and opinion polls - on lead times that we can chart against CD practices so we can build a picture of what real, customer-visible impact these practices have.

To be genuinely useful and compelling, it would need to come from hundreds of places and cover the full spectrum of Continuous Delivery from infrequent manual builds with infrequent testing and no automation, to completely automated Continuous Deployment several times a day with high confidence.

One thing that would be of particular interest to Agile mindsets is how lead times change over time. As the software grows, do lead times get longer? What difference does, say, automated developer testing make to the shape of the curve?

Going beyond that, can we understand what impact shorter lead times can have on a business? Shorter lead times, in and of themselves, have no value. The value is in what they enable a business to do - specifically, to learn faster. But what, in real terms, are the business benefits of learning faster? How would we detect them? Are businesses that do CD outperforming competitors who don't in some way? Are they better at achieving their goals?

Much to ponder on.

October 12, 2016

TDD 2.0 - London, Jan 11th

Here's an idea for what the boss could get your team for Christmas; the first full TDD 2.0 public workshop is happening in London on January 11-13.

There's a 1, 2 and 3-day option, and every attendee gets a copy of the exclusive new TDD book to take away and continue their TDD journey with.

October 9, 2016

TDD 2.0 Launches

Yesterday saw the launch of the new Codemanship TDD training workshop in London. Thirty keen code crafters joined in South Wimbledon to put themselves through their TDD paces, and get their hands on my new book, exclusive to workshop attendees.

It was a packed day - maybe a bit too packed, with everything we crammed in - and everyone rose to the challenge admirably. As always, it's enormously rewarding to spend a day with developers who care about their craft.

The full version of the workshop comes in 1, 2 and 3-day varieties, with more time to explore each exercise, and at a more leisurely pace. Every attendee gets a copy of the 200-page book, which covers everything from TDD basics (red-green-refactor) to advanced topics like property-based testing and non-functional TDD. Further reading's clearly signposted, making the book and the workshop a great gateway into a lifetime of TDD study and practice, as well as a pick-me-up for experienced TDD practitioners, with many ideas not usually covered in TDD books and courses.

You can find out more about the workshop, and download the first seven chapters of the book for free, by visiting http://www.codemanship.com/tdd.html

September 8, 2016

Introduction to TDD - "Brown Bag" Sessions

With my new training companion book TDD launching next month, I thought it might be fun to offer some convenient "brown bag" sessions where folks can get a quick practical introduction to TDD and a copy of the book to take away.

Feedback via the Twitters suggests some of you are interested, and now I want to flesh the idea out a bit with some more details.

I want to get folk fired up about learning TDD, and the book can help them take the next steps on that journey.

The basic idea is that you invite me into your office in London* between 12pm and 2pm, or after work between 6pm and 8pm. You'll need a room or space for everyone, with a projector or big TV I can plug my laptop into. We'll do some TDD together, and you'll need at least one computer between every two people, as you'll be working in pairs. I'll code and talk, and we'll get straight into it - no time for dilly-dallying.

The session will last one hour, and attendees will get a copy of the TDD book, worth £30.

I would suggest there needs to be a minimum of 4 attendees, and pricing would be as follows:

For 4 people - £95/person

5-8 people - £85/person

9+ people - £75/person

In that hour, we'll cover:

* Why do TDD?
* Red-Green-Refactor basics
* The Golden Rule
* Working backwards from assertions
* Refactoring to parameterised tests
* Testing your tests

There'll be NO SLIDES. It'll be 100% hands-on. I'll do the practical stuff in Java or C#, but you can do it in any programming language you like, provided you have the appropriate tools: an IDE/editor and an implementation of xUnit - preferably one that supports parameterised tests, or has an add-on that does. Mocking frameworks will not be required for this introduction.

You can grab yourself a free preview of the book, including the first 7 chapters, from http://codemanship.co.uk/tdd.html

More details soon on how to book. But if you're interested in me running a TDD "brown bag" where you work, drop me a line and we can get the ball rolling now.

* If there's enough demand, I'll do tours of other cities

August 31, 2016

Slow Running/Unmaintainable Automated Tests? Don't Ditch Them. Optimise Them.

It's easy for development teams to underestimate the effort they need to invest in optimising their automated tests for both maintainability and performance.

If your test code is difficult to change, your software is difficult to change. And if your tests take a long time to run - it's not unheard of for teams to have test suites that take hours - then you won't run them very often. Slow-running tests can be a blocker to Continuous Delivery, because Continuous Delivery requires Continuous Testing to be confident that the software is always shippable.

It's very tempting, when your tests are slow and/or difficult to change, to delete the source of your pain. But the reality is that these unpleasant side effects pale in comparison to the effects of not having automated tests.

We know from decades of experience that bugs are more likely to appear in code that is tested less well and less often. Ditching your automated tests opens the flood gates, and I've seen many times code bases rapidly deteriorate after teams throw away their test suites. I would much rather have a slow, clunky suite of tests to run overnight than no automated tests at all. The alternative is wilful ignorance, which doesn't make the bugs go away, regrettably.

Don't give in to this temptation. You'll end up jumping out of the frying pan and into the fire.

Instead, look at how the test code could be refactored to better accommodate change. In particular, focus on where the test code is directly coupled to the classes (or services, or UIs) under test. Time after time, I see massive amounts of duplicated interaction code. Refactoring this duplication so that interactions with the classes under test happen in one place can dramatically reduce the cost of change.
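
As a hypothetical sketch of what that refactoring might look like (Python used for brevity - the ShoppingBasket example is invented for illustration), interactions with the class under test move into a single helper, so a change to its interface touches one method instead of dozens of tests:

```python
import unittest

class ShoppingBasket:
    def __init__(self):
        self.items = []

    def add(self, name, price, quantity=1):
        self.items.append((name, price, quantity))

    def total(self):
        return sum(price * qty for _, price, qty in self.items)

class BasketTests(unittest.TestCase):
    # One place that knows how to set up and drive the object under test.
    # If ShoppingBasket's interface changes, only this helper changes.
    def basket_with(self, *items):
        basket = ShoppingBasket()
        for name, price, qty in items:
            basket.add(name, price, qty)
        return basket

    def test_total_of_single_item(self):
        self.assertEqual(5.0, self.basket_with(("tea", 5.0, 1)).total())

    def test_total_includes_quantities(self):
        self.assertEqual(12.0, self.basket_with(("tea", 5.0, 2),
                                                ("milk", 2.0, 1)).total())
```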

And if you think you have too many individual tests, parameterized tests are a criminally under-utilised tool for consolidating multiple test cases into a single test method. You can buy yourself quite staggering levels of test assurance with surprisingly little test code.
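
For example - a hypothetical sketch in Python, using unittest's subTest to stand in for a parameterised xUnit test - multiple test cases collapse into a single test method driven by a table of inputs and expected outputs:

```python
import unittest

def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class FizzBuzzTests(unittest.TestCase):
    def test_fizzbuzz(self):
        # One test method, many cases - each row is effectively a test.
        cases = [(1, "1"), (2, "2"), (3, "Fizz"), (5, "Buzz"),
                 (6, "Fizz"), (10, "Buzz"), (15, "FizzBuzz")]
        for n, expected in cases:
            with self.subTest(n=n):
                self.assertEqual(expected, fizzbuzz(n))
```

Seven test cases, one test method - and adding an eighth costs one line.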

When tests run slow, that's usually down to external dependencies. System tests, for example, tend to bring all of the end-to-end architecture into play as they're executed: databases, files, web services, and so on.

A clean separation of concerns in your test code can help bring the right amount of horsepower to bear on the logic of your software. A system test that checks a calculation is done correctly using data from a web service doesn't really need that web service in order to do the check. Indeed, it doesn't need to be a system test at all. A fast-running unit test for the module that does that work will be just spiffy.

Fetching the data and doing the calculation with it are two separate concerns. Have a test for one that gets test data from a stub. And another test - a single test - that checks that the implementation of that stub's interface fetches the data correctly.
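
Here's a hypothetical sketch of that separation (Python, with all the names invented for illustration): the calculation depends on an abstraction it can be handed a stub for, so the unit test never touches the real web service:

```python
class ExchangeRates:
    """The interface the calculation depends on. The production
    implementation would call the real web service; tests don't need it."""
    def rate(self, from_currency, to_currency):
        raise NotImplementedError

class PriceConverter:
    def __init__(self, rates):
        self.rates = rates  # any ExchangeRates implementation

    def convert(self, amount, from_currency, to_currency):
        return amount * self.rates.rate(from_currency, to_currency)

# In the fast-running unit test, a stub supplies canned data:
class StubRates(ExchangeRates):
    def rate(self, from_currency, to_currency):
        return 1.25
```

One separate (and slower) integration test checks that the real ExchangeRates implementation fetches data correctly; the many tests of the conversion logic all run against the stub.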

We tend to find that interactions with external dependencies form a small minority of our software's logic, and should therefore only require a small minority of the tests. If the design of the software doesn't allow separation of concerns (stubbing, mocking, dummies), refactor it until it does. And, again, the mantra "Don't Repeat Yourself" can dramatically reduce the amount of integration code that needs to be tested. You don't need database connection code for every single entity class.
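
The "don't repeat your integration code" idea can be sketched - hypothetically, in Python with its built-in sqlite3 module - as a single gateway that owns all the connection plumbing, which every entity class then reuses:

```python
import sqlite3

class Database:
    """One place that owns connection handling and SQL plumbing.
    Entity-specific code passes in SQL and parameters; it never
    opens connections or builds cursors itself."""
    def __init__(self, path=":memory:"):
        self.connection = sqlite3.connect(path)

    def execute(self, sql, params=()):
        with self.connection:  # commits on success
            return self.connection.execute(sql, params)

    def fetch_all(self, sql, params=()):
        return self.execute(sql, params).fetchall()

# Entity code shrinks to the bits that are actually entity-specific:
db = Database()
db.execute("CREATE TABLE customers (name TEXT)")
db.execute("INSERT INTO customers VALUES (?)", ("Alice",))
```

Now there's one piece of connection code to integration-test, rather than one per entity class.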

Of course, for a large test suite, this can take a long time. And too many teams - well, their managers, really - balk at the investment they need to make, choosing instead to either live with slow and unmaintainable tests for the lifetime of the software (which is typically a far greater cost), or worse still, to ditch their automated tests.

Because teams who report "we deleted our tests, and the going got easier" are really only reporting a temporary relief.