December 2, 2018


Architecture: The Belated Return of Big Picture Thinking

A question that's been stalking me is "When does architecture happen in TDD?"

I see a lot of code (a LOT of code) and if there's a trend I've noticed in recent years it's an increasing lack of - what's the word I'm looking for? - rationality in software designs as they grow.

When I watch dev teams produce working software (well, the ones who do produce software that works, at least), I find myself focusing more and more on when the design decisions get made.

In TDD, we can make design decisions during four distinct phases of the red-green-refactor cycle:

1. Planning - decisions we make before we write any code (e.g., a rough sequence diagram that realises a customer test scenario)

2. Specifying - decisions we make while we're writing a failing test (e.g., calling a function to do what you need done for the test, and then declaring it in the solution code)

3. Implementing - decisions we make when we're writing the code to pass the test (e.g., using a loop to search through a list)

4. Refactoring - decisions we make after we've passed the test according to our set of organising principles (e.g., consolidating duplicate code into a reusable method)

If you're a fan of Continuous Delivery like me, then a central goal of the way you write software is that it should be (almost) always shippable. Since 2 and 3 imply not-working code, that suggests we'd spend as little time as possible thinking about design while we're specifying and implementing. While the tests are green (1 and 4), we can consider design at our leisure.

I can break down refactoring even further, into:

4a. Thinking about refactoring

4b. Performing refactorings

Again, if your goal is always-shippable code, you'd spend as little time as possible executing each refactoring.

Put more bluntly, we should be putting the least thought into design while we're editing code.

(In my training workshops, I talk about Little Red Riding Hood and the advice her mother gave her to stay on the path and not wander off into the deep dark forest, where dangers like Big Bad Wolves lurk. Think of working code as the path, and not-working code as the deep dark forest. I encourage developers to always keep at least one foot on the path. When they step off to edit code, they need to step straight back on as quickly as possible.)

Personally - and I've roughly measured this - I make about two-thirds of design decisions during refactoring. That is, roughly 60-70% of the "things" in my code - classes, methods, fields, variables, interfaces etc - appear during refactoring:

* Extracting methods, constants and such to more clearly document what code does

* Extracting methods and classes to consolidate duplicate code

* Extracting classes to eliminate Primitive Obsession (e.g., IF statements that hinge on what is obviously an object identity represented by a literal value - see the sketch after this list)

* Extracting and moving methods to eliminate Feature Envy in blocks of code and expressions

* Extracting methods and classes to split up units of code that have > 1 reason to change

* Extracting methods to decompose complex conditionals

* Extracting client-specific interfaces

* Introducing parameters to make dependencies swappable

And so on and so on.
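
To make one of those concrete, here's a minimal before-and-after sketch of extracting a class to eliminate Primitive Obsession - a hypothetical invoice example, not from any real codebase:

    // Before: behaviour hinges on an object identity hiding in a string literal.
    class InvoiceBefore {
        String status = "OPEN"; // "OPEN", "PAID" or "OVERDUE"

        double lateFee() {
            if (status.equals("OVERDUE")) {
                return 25.0;
            }
            return 0.0;
        }
    }

    // After: the literals are extracted into a type of their own, and the
    // behaviour that hinged on them moves to live with it.
    enum InvoiceStatus {
        OPEN, PAID, OVERDUE;

        double lateFee() {
            return this == OVERDUE ? 25.0 : 0.0;
        }
    }

    class Invoice {
        InvoiceStatus status = InvoiceStatus.OPEN;

        double lateFee() {
            return status.lateFee();
        }
    }

The magic string - and every conditional that hinged on it - becomes a named, compiler-checked thing.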

By this process, my code tends to grow and divide like cells with each new test. A complex order emerges from simple organising principles about readability, complexity, duplication and dependencies being applied iteratively over and over again. (This is perfectly illustrated in Joshua Kerievsky's Refactoring to Patterns.)

I think of red-green-refactor as the inner loop of software architecture. And lots of developers do this. (Although, let's be honest, too many devs skimp on the refactoring.)

But there's architecture at higher levels of code organisation, too: components, services, systems, systems of systems. And they, too, have their organising principles and patterns, and need their outer feedback loops.

This is where I see a lot of teams falling short. Too little attention is paid to the emerging bigger picture. Few teams, for example, routinely visualise their components and the dependencies between them. Few teams regularly collaborate with other teams on managing the overall architecture. Few devs have a clear perspective on where their work fits in the grand scheme of things.

Buildings need carpentry and plumbing. Roads need tarmacking. Sewers need digging. Power lines need routing.

But towns need planning. Someone needs to keep an eye on how the buildings and the roads and the sewers and the power lines fit together into a coherent whole that serves the people who live and work there.

Now, I come from a Big Architecture™ background. And, for all the badness that we wrought in the pre-XP days, one upside is that I'm a bit more Big Picture-aware than a lot of younger developers seem to be these days.

After focusing almost exclusively on the inner loop of software architecture for the last decade, starting in 2019 I'm going to be trying to help teams build a bit of Big Picture awareness and bring more emphasis on the outer feedback loops and associated principles, patterns and techniques.

The goal here is not to bring back the bad old days, or to resurrect the role of the Big Architect. And it's definitely not to try to reanimate the corpse of Big Design Up-Front.

This is simply about nurturing some Big Picture awareness among developers and hopefully reincorporating the outer feedback loops into today's methodologies, which we misguidedly threw out with the bathwater during the Agile Purges.

And, yes, there may even be a bit of UML. But just enough, mind you.





October 12, 2018


TDD Training - Part I (Classic TDD), London, Sat Dec 1st

My flagship Codemanship TDD training course returns in a series of 3 standalone Saturday workshops aimed at self-funding learners.

It's the exact same highly popular training we've delivered to more than 2,000 developers since 2009, with 100% hands-on learning reinforced by our jam-packed 200-page TDD course book.

Part 1 is on Saturday Dec 1st in central London, and it's amazingly good value at just £99.

Part I goes in-depth on "classic" TDD, the super-important refactoring discipline, and software design principles you can apply as your code grows and evolves - keeping it easy to change so you can maintain the pace of development.

  • Why do TDD?

  • An introduction to TDD

  • Red, Green, Refactor

  • The Golden Rule

  • Working backwards from assertions

  • Testing your tests

  • One reason to fail

  • Writing self-explanatory tests

  • Speaking the customer's language

  • Triangulating designs

  • The Refactoring discipline

  • Software Design Principles
    • Simple Design

    • Tell, Don’t Ask

    • S.O.L.I.D.




The average price of a public 1-day dev training course, per person, is around £600-800. This is fine if your company is picking up the tab.

But we've learned over the years that many devs get no training paid for by their employer, so we appreciate that many of you are self-funding your professional development. Our Saturday workshops are priced to be accessible to professional developers.

In return, developers who've attended our weekend workshops have recommended us to employers and colleagues, and most of the full-price client-site training and coaching we do comes via these referrals.

Please be advised that we do not allow corporate bookings on our workshops for self-funders. Group bookings are limited to a maximum of 4 people. If you would like TDD training for your team(s), please contact me at jason.gorman@codemanship.com to discuss on-site training.

Find out more at the Eventbrite course page


October 6, 2018


Be The Code You Want To See In The World

It's no big secret that I'm very much from the "Just Do It" school of thought on how to apply good practices to software development. I meet teams all the time who complain that they've been forbidden to do, say, TDD by their managers. My answer is always "Next time, don't ask".

After 25 years doing this for a living, much of that devoted to mentoring teams in the developer arts, I've learned two important lessons:

1. It's very difficult to change someone's mind once it's made up. I wasted a lot of time "selling" the benefits of technical practices like unit testing and refactoring to people who were never going to try them, no matter how much evidence or logic I offered. It's one of the reasons I don't do much conference speaking these days.

2. The best strategies rely on things within our control. Indeed, strategies that rely on things beyond our control aren't really strategies at all. They're just wishful thinking.

The upshot of all this is an approach to working that has two core tenets:

1. Don't seek permission

2. Do what you can do

Easy to say, right? It does imply that, as a professional, you have control over how you work.

Here's the thing: as a professional, you have control over how you work. It's not so much a matter of getting that control, as recognising that - in reality - because you're the one writing the code, you already have that control. Your boss is very welcome to write the code themselves if they want it done their way.

Of course, with great power comes great responsibility. You want control? Take control. But be sure to be acting in the best interests of your customer and other stakeholders, including the other developers on your team. Code is something you inflict on people. Do it with kindness.

And so there you have it. A mini philosophy. Don't rant and rave about how code should be done. Just do it. Be the code you want to see in the world.

Plenty of developers talk a good game, but their software tells a different story. It's often the case that the great and worthy and noble ideas you see presented in books and at conferences bear little resemblance to how their proponents really work. I've been learning, through Codemanship, that it's more effective to show teams what you do. Talk is cheap. That's why my flagship TDD workshop doesn't have any slides. Every idea is illustrated with real code, every practice is demonstrated right in front of you.

And there isn't a single practice in any Codemanship course I haven't applied many times on real software for real businesses. It's all real, and it all really works in the real world.

What typically prevents teams from applying them isn't their practicality, or how difficult they are to learn. (Although don't underestimate the learning curves.) The obstacles are normally whether they have the will to give it a proper try, and tied up in that, whether they're allowed to try it.

My advice is simple: learn to do it under the radar, in the background, under the bedsheets with a torch, and then the decision to apply it on real software in real teams for real customers will be entirely yours.




October 1, 2018


50% Off Codemanship Training for Start-ups and Charities

One of the most fun aspects of running a dev training company is watching start-ups I helped a few years ago go from strength to strength.

The best part is seeing how some customers are transforming their markets (I don't use the "d" word), and reaping the long-term benefits of being able to better sustain the pace of innovation through good code craft.

I want to do more to help new businesses, so I've decided that - as of today - start-ups less than 5 years old, with fewer than 50 employees, will be able to buy Codemanship code craft training half-price.

I'm also extending that offer to non-profits: registered charities will likewise be able to buy Codemanship training for just 50% of the normal price.


September 28, 2018


Micro-cycles & Developing Your Inner Egg Timer

When I'm coaching developers in TDD and refactoring, I find it important to stress the benefits of keeping one foot on the path of working code at all times.

I talk about Little Red Riding Hood, and how she was warned not to stray off the path into the deep dark forest. Bad things happen in the deep dark forest. Similarly, I warn devs to stay on that path of code that works - code that's shippable - and not go wandering off into the deep dark forest of code that's broken.

Of course, in practice, we can't change code without breaking it. So the real skill is in learning how to make the changes we need to make by briefly stepping off the path and stepping straight back on again.

This requires developers to build a kind of internal egg timer that nudges them when they haven't seen their tests pass in too long.



An exercise I've used to develop my internal egg timer uses a real egg timer (or the timer on my smartphone). When I'm mindfully practising refactoring, for example, I'll set a timer to count down 60 seconds, and start it the moment I edit any code.

The moment a source file goes "dirty" - no longer compiles or no longer passes the tests - the countdown starts. I have to get back to passing tests before the sands run out (or the alarm goes off).

I'll do that for maybe 10-15 minutes, then I'll drop the countdown to 50 seconds and do another 10-15 minutes. Then 40 seconds. Then 30. Always trying, as best I can, to get what I need to do done and get back to passing tests before the countdown ends.

I did this every day for about 45-60 minutes for several months, and what I found at the end was that I'd grown a sort of internal countdown. Now, when I haven't seen the tests pass for a few minutes, I get a little knot in my stomach. It makes me genuinely uncomfortable.

I do a similar exercise with TDD, but the countdowns apply the moment I have a failing test. I have 60 seconds to make the test pass. Then 50. Then 40. Then 30. This encourages me to take smaller steps, in tighter micro-cycles.

If my test requires me to take too big a leap, I have to scale back or break it down to simpler steps to get me where I want to go.

The skill is in making progress with one foot firmly on the path of working code at all times. Your inner egg timer is the key.



September 25, 2018


Third-Generation Testing - Øredev 2018, Malmö, November 22nd

If you're planning on coming to Øredev in Sweden this November, I'm running a brand new training workshop on the final day about Third-Generation Software Testing.

First-generation testing was manual: running the program and clicking the buttons ourselves. We quickly learned that this was slow and often patchy, creating a severe bottleneck in development cycles.

Second-generation testing removed that bottleneck by writing code to test our code.

But what about the tests we didn't think of?

Exploratory testing brought us back to a manual process of exploring what else might be possible - what combinations of inputs, user actions and pathways - using the code we delivered, outside of the behaviours encoded in our automated tests.

Manual exploratory testing suffers from the same drawbacks as any other kind of manual testing, though. It's slow, and can miss heaps of cases in complex logic.

Third-generation testing automates the generation of the test cases themselves, enabling us to explore much wider state spaces than a manual process could ever hope to achieve. With a little extra test code, and a bit of ingenuity, you can explore thousands, tens of thousands, hundreds of thousands and even millions of extra test cases - combinations, paths, random inputs and ranges - using tools you already know.

In this workshop, we'll explore some simple techniques for adapting and reusing our existing unit tests to exhaustively test our critical code. We'll also look at techniques for identifying what code might need us to go further, and how we can use Cloud technology to execute millions of extra tests in minutes.

You can find out more and book your place at http://oredev.org/2018/sessions/third-generation-software-testing



September 24, 2018


Why I Throw Away (Most Of) My Customer Tests

There was a period about a decade ago, when BDD frameworks were all new and shiny, when some dev teams experimented with relying entirely on their customer tests. This predictably led to some very slow-running test suites, and an upside-down test pyramid.

It's very important to build a majority of fast-running automated tests to maintain the pace of development. Upside-down test pyramids become a severe bottleneck, slowing down the "metabolism" of delivery.

But it is good to work from precise, executable specifications, too. So I still recommend teams work with their customers to build a shared understanding of what is to be delivered, using tools like Cucumber and FitNesse.

What happens to these customer tests after the software's delivered, though? We've invested time and effort in agreeing them and then automating them. So we should keep them, right?

Well, not necessarily. Builders invest a lot of time and effort into erecting scaffolding, but after the house is built, the scaffolding comes down.

The process of test-driving an internal design with fast-running unit tests - by which I mean tests that ask one question and don't involve external dependencies - tends to leave us with the vast majority of our logic tested at that level. That's the base of our testing pyramid, just as it should be.

So I now have customer tests and unit tests asking the same questions. One of them is surplus to requirements for regression testing, and it makes most sense to retain the fastest tests and discard the slowest.

I keep a cherry-picked selection of customer tests just to check that everything's wired together right in my internal design - maybe a few dozen key happy paths. The rest get archived and quite possibly never run again, or certainly not on a frequent basis. They aren't maintained, because those features or changes have been delivered. Move on.




August 3, 2018


Keyhole APIs - Good for Microservices, But Not for Unit Testing

I've been thinking a lot lately about what I call keyhole APIs.

A keyhole API is the simplest API possible - one that presents the smallest "surface area" to clients while still being completely usable. This means there's a single function exposed, which has the smallest number of primitive input parameters - ideally one - and a single, simple output.

To illustrate, I had a crack at TDD-ing a solution to the Mars Rover kata, writing tests that only called a single method on a single public class to manipulate the rover and query the results.

You can read the code on my Github account.
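
By way of illustration, a keyhole API for the rover might look something like this - a simplified sketch with hypothetical names, not the actual code from the repo:

    // One public class, one public method: a primitive string of instructions
    // in, a simple string representation of the rover's state out.
    public class Rover {
        private int x = 0, y = 0;
        private char facing = 'N';

        public String go(String instructions) {
            for (char instruction : instructions.toCharArray()) {
                switch (instruction) {
                    case 'L': facing = turn("NWSE"); break; // anticlockwise
                    case 'R': facing = turn("NESW"); break; // clockwise
                    case 'F': move(1); break;
                    case 'B': move(-1); break;
                }
            }
            return x + " " + y + " " + facing;
        }

        private char turn(String order) {
            return order.charAt((order.indexOf(facing) + 1) % 4);
        }

        private void move(int step) {
            switch (facing) {
                case 'N': y += step; break;
                case 'S': y -= step; break;
                case 'E': x += step; break;
                case 'W': x -= step; break;
            }
        }
    }

Every test calls go() and asserts against the string that comes back; nothing else is visible to clients.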

This produces test code that's very loosely coupled to the rover implementation. I could have written test code that invokes multiple methods on multiple implementation classes. That would have made it easier to debug, for sure, because tests would pinpoint the source of errors more closely - but at the cost of tests that are tightly coupled to the internal design.

If we're writing microservices, keyhole APIs are - I believe - essential. We have to hide as much of the implementation as possible. Clients need to be as loosely coupled to the microservices they use as possible, including microservices that use other microservices.

I encourage developers to create these keyhole APIs for their components and services more and more these days. Even if they're not going to go down the microservice route, it's helpful to partition our code into components that could be turned into microservices easily, should the need arise.

Having said all that, I don't recommend unit testing entirely through such an API. I draw a distinction there: unit tests are an internal thing, a sort of grey-box testing. Especially important is the ability to isolate units under test from their external dependencies - e.g., by using mocks or stubs - and this requires the test code to know a little about those dependencies. I deliberately avoided that in my Mars Rover tests, and so ended up with a design where dependencies weren't easily swappable in this way.
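
To make that distinction concrete, here's a hypothetical sketch - my own names, not the Mars Rover repo code - of a grey-box unit test that knows about one dependency, the terrain, precisely so it can swap in a stub:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // The dependency the test needs to know a little about.
    interface Terrain {
        boolean obstacleAt(int x, int y);
    }

    // A rover that has its terrain injected, making it swappable.
    class TerrainAwareRover {
        private final Terrain terrain;
        private int x = 0, y = 0;

        TerrainAwareRover(Terrain terrain) {
            this.terrain = terrain;
        }

        String forward() {
            if (!terrain.obstacleAt(x, y + 1)) {
                y++;
            }
            return x + " " + y + " N";
        }
    }

    public class TerrainAwareRoverTest {
        @Test
        public void stopsShortOfAnObstacle() {
            Terrain rockAhead = (x, y) -> x == 0 && y == 1; // stub terrain
            assertEquals("0 0 N", new TerrainAwareRover(rockAhead).forward());
        }
    }

That little bit of extra coupling is exactly what buys us the isolation a keyhole test can't provide.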

So, in summary: keyhole APIs can be a good thing for our architectures, but keyhole developer tests... not so much.


July 27, 2018


For Load-Bearing Code, Unleash The Power of Third-Generation Testing

As software "eats the world", and people rely more and more on the code we write, there's a strong case for making that code more reliable.

In popular products and services, code may get executed millions or even billions of times a day. In the face of such traffic, the much-vaunted "5 nines" reliability (99.999%) just doesn't cut the mustard: a billion executions a day at 99.999% still leaves room for 10,000 failures, every single day. Our current mainstream testing practices are arguably not up to the job where our load-bearing code's concerned.

And, yes, when I say "current mainstream practices", I'm including TDD in that. I may test-drive, say, a graph search algorithm in a dozen or so test cases, but put that code in a SatNav system and ship it in 1 million cars, and suddenly a dozen tests doesn't fill me with confidence.

Whenever I raise this issue, most developers push back. "None of our code is that critical", they argue. I would suggest that's true of most of their code. But even in pretty run-of-the-mill applications, there's usually a small percentage of code that really needs to not fail. For that code, we should consider going further with our tests.

The first generation of software testing involved running the program and seeing what happens when we enter certain inputs or click certain buttons. We found this to be time-consuming. It created severe bottlenecks in our dev processes. Code needs to be re-tested every time we change it, and manual testing just takes far too long.

So we learned to write code to test our code. The second generation of software testing automated test execution, and removed the bottlenecks. This, for the majority of teams, is the state of the art.

But there are always the test cases we didn't think of. Current practice today is to perform ongoing exploratory testing, to seek out the inputs, paths, user journeys and combinations our test suites miss. This is done manually by test professionals. When they find a failing test we didn't think of, we add it to our automated suite.

But, being manual, it's slow and expensive and doesn't achieve the kind of coverage needed to go beyond the Five 9's.

Which brings me to the Third Generation of Software Testing: writing code to generate the test cases themselves. By automating exploratory testing, teams are able to achieve mind-boggling levels of coverage relatively cheaply.

To illustrate, here's a parameterised unit test I wrote when test-driving an algorithm to calculate square roots:
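
The original post showed that test as a screenshot. It had the shape of a classic JUnit parameterised test with five cases - something like this sketch, where Maths is a hypothetical hand-rolled implementation standing in for the real code under test:

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;
    import java.util.Collection;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    // The hypothetical class under test: a Newton-Raphson square root.
    class Maths {
        static double sqrt(double input) {
            double guess = input / 2 + 1;
            while (Math.abs(guess * guess - input) > 0.0000001) {
                guess = (guess + input / guess) / 2;
            }
            return guess;
        }
    }

    @RunWith(Parameterized.class)
    public class MathsTest {
        @Parameters
        public static Collection<Object[]> testData() {
            return Arrays.asList(new Object[][] {
                { 1.0, 1.0 },
                { 4.0, 2.0 },
                { 9.0, 3.0 },
                { 16.0, 4.0 },
                { 2.0, 1.4142135623730951 }
            });
        }

        private final double input;
        private final double expected;

        public MathsTest(double input, double expected) {
            this.input = input;
            this.expected = expected;
        }

        @Test
        public void calculatesSquareRoot() {
            assertEquals(expected, Maths.sqrt(input), 0.000001);
        }
    }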

Imagine this is going to be integrated into a flight control system. Those five tests don't give me a warm fuzzy feeling about stepping on any plane using this code.



Now, I feel I need to draw attention to this: unit test fixtures are just classes and unit tests are just methods. They can be reused. We can compose new fixtures and new tests out of them.

So I can write a new parameterised test that, for example, generates a large number of random inputs - all unique - using a library called JCheck (a Java port of the Haskell QuickCheck library).
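
The screenshot is missing here, too. Rather than guess at JCheck's exact API, here's a stand-in that hand-rolls the same idea in plain JUnit, reusing the hypothetical Maths class from the sketch above - a thousand generated inputs, with the assertion generalised into a property (of which more below):

    import static org.junit.Assert.assertEquals;

    import java.util.Random;

    import org.junit.Test;

    public class MathsPropertyTest {
        private static final int CASES = 1000; // add a zero for 10x the coverage

        @Test
        public void squareOfRootEqualsInput() {
            Random random = new Random();
            for (int i = 0; i < CASES; i++) {
                // random inputs from 1 up to a million - effectively all unique
                double input = 1 + random.nextDouble() * 999_999;
                double root = Maths.sqrt(input);
                // with generated inputs we can't table expected results, so we
                // assert a general property: the square root, squared, is the input
                assertEquals(input, root * root, 0.001);
            }
        }
    }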



Don't worry too much about how this works. The important thing to note is that 1,000 unique random inputs get generated - in the original test, by JCheck. So, with a few extra lines of code, we've jumped from 5 test cases to 1,000 test cases.

And with a single extra character, we can leap up a further order of magnitude by simply adding a zero to the number of cases. Or two zeros for 100x more coverage. Or three, or four. Whatever we need. This illustrates the potential power of this kind of technique: we can cover massive state spaces with relatively little extra code.

(And, for those of you thinking "Yeah, but I bet it takes hours to run" - when I ran this for 1 million test cases, it took just over 10 seconds.)

The eagle-eyed among you will have noticed that I didn't reuse the exact same MathsTest fixture listed above. When test inputs are being generated, we don't have 1,000,000 expected results. We have to generalise our assertions. I adapted the original test into a property-based test, asserting a general property that every correct square root has to have.



Our property-based test can be reused in other ways. This test, for example, generates a range of inputs from 1 to 10 at increments of 0.01.
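
Again, my reconstruction rather than the lost original - the same property, driven by a generated range instead of random inputs:

    import static org.junit.Assert.assertEquals;

    import java.util.ArrayList;
    import java.util.Collection;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class MathsRangeTest {
        @Parameters
        public static Collection<Object[]> inputs() {
            Collection<Object[]> range = new ArrayList<>();
            for (int i = 100; i <= 1000; i++) {
                range.add(new Object[] { i / 100.0 }); // 1 to 10 in 0.01 steps
            }
            return range;
        }

        private final double input;

        public MathsRangeTest(double input) {
            this.input = input;
        }

        @Test
        public void squareOfRootEqualsInput() {
            double root = Maths.sqrt(input);
            assertEquals(input, root * root, 0.000001);
        }
    }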



Again, adding coverage is cheap. Maybe we want to test from 1 to 10000 at increments of 0.001? Easy as peas.

(Yes, these tests take quite a while to run - but that's down to the way JUnit handles parameterised tests, and could be optimised.)

Let's consider a different example. Imagine we have a design with a selection of UIs (Web, Android, iOS, Windows), a selection of local languages (English, French, Chinese, Spanish, Italian, German), and a selection of output formats (Excel, HTML, XML, JSON), and we want to test that every possible combination of UI, language and output format works.

There are 96 possible combinations. We could write 96 tests. Or we could generate all the possible combinations with a relatively straightforward bit of code like the Combiner I knocked up in a few hours for larks.
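
The Combiner was another screenshot; the gist is a cartesian product of the input arrays, along these lines (my reconstruction, not the original code):

    import java.util.ArrayList;
    import java.util.List;

    public class Combiner {
        // Builds every combination of one value from each dimension.
        public static List<Object[]> combine(Object[]... dimensions) {
            List<Object[]> combinations = new ArrayList<>();
            combinations.add(new Object[0]); // start with the empty combination
            for (Object[] dimension : dimensions) {
                List<Object[]> extended = new ArrayList<>();
                for (Object[] partial : combinations) {
                    for (Object value : dimension) {
                        Object[] next = new Object[partial.length + 1];
                        System.arraycopy(partial, 0, next, 0, partial.length);
                        next[partial.length] = value;
                        extended.add(next);
                    }
                }
                combinations = extended;
            }
            return combinations;
        }

        public static void main(String[] args) {
            List<Object[]> all = combine(
                new Object[] { "Web", "Android", "iOS", "Windows" },
                new Object[] { "English", "French", "Chinese", "Spanish", "Italian", "German" },
                new Object[] { "Excel", "HTML", "XML", "JSON" });
            System.out.println(all.size()); // 96
        }
    }

Fed into a JUnit @Parameters method, that list yields one test case per combination.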



If we added another language (e.g., Polish), we'd go from 96 combinations to 112. It's hopefully easy to see how much easier it could be to evolve the design when the test cases are generated in this way, without dropping below 100% coverage. And, yes, we could take things even further and use reflection to generate the input arrays, so our tests always keep pace with the design without having to change the test code at all. There are many, many possibilities for this kind of testing.

To repeat, I'm not suggesting we'd do this for all our code - just for the code that really has to work.

Food for thought?






June 21, 2018


Adopting TDD - The Codemanship Roadmap

I've been doing Test-Driven Development for 20 years, and helping dev teams to do it for almost as long. Over that time I've seen thousands of developers and hundreds of teams try to adopt this crucial enabling practice. So I've built a pretty clear picture of what works and what doesn't when you're adopting TDD.

TDD has a steep learning curve. It fundamentally changes the way you approach code, putting the "what" before the "how" and making us work backwards from the question. The most experienced developers, with years of test-after ingrained, find it especially difficult to rewire their habits and make it comfortable. It's like learning to write with your other hand.

I've seen teams charge at the edifice of this learning curve, trying to test-drive everything from Day #1. That rarely works. Productivity nosedives, and TDD gets jettisoned at the next urgent deadline.

The way to climb this mountain is to ascend via a much shallower route, with a more gentle and realistic gradient. You will most probably not be test-driving all your code in the first week. Or the first month. Typically, I find it takes 4-6 months for teams to get the hang of TDD, with regular practice.

So, I have a recommended Codemanship Route To TDD which has worked for many individuals and teams over the last decade.

Week #1: For teams, an orientation in TDD is a really good idea. It kickstarts the process, and gets everyone talking about TDD in practical detail. My 3-day TDD workshop is designed specifically with this in mind. It shortcuts a lot of conversations, clears up a bunch of misconceptions, and puts a rocket under the team's ambitions to succeed with TDD.

Week #2-#6: Find a couple of hours a week, or 20 minutes a day, to do simple TDD "katas", and focus on the basic Red-Green-Refactor cycle, doing as many micro-iterations as you can to reinforce the habits.

Week #7-#11: Progress onto TDD-ing real code for 1 day a week. This could be production code you're working on, or a side project. The goal for that day is to focus on doing it right. The other 4 days of the week, you can focus on getting stuff done. So, overall, your productivity maybe only dips a bit each week. As you gain confidence, widen this "doing it right" time.

Week #12-#16: By this time, you should find TDD more comfortable, and no longer struggle to remember what you're supposed to do and when. Your mind is freed up to focus on solving the problem, and TDD is becoming your default way of working. You'll be no less productive TDD-ing than you were before (maybe even more productive), and the code you produce will be more reliable and easier to change.

The Team Dojo: Some teams are keen to put their new TDD skills to the test. An exercise I've seen work well for this is my Team Dojo. It's a sufficiently challenging problem, and really works on those individual skills as well as collaborative skills. Afterwards, you can have a retrospective on how the team did, examining their progress (customer tests passed), code quality and the discipline that was applied. Even in the most experienced teams, the dojo will reveal gaps that need addressing.

Graduation: TDD is hard. Learning to test-drive code involves all sorts of dev skills, and teams that succeed tell me they feel a real sense of achievement. It can be good to celebrate that achievement. Whether it's a party, or a little ceremony or presentation, when organisations celebrate the achievement with their dev teams, it shows real commitment to them and to their craft.

Of course, you don't have to do it my way. What's important is that you start slow and burn your pancakes away from the spotlight of real projects with real deadlines. Give yourself the space and the safety to get it wrong, and over time you'll get it less and less wrong.

If you want to talk about adopting TDD on your team, drop me a line.