October 12, 2018

Learn TDD with Codemanship

TDD for Self-Funders, London, Sat Dec 1st

My flagship Codemanship TDD training course returns in a series of 3 standalone Saturday workshops aimed at self-funding learners.

It's the exact same highly popular training we've delivered to more than 2,000 developers since 2009, with 100% hands-on learning reinforced by our jam-packed 200-page TDD course book.

Part 1 is on Saturday Dec 1st in central London, and it's amazingly good value at just £149. Plus you'll get £50 off Part 2.

Part 1 goes in-depth on "classic" TDD, the super-important refactoring discipline, and software design principles you can apply to your code as it grows and evolves - keeping it easy to change so you can maintain the pace of development.

  • Why do TDD?

  • An introduction to TDD

  • Red, Green, Refactor

  • The Golden Rule

  • Working backwards from assertions

  • Testing your tests

  • One reason to fail

  • Writing self-explanatory tests

  • Speaking the customer's language

  • Triangulating designs

  • The Refactoring discipline

  • Software Design Principles
    • Simple Design

    • Tell, Don’t Ask

    • S.O.L.I.D.




The average price of a public 1-day dev training course, per person, is around £600-800. This is fine if your company is picking up the tab.

But we've learned over the years that many devs get no training paid for by their employer, so we appreciate that many of you are self-funding your professional development. Our Saturday workshops are priced to be accessible to professional developers.

In return, developers who've attended our weekend workshops have recommended us to employers and colleagues, and most of the full-price client-site training and coaching we do comes via these referrals.

Please be advised that we do not allow corporate bookings on our workshops for self-funders. Group bookings are limited to a maximum of 4 people. If you would like TDD training for your team(s), please contact me at jason.gorman@codemanship.com to discuss on-site training.

Find out more at the Eventbrite course page


October 6, 2018

Learn TDD with Codemanship

Be The Code You Want To See In The World

It's no big secret that I'm very much from the "Just Do It" school of thought on how to apply good practices to software development. I meet teams all the time who complain that they've been forbidden to do, say, TDD by their managers. My answer is always "Next time, don't ask".

After 25 years doing this for a living, much of that devoted to mentoring teams in the developer arts, I've learned two important lessons:

1. It's very difficult to change someone's mind once it's made up. I wasted a lot of time "selling" the benefits of technical practices like unit testing and refactoring to people who were never going to try them, no matter the evidence or logic. It's one of the reasons I don't do much conference speaking these days.

2. The best strategies rely on things within our control. Indeed, strategies that rely on things beyond our control aren't really strategies at all. They're just wishful thinking.

The upshot of all this is an approach to working that has two core tenets:

1. Don't seek permission

2. Do what you can do

Easy to say, right? It does imply that, as a professional, you have control over how you work.

Here's the thing: as a professional, you have control over how you work. It's not so much a matter of getting that control as recognising that - in reality - because you're the one writing the code, you already have it. Your boss is very welcome to write the code themselves if they want it done their way.

Of course, with great power comes great responsibility. You want control? Take control. But be sure to be acting in the best interests of your customer and other stakeholders, including the other developers on your team. Code is something you inflict on people. Do it with kindness.

And so there you have it. A mini philosophy. Don't rant and rave about how code should be done. Just do it. Be the code you want to see in the world.

Plenty of developers talk a good game, but their software tells a different story. It's often the case that the great and worthy and noble ideas you see presented in books and at conferences bear little resemblance to how their proponents really work. I've been learning, through Codemanship, that it's more effective to show teams what you do. Talk is cheap. That's why my flagship TDD workshop doesn't have any slides. Every idea is illustrated with real code, every practice is demonstrated right in front of you.

And there isn't a single practice in any Codemanship course I haven't applied many times on real software for real businesses. It's all real, and it all really works in the real world.

What typically prevents teams from applying these practices isn't their practicality, or how difficult they are to learn (although don't underestimate the learning curves). The obstacles are usually whether teams have the will to give them a proper try and - tied up in that - whether they're allowed to.

My advice is simple: learn to do it under the radar, in the background, under the bedsheets with a torch, and then the decision to apply it on real software in real teams for real customers will be entirely yours.




October 1, 2018

Learn TDD with Codemanship

50% Off Codemanship Training for Start-ups and Charities

One of the most fun aspects of running a dev training company is watching start-ups I helped a few years ago go from strength to strength.

The best part is seeing how some customers are transforming their markets (I don't use the "d" word), and reaping the long-term benefits of being able to better sustain the pace of innovation through good code craft.

I want to do more to help new businesses, so I've decided that - as of today - start-ups less than 5 years old, with fewer than 50 employees, will be able to buy Codemanship code craft training at half price.

I'm also extending that offer to non-profits. Registered charities will also be able to buy Codemanship training for just 50% of the normal price.


September 28, 2018

Learn TDD with Codemanship

Micro-cycles & Developing Your Inner Egg Timer

When I'm coaching developers in TDD and refactoring, I find it important to stress the benefits of keeping one foot on the path of working code at all times.

I talk about Little Red Riding Hood, and how she was warned not to stray off the path into the deep dark forest. Bad things happen in the deep dark forest. Similarly, I warn devs to stay on that path of code that works - code that's shippable - and not go wandering off into the deep dark forest of code that's broken.

Of course, in practice, we can't change code without temporarily breaking it. So the real skill is in learning how to make the changes we need by briefly stepping off the path and stepping straight back on again.

This requires developers to build a kind of internal egg timer that nudges them when they haven't seen their tests pass for too long.



An exercise I've used to develop my internal egg timer uses a real egg timer (or the timer on my smartphone). When I'm mindfully practicing refactoring, for example, I'll set a timer to countdown for 60 seconds, and start it the moment I edit any code.

The moment a source file goes "dirty" - no longer compiles or no longer passes the tests - the countdown starts. I have to get back to passing tests before the sands run out (or the alarm goes off).

I'll do that for maybe 10-15 minutes, then I'll drop the countdown to 50 seconds and do another 10-15 minutes. Then 40 seconds. Then 30. Always trying, as best I can, to get what I need to do done and get back to passing tests before the countdown ends.
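If you want to try this, the timer on your phone works fine. Or you can knock up something minimal yourself - a rough sketch (my illustration, not a Codemanship tool) of a command-line egg timer in Java:

public class EggTimer {
    public static void main(String[] args) throws InterruptedException {
        // Countdown length in seconds, e.g. "java EggTimer 60" (defaults to 60)
        int seconds = args.length > 0 ? Integer.parseInt(args[0]) : 60;
        for (int remaining = seconds; remaining > 0; remaining--) {
            System.out.print("\r" + remaining + "s to get back to green... ");
            Thread.sleep(1000);
        }
        System.out.println("\nTime's up - get back to passing tests!");
        java.awt.Toolkit.getDefaultToolkit().beep(); // audible nudge (needs a non-headless JVM)
    }
}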

I did this every day for about 45-60 minutes for several months, and what I found at the end was that I'd grown a sort of internal countdown. Now, when I haven't seen the tests pass for a few minutes, I get a little knot in my stomach. It makes me genuinely uncomfortable.

I do a similar exercise with TDD, but the countdowns apply the moment I have a failing test. I have 60 seconds to make the test pass. Then 50. Then 40. Then 30. This encourages me to take smaller steps, in tighter micro-cycles.

If my test requires me to take too big a leap, I have to scale back or break it down into simpler steps to get where I want to go.

The skill is in making progress with one foot firmly on the path of working code at all times. Your inner egg timer is the key.



September 25, 2018

Learn TDD with Codemanship

Third-Generation Testing - Øredev 2018, Malmö, November 22nd

If you're planning on coming to Øredev in Sweden this November, I'm running a brand new training workshop on the final day about Third-Generation Software Testing.

First-generation testing was manual: running the program and clicking the buttons ourselves. We quickly learned that this was slow and often patchy, creating a severe bottleneck in development cycles.

Second-generation testing removed that bottleneck by writing code to test our code.

But what about the tests we didn't think of?

Exploratory testing brought us back to a manual process: exploring what else might be possible with the code we delivered - what combinations of inputs, user actions and pathways - beyond the behaviours encoded in our automated tests.

Manual exploratory testing suffers from the same drawbacks as any other kind of manual testing, though. It's slow, and it can miss heaps of cases in complex logic.

Third-generation testing automates the generation of the test cases themselves, enabling us to explore much wider state spaces than a manual process could ever hope to achieve. With a little extra test code, and a bit of ingenuity, you can explore thousands, tens of thousands, hundreds of thousands and even millions of extra test cases - combinations, paths, random inputs and ranges - using tools you already know.

In this workshop, we'll explore some simple techniques for adapting and reusing our existing unit tests to exhaustively test our critical code. We'll also look at techniques for identifying what code might need us to go further, and how we can use Cloud technology to execute millions of extra tests in minutes.

You can find out more and book your place at http://oredev.org/2018/sessions/third-generation-software-testing



September 24, 2018

Learn TDD with Codemanship

Why I Throw Away (Most Of) My Customer Tests

There was a period about a decade ago, when BDD frameworks were all new and shiny, when some dev teams experimented with relying entirely on their customer tests. This predictably led to some very slow-running test suites, and an upside-down test pyramid.

It's very important to build a majority of fast-running automated tests to maintain the pace of development. Upside-down test pyramids become a severe bottleneck, slowing down the "metabolism" of delivery.

But it's good to work from precise, executable specifications, too. So I still recommend teams work with their customers to build a shared understanding of what's to be delivered, using tools like Cucumber and FitNesse.

What happens to these customer tests after the software's delivered, though? We've invested time and effort in agreeing them and then automating them. So we should keep them, right?

Well, not necessarily. Builders invest a lot of time and effort into erecting scaffolding, but after the house is built, the scaffolding comes down.

The process of test-driving an internal design with fast-running unit tests - by which I mean tests that ask one question and don't involve external dependencies - tends to leave us with the vast majority of our logic tested at that level. That's the base of our testing pyramid, just as it should be.

So I now have customer tests and unit tests asking the same questions. One of them is surplus to requirements for regression testing, and it makes most sense to retain the fastest tests and discard the slowest.

I keep a cherry-picked selection of customer tests just to check that everything's wired together right in my internal design - maybe a few dozen key happy paths. The rest get archived and quite possibly never run again - certainly not on a frequent basis. They aren't maintained, because those features or changes have been delivered. Move on.




August 26, 2018

Learn TDD with Codemanship

Yes, Developers Should Learn Ethics. But That's Only Half The Picture.

Given the negative impact that some technology start-ups have had on society, and how prominent that sentiment is in the news these days, it's no surprise that more and more people are suggesting that the people who create this technology develop their sense of humanity and ethics.

I do not deny that many of us in software could use a crash course in things like ethics, philosophy, law and history. Ethics in our industry is a hot potato at the moment.

But I do not believe that it should all be on us. When I look at the people in leadership positions - in governments, in key institutions, and in the boardrooms - who are driving the decisions creating the wars, the environmental catastrophes, the growing inequality, and the injustice and oppression we see daily in the media, it strikes me that the problem isn't that the world is run by scientists or engineers. Society isn't ruled by evidence and logic.

As well as STEM graduates needing a better-developed sense of ethics, I think the world would also be improved if the rest of the population had more effective bullshit detectors. Taking Brexit as a classic example, voters were bombarded with campaign messages that were demonstrably false, and promises that were provably impossible to deliver. Leave won by appealing to voters' feelings about immigration, about globalisation and about Britain's place in the EU. Had more voters checked the facts, I have no doubt the vote would have swung the other way.

Sure, this post-truth world we seem to be living in now was aided and abetted by new technology, and the people who created that technology should have said "No". But, as far as I can tell, it never even occurred to them to ask those kinds of questions.

But let's be honest, it wasn't online social media advertising that gifted a marginal victory to the British far-right and installed a demagogue in the White House, any more than WWII was the fault of the printing presses that churned out copy after copy of Mein Kampf. Somebody made a business decision to let those social media campaigns run and take the advertisers' money.

Rightly, IMHO, it's turned a long-overdue spotlight on social media. I'm not arguing that technology doesn't require ethics. Quite the reverse.

What I'm saying, I guess, is that a better understanding of the humanities among scientists and engineers is only half the picture. If we think the world's problems will be solved because a coder said "I'm not going to track that cookie, it's unethical" to their bosses, we're going to be terribly disappointed.


August 6, 2018

Learn TDD with Codemanship

Agile Baggage

In the late 1940s, a genuine mystery gripped the world as it rebuilt after WWII. Thousands of eye witnesses - including pilots, police officers, astronomers, and other credible observers - reported seeing flying objects that had performance characteristics far beyond any known natural or artificial phenomenon.

These "flying saucers" - as they became popularly known - were the subject of intense study by military agencies in the US, the UK and many other countries. Very quickly, the extraterrestrial hypothesis - that these objects were spacecraft from another world - caught the public's imagination, and "flying saucer" became synonymous with Little Green Men.

In an attempt to outrun that pop culture baggage, serious studies of these objects adopted the less sensational term "Unidentified Flying Object". But that, too, soon became shorthand for "alien spacecraft". These days, you can't be taken seriously if you study UFOs, because it lumps you in with some very fanciful notions, and some - how shall we say? - rather colourful characters. Scientists don't study UFOs any more. It's not good for the career.

These days, scientific studies of strange lights in the sky - like the Ministry of Defence's Project Condign - use the term Unidentified Aerial Phenomena (UAP) in an attempt to outrun the cultural baggage of "UFOs".

The fact remains, incontrovertibly, that every year thousands of witnesses see things in the sky that conform to no known physical phenomena, and we're no closer to understanding what it is they're seeing after 70 years of study. The most recent scientific studies, in the last 3 decades, all conclude that a portion of reported "UAPs" are genuine unknowns, that they are of real defence significance, and that they are worthy of further scientific study. But well-funded studies never seem to materialise, because of the connotation that UFOs = Little Green Men.

The well has been poisoned by people who claim to know the truth about what these objects are, and who'll happily reveal all in their latest book or DVD - just £19.95 from all good stores (buy today and get a free Alien Grey lunch box!) If these people would just 'fess up that, in reality, they don't know what they are either - or, certainly, that they can't prove their theories - the scientific community could get back to trying to find out, like they attempted to in the late 1940s and early 1950s.

Agile Software Development ("agile" for short) is also now dragging a great weight of cultural baggage behind it, much of it generated by a legion of people also out to make a fast buck by claiming to know the "truth" about what makes businesses successful with technology.

Say "agile" today, and most people think you're talking about Scrum (and its scaled variations). The landscape is very different to 2001, when the term was coined at a ski resort in Utah. Today, there are about 20,000 agile coaches in the UK alone. Two thirds of them come from non-technical backgrounds. Like the laypeople who became "UFO researchers", many agile coaches apply a veneer of pseudoscience to what is - in essence - a technical persuit.

The result is an appearance of agility that often lacks the underlying technical discipline to make it work. Things like unit tests, continuous integration, design principles, refactoring: they're every bit as important as user stories and stand-up meetings and burndown charts.

Many of us saw it coming years ago. Call it "frAgile", "Cargo Cult agile", or "WAgile" (Waterfall-Agile) - it was on the cards as soon as we realised Agile Software Development was being hijacked by management consultants.

Post-agilism was an early response: an attempt to get back to "doing what works". Software Craftsmanship was a more defined reaction, reaffirming the need for technical discipline if we're to be genuinely responsive to change. But these, too, accrued their baggage. Software craft today is more of a cult of personality, dominated by a handful of the most vocal proponents of what has become quite a narrow interpretation of the technical disciplines of writing software. Post-agilism devolved into a pseudo-philosophical talking shop, never quite getting down to the practical detail. Their wells, too, have been poisoned.

But teams are still delivering software, and some teams are more successfully delivering software than others. Just as with UFOs, beneath the hype, there's a real phenomenon to be understood. It ain't Scrum and it ain't Lean and it certainly ain't SAFe. But there's undeniably something that's worthy of further study. Agile has real underlying insights to offer - not necessarily the ones written on the Manifesto website, though.

But, to outrun the cultural baggage, what shall we call it now?




August 3, 2018

Learn TDD with Codemanship

Keyhole APIs - Good for Microservices, But Not for Unit Testing

I've been thinking a lot lately about what I call keyhole APIs.

A keyhole API is the simplest API possible - one that presents the smallest "surface area" to clients for its complete use. This means there's a single function exposed, which has the smallest number of primitive input parameters - ideally one - and a single, simple output.

To illustrate, I had a crack at TDD-ing a solution to the Mars Rover kata, writing tests that only called a single method on a single public class to manipulate the rover and query the results.

You can read the code on my Github account.
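To give a flavour of what I mean - and this is an illustrative sketch, not the actual code from that repo - a keyhole API for the rover might be a single public class with a single method, taking one string in and returning one string out:

// Sketch only: one public class, one method, one primitive in, one primitive out
public class MarsRover {
    private static final String HEADINGS = "NESW"; // clockwise compass order
    private int x, y, heading; // heading is an index into HEADINGS

    // e.g. go("MMR") -> "0 2 E"
    public String go(String instructions) {
        for (char command : instructions.toCharArray()) {
            switch (command) {
                case 'L': heading = (heading + 3) % 4; break; // turn anticlockwise
                case 'R': heading = (heading + 1) % 4; break; // turn clockwise
                case 'M': // move one square in the current heading
                    if (heading == 0) y++;
                    if (heading == 1) x++;
                    if (heading == 2) y--;
                    if (heading == 3) x--;
                    break;
            }
        }
        return x + " " + y + " " + HEADINGS.charAt(heading);
    }
}

Every test drives the rover through go() and asserts only on the returned string - nothing else about the implementation is visible to the test code.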

This produces test code that's very loosely coupled to the rover implementation. I could have written test code that invokes multiple methods on multiple implementation classes. This would have made it easier to debug, for sure, because tests would pinpoint the source of errors more closely.

If we're writing microservices, keyhole APIs are - I believe - essential. We have to hide as much of the implementation as possible. Clients need to be as loosely coupled to the microservices they use as possible, including microservices that use other microservices.

I encourage developers to create these keyhole APIs for their components and services more and more these days. Even if they're not going to go down the microservice route, it's helpful to partition our code into components that could be turned into microservices easily, should the need arise.

Having said all that, I don't recommend unit testing entirely through such an API. I draw a distinction there: unit tests are an internal thing, a sort of grey-box testing. Especially important is the ability to isolate units under test from their external dependencies - e.g., by using mocks or stubs - and this requires the test code to know a little about those dependencies. I deliberately avoided that in my Mars Rover tests, and so ended up with a design where dependencies weren't easily swappable in this way.
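To make the distinction concrete, here's the kind of grey-box unit test I mean - a hypothetical example (the names are mine), where the test knows about the ExchangeRates dependency precisely so it can swap in a stub and ask its one question without hitting a real service:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PricerTest {
    interface ExchangeRates { double rateFor(String currency); } // the external dependency

    static class Pricer {
        private final ExchangeRates rates;
        Pricer(ExchangeRates rates) { this.rates = rates; }
        double priceIn(String currency, double gbpPrice) {
            return gbpPrice * rates.rateFor(currency);
        }
    }

    @Test
    public void convertsPriceUsingExchangeRate() {
        ExchangeRates stubbedRates = currency -> 1.25; // stub: no real rates service involved
        assertEquals(12.5, new Pricer(stubbedRates).priceIn("USD", 10.0), 0.0001);
    }
}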

So, in summary: keyhole APIs can be a good thing for our architectures, but keyhole developer tests... not so much.


July 27, 2018

Learn TDD with Codemanship

For Load-Bearing Code, Unleash The Power of Third-Generation Testing

As software "eats the world", and people rely more and more on the code we write, there's a strong case for making that code more reliable.

In popular products and services, code may get executed millions or even billions of times a day. In the face of such traffic, the much vaunted "5 nines" reliability (99.999%) just doesn't cut the mustard. Our current mainstream testing practices are arguably not up to the job where our load-bearing code's concerned.

And, yes, when I say "current mainstream practices", I'm including TDD in that. I may test-drive, say, a graph search algorithm in a dozen or so test cases, but put that code in a SatNav system and ship it in 1 million cars, and suddenly a dozen tests doesn't fill me with confidence.

Whenever I raise this issue, most developers push back. "None of our code is that critical", they argue. I would suggest that's true of most of their code. But even in pretty run-of-the-mill applications, there's usually a small percentage of code that really needs to not fail. For that code, we should consider going further with our tests.

The first generation of software testing involved running the program and seeing what happens when we enter certain inputs or click certain buttons. We found this to be time-consuming. It created severe bottlenecks in our dev processes. Code needs to be re-tested every time we change it, and manual testing just takes far too long.

So we learned to write code to test our code. The second generation of software testing automated test execution, and removed the bottlenecks. This, for the majority of teams, is the state of the art.

But there are always the test cases we didn't think of. Current practice today is to perform ongoing exploratory testing, to seek out the inputs, paths, user journeys and combinations our test suites miss. This is done manually by test professionals. When they find a failing test we didn't think of, we add it to our automated suite.

But, being manual, it's slow and expensive, and doesn't achieve the kind of coverage needed to go beyond the "5 nines".

Which brings me to the Third Generation of Software Testing: writing code to generate the test cases themselves. By automating exploratory testing, teams are able to achieve mind-boggling levels of coverage relatively cheaply.

To illustrate, here's a parameterised unit test I wrote when test-driving an algorithm to calculate square roots:
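(A sketch along these lines - with illustrative test values and a stand-in Maths class, rather than the exact original:)

import static org.junit.Assert.assertEquals;
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Stand-in for the square root algorithm under test (not the original implementation)
class Maths {
    static double sqrt(double n) { return Math.sqrt(n); }
}

@RunWith(Parameterized.class)
public class MathsTest {
    @Parameters
    public static Collection<Object[]> testCases() {
        return Arrays.asList(new Object[][] {
            { 0.0, 0.0 }, { 1.0, 1.0 }, { 4.0, 2.0 }, { 9.0, 3.0 }, { 6.25, 2.5 } // illustrative cases
        });
    }

    private final double input;
    private final double expected;

    public MathsTest(double input, double expected) {
        this.input = input;
        this.expected = expected;
    }

    @Test
    public void calculatesSquareRoot() {
        assertEquals(expected, Maths.sqrt(input), 0.000001);
    }
}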

Imagine this is going to be integrated into a flight control system. Those five tests don't give me a warm fuzzy feeling about stepping on any plane using this code.



Now, I feel I need to draw attention to this: unit test fixtures are just classes and unit tests are just methods. They can be reused. We can compose new fixtures and new tests out of them.

So I can write a new parameterised test that, for example, generates a large number of random inputs - all unique - using a library called JCheck (a Java port of the Haskell QuickCheck library).
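I won't try to reproduce JCheck's exact API from memory here, but the same idea - 1,000 unique, generated random inputs - can be sketched as a stand-in with plain JUnit and java.util.Random (reusing the Maths stand-in from above):

import static org.junit.Assert.assertEquals;
import java.util.ArrayList;
import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.Random;
import java.util.Set;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class RandomisedMathsTest {
    @Parameters
    public static Collection<Object[]> randomInputs() {
        Set<Double> inputs = new LinkedHashSet<>();
        Random random = new Random();
        while (inputs.size() < 1000)            // 1,000 unique random inputs
            inputs.add(random.nextDouble() * 10000);
        Collection<Object[]> testCases = new ArrayList<>();
        for (double input : inputs)
            testCases.add(new Object[] { input });
        return testCases;
    }

    private final double input;

    public RandomisedMathsTest(double input) {
        this.input = input;
    }

    @Test
    public void squareOfRootEqualsInput() {
        double root = Maths.sqrt(input);
        // Generalised assertion: we can't know 1,000 expected results in advance,
        // but every correct square root, squared, gives back the input.
        assertEquals(input, root * root, 0.0001);
    }
}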



Don't worry too much about how this works. The important thing to note is that JCheck generates 1,000 unique random inputs. So, with a few extra lines of code we've jumped from 5 test cases to 1,000 test cases.

And with a single extra character, we can leap up a further order of magnitude by simply adding a zero to the number of cases. Or two zeros for 100x more coverage. Or three, or four. Whatever we need. This illustrates the potential power of this kind of technique: we can cover massive state spaces with relatively little extra code.

(And, for those of you thinking "Yeah, but I bet it takes hours to run" - when I ran this for 1 million test cases, it took just over 10 seconds.)

The eagle-eyed among you will have noticed that I didn't reuse the exact same MathsTest fixture listed above. When test inputs are being generated, we don't have 1,000,000 expected results. We have to generalise our assertions. I adapted the original test into a property-based test, asserting a general property that every correct square root has to have.



Our property-based test can be reused in other ways. This test, for example, generates a range of inputs from 1 to 10 at increments of 0.01.
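Sketched in the same illustrative style (again reusing the stand-in Maths class), that range-driven version might look like this:

import static org.junit.Assert.assertEquals;
import java.util.ArrayList;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class RangeMathsTest {
    @Parameters
    public static Collection<Object[]> range() {
        Collection<Object[]> testCases = new ArrayList<>();
        // 1.00 to 10.00 at increments of 0.01; the integer loop avoids floating-point drift
        for (int i = 100; i <= 1000; i++)
            testCases.add(new Object[] { i / 100.0 });
        return testCases;
    }

    private final double input;

    public RangeMathsTest(double input) {
        this.input = input;
    }

    @Test
    public void squareOfRootEqualsInput() {
        assertEquals(input, Maths.sqrt(input) * Maths.sqrt(input), 0.0001);
    }
}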



Again, adding coverage is cheap. Maybe we want to test from 1 to 10000 at increments of 0.001? Easy as peas.

(Yes, these tests take quite a while to run - but that's down to the way JUnit handles parameterised tests, and could be optimised.)

Let's consider a different example. Imagine we have a design with a selection of UIs (Web, Android, iOS, Windows), a selection of local languages (English, French, Chinese, Spanish, Italian, German), and a selection of output formats (Excel, HTML, XML, JSON), and we want to test that every possible combination of UI, language and output format works.

There are 96 possible combinations. We could write 96 tests. Or we could generate all the possible combinations with a relatively straightforward bit of code like the Combiner I knocked up in a few hours for larks.
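Something in this spirit would do the job (a sketch - the names are mine, not the original Combiner's):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Builds the cartesian product of any number of option arrays
public class Combinations {
    public static List<String[]> of(String[]... dimensions) {
        List<String[]> combos = new ArrayList<>();
        combos.add(new String[0]); // start with the empty combination
        for (String[] dimension : dimensions) {
            List<String[]> expanded = new ArrayList<>();
            for (String[] partial : combos) {
                for (String option : dimension) {
                    String[] next = Arrays.copyOf(partial, partial.length + 1);
                    next[partial.length] = option;
                    expanded.add(next);
                }
            }
            combos = expanded; // each dimension multiplies the number of combinations
        }
        return combos;
    }

    public static void main(String[] args) {
        List<String[]> all = of(
            new String[] { "Web", "Android", "iOS", "Windows" },
            new String[] { "English", "French", "Chinese", "Spanish", "Italian", "German" },
            new String[] { "Excel", "HTML", "XML", "JSON" });
        System.out.println(all.size()); // 4 x 6 x 4 = 96 combinations
    }
}

Feed the resulting list into a parameterised test and one test method covers all 96 cases.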



If we added another language (e.g., Polish), we'd go from 96 combinations to 112. It's hopefully easy to see how much easier it could be to evolve the design when the test cases are generated in this way, without dropping below 100% coverage. And, yes, we could take things even further and use reflection to generate the input arrays, so our tests always keep pace with the design without having to change the test code at all. There are many, many possibilities for this kind of testing.

To repeat, I'm not suggesting we'd do this for all our code - just for the code that really has to work.

Food for thought?