July 10, 2017
Codemanship Bite-Sized - 2-Hour Training Workshops for Busy Teams
One thing that clients mention often is just how difficult it is to make time for team training. A 2 or 3-day course takes your team out of action for a big chunk of time, during which nothing's getting delivered.
For those teams that struggle to find time for training, I've created a spiffing menu of action-packed 2-hour code craft workshops that can be delivered any time from 8am to 8pm.
- Test-Driven Development workshops
  - Introduction to TDD
  - Specification By Example/BDD
  - Stubs, Mocks & Dummies
  - Outside-In TDD
- Refactoring workshops
  - Refactoring 101
  - Refactoring To Patterns
- Design Principles workshops
  - Simple Design & Tell, Don't Ask
  - Clean Code Metrics
To find out more, visit http://www.codemanship.co.uk/bitesized.html
June 20, 2017
Are.Fluent(Assertions).Really.Easier.To(Understand)?
I'm currently updating a slide deck for an NUnit workshop I run (all the spiffy new library versions, because I'm down with the yoof), and got to the slide on fluent assertions.
The point I make with this slide is that - according to the popular wisdom - fluent assertions are easier to understand because they're more expressive than classic NUnit assertions.
So, I took a bunch of examples of classic assertions from a previous slide and redid them as fluent assertions, and ended up with this.
Compared next to each other like this, suddenly somehow my claim that fluent assertions are easier to understand looks shaky. Are they? Are they really?
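To give a flavour of the comparison: the slides used NUnit, but the same shape can be sketched in Python, with a small made-up fluent helper standing in for the fluent assertion library:

```python
# A minimal fluent assertion helper - invented for illustration, not
# from any real library - next to a classic-style assertion.
class That:
    def __init__(self, actual):
        self.actual = actual

    def is_equal_to(self, expected):
        assert self.actual == expected, f"Expected {expected}, got {self.actual}"
        return self  # returning self is what makes chained "sentences" possible


def assert_that(actual):
    return That(actual)


total = 2 + 2

# Classic style: one built-in assertion, reads left to right
assert total == 4

# Fluent style: the same check, dressed up as a chainable sentence
assert_that(total).is_equal_to(4)
```

Seen side by side like this, the fluent version is longer and arguably no clearer - which is exactly the question the slide raises.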
A client of mine, some time back now, ran a little internal experiment on this with fluent assertions written with Hamcrest and JUnit. They rigged up a dozen or so assertions in fluent and classic styles, and timed developers while they decided if those tests would pass or fail. It was noted - with some surprise - that people seemed to grok the classic assertions faster.
What do you think?
June 5, 2017
The Codemanship TDD "Driving Test" - Initial Update
A question that gets asked increasingly frequently by folk who've been on a Codemanship TDD workshop is "Do we get a certificate?"
Now, I'm not a great believer in certification, especially when the certificates are essentially just for turning up. For example, a certificate that says you're an "agile developer", based on sitting an exam at the end of a 2-3 day training course, really doesn't say anything meaningful about your actual abilities.
Having said all that, I have pioneered programs in the past that did seem to be a decent indicator of TDD skills and habits. First of all, to know if a juggler can juggle, we've got to see them juggle.
A TDD exam is meaningless in most respects, except perhaps to show that someone understands why they're doing what they're doing. Someone may be in the habit of writing tests that only ask one question, but all the time I see developers doing things they've "read in a book" or "seen their team doing", and all they're really doing is parroting it.
Conversely, someone may understand that tests should ideally have only one reason to fail so that when they do fail, it's much easier to pinpoint the cause of the problem, but never put that into practice. I also see a lot of developers who can talk the talk but don't walk the walk.
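To make that habit concrete, here's a sketch (in Python, with an invented example) of a test that asks one question versus one that asks several:

```python
# An invented example: FizzBuzz, tested two ways.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)


# One question per test: a failure pinpoints exactly which rule broke
def test_multiples_of_three_are_fizz():
    assert fizzbuzz(3) == "Fizz"


def test_multiples_of_five_are_buzz():
    assert fizzbuzz(5) == "Buzz"


# Many questions in one test: the first failing assert hides the rest,
# so a red test doesn't tell you which rule is broken
def test_fizzbuzz_everything():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"


test_multiples_of_three_are_fizz()
test_multiples_of_five_are_buzz()
test_fizzbuzz_everything()
```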
So, the top item on my TDD certification wish-list would be that it has to demonstrate both practical ability and insight.
In this respect, the best analogy I can think of is a driving test; learner drivers have to demonstrate a practical grasp of the mechanics of safe driving as well as a theoretical grasp of motoring and the highway code. In a TDD "driving test", people would need to succeed at both a practical and a theoretical component.
The practical element would need to be challenging enough - but not too challenging - to get a real feel for whether they're good enough at TDD to scale it to non-trivial problems. FizzBuzz just won't cut it, in my experience. (Although you can weed out those who obviously can't even do the basics in a few minutes.)
The Team Dojo I created for the Software Craftsmanship conference seems like a viable candidate, except it would be tackled by you alone (which you may actually find easier!). In the original dojo, developers had to tackle requirements for a fictional social network for programmers. There were a handful of user stories, accompanied by some acceptance tests that the solution had to pass to score points.
In a TDD driving test, I might ask developers to tackle a similar scale of problem (roughly 4-8 hours for an individual to complete). There would be some automated acceptance tests that your solution would need to pass before you can complete the driving test.
Once you've committed your finished solution, a much more exhaustive suite of tests would then be run against it (you'd be asked to implement a specific API to enable this). I'm currently pondering and consulting on how many bugs I might allow. My instinct is to say that if any of these tests fail, you've failed your TDD driving test. A solution of maybe 1,000 lines of code should have no bugs in it if the goal is to achieve a defect density of < 0.1/KLOC. I am, of course, from the "code should be of high integrity" school of development. We'll see how that pans out after I trial the driving test.
So, we have two bars that your solution would have to clear so far: acceptance tests, and exhaustive testing.
Provided you successfully jump those hurdles, your code would then be inspected or analysed for key aspects of maintainability: readability, simplicity, and lack of duplication. (The other 3 goals of Simple Design, basically.)
As an indicator, I'd also measure your code coverage (probably using mutation testing). If you really did TDD it rigorously, I'd expect the level of test assurance to be very high. Again, a trial will help set a realistic quality bar for this, but I'm guessing it will be about 90%, depending on which mutation testing tool I use and which mutations are switched on/off.
Finally, I'd be interested in the "testability" of your design. That's usually a euphemism for whether or not dependencies between your modules are easily swappable (by dependency injection). The problem would also be designed to require the use of some test doubles, and I'd check that they were used appropriately.
So, you'd have to pass the acceptance tests to complete the test. Then your solution would be exhaustively tested to see if any bugs slipped through. If no bugs are found, the code will be inspected for basic cleanliness. I may also check the execution time of the tests and set an upper limit for that.
First and foremost, TDD is about getting shit done - and getting it done right. Any certification that doesn't test this is not worth the paper it's printed on.
And last, but not least, someone - initially me, probably - will pair with you remotely for half an hour at some random time during the test to:
1. Confirm that it really is you who's doing it, and...
2. See if you apply good TDD habits, of which you'd have been given a list well in advance to help you practice. If you've been on a Codemanship TDD course, or seen lists of "good TDD habits" in conference talks and blog posts (most of which originated from Codemanship, BTW), then you'll already know what many of these habits are.
During that half hour of pairing, your insights into TDD will also be randomly tested. Do you understand why you're running the test to see it fail first? Do you know the difference between a mock, a stub and a dummy?
Naturally, people will complain that "this isn't how we do TDD", and that's fair comment. But you could argue the same thing in a real driving test: "that's not how I'm gonna drive."
The Codemanship TDD driving test would be aimed at people who've been on a Codemanship TDD workshop in the last 8 years and have learned to do TDD the Codemanship way. It would demonstrate not only that you attended the workshop, but that you understood it, and then went away and practiced until you could apply the ideas on something resembling a real-world problem.
Based on experience, I'd expect developers to need 4-6 months of regular practice at TDD after a training workshop before they'd be ready to take the driving test.
Still much thinking and work to be done. Will keep you posted.
May 30, 2017
Do You Write Automated Tests When You Spike?
So, I've been running this little poll on Twitter asking devs if they write automated tests when they're knocking up a prototype (or a "spike", as Extreme Programmers call it).
Do you write any automated tests when you do a proof of concept (a "spike", in XP)?— Codemanship (@codemanship) May 29, 2017
The responses so far have been interesting, if not entirely unexpected. About two thirds of respondents rarely or never write automated tests for a spike.
Behind this is the ongoing debate about the limits of usefulness of such tests (and of TDD, if we take that a step further). Some devs believe that when a problem is small, or when they expect to throw away the code afterwards, automated tests add no value and just slow us down.
My own experience has been a slow but sure transition from not bothering with unit tests for spikes 15 years ago, to almost always writing some unit tests even on small experiments. Why? Because I've found - and I've measured myself doing it, so it's not just a feeling - I get my spike done faster when I have a bit of test scaffolding holding it up.
For sure, I'm not as rigorous about it as when I'm working on production code. The tests tend to be at a higher level, and there are fewer of them. I may break a few of my own TDD rules and have tests that ask more than one question, or I may not refactor the test code quite as fastidiously. But the tests are there, nevertheless. And I'm usually really grateful that I wrote some, as the experiment grows and maybe makes some unexpected twists and turns.
And if - as can happen - the experiment becomes part of the production code, I'm confident that what I've produced is just about good enough to be released and maintained. I'm not in the business of producing legacy code... not even by accident.
An example of one of my spikes, for a utility that combines arrays of test data for use with parameterised tests, gives you an idea of the level of discipline I might usually apply. Not quite production quality, but not that far off.
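For the curious, the core of that kind of utility boils down to a Cartesian product of the input arrays. A rough Python sketch (the actual spike's API was different; this just shows the shape of the problem):

```python
from itertools import product


def combine(*arrays):
    """Combine arrays of test inputs into every possible pairing,
    suitable for feeding to a parameterised test."""
    return [list(combo) for combo in product(*arrays)]


cases = combine([1, 2], ["a", "b"])
print(cases)  # → [[1, 'a'], [1, 'b'], [2, 'a'], [2, 'b']]
```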
The spike took - in total - maybe a couple of days, and I was really grateful for the tests by the second day. In timed experiments, I've watched myself tackle much smaller problems faster when I wrote automated tests for them as I went along. Which is why, for me, that seems to be the way to go. I get done sooner, with something that could potentially be released. It leaves the door open.
Other developers may find that they get done sooner without writing automated tests. With TDD, I'm very much in my comfort zone. They may be outside it. In those instances, they probably need to be especially disciplined about throwing that code away to remove the temptation of releasing unreliable, unmaintainable code.
They could rehabilitate it, writing tests after the fact and refactoring the code to give it a production sparkle. Some people refer to this process as "spike & stabilise". But, to me, it does rather sound like "code and fix". Because, technically, that's exactly what it is. And experience - not just mine, but a mountain of hard data going back decades - strongly suggests that code and fix is the slow route to delivery.
So I'm a little skeptical, to say the least.
May 18, 2017
20 Dev Metrics - 17. Test Execution Time
The 17th in my 20 Dev Metrics series can have a profound effect on our ability to sustain the pace of development - Test Execution Time.
When it takes too long to get feedback from tests, we have to test less often, which means more changes to the code in between test runs. The economics of defect removal are stark: the longer a problem goes undetected, the more expensive it is to fix - and that cost rises exponentially. If we break the code and discover it minutes later, then fixing the problem is quick and easy. If we break the code and discover it hours later, that cost goes up. Days later and we're into code-and-fix territory.
So it's in our interest to make the tests run as fast as possible. Teams who strive for a testing pyramid, where the base of the pyramid - the bulk of the tests - is made up of fast-running unit tests, can usually get good test feedback in minutes or even seconds. Teams whose testing pyramid is upside-down, with the bulk of their tests being slow-running system or integration tests, tend to find test execution a barrier to progress.
Teams should be putting continual effort into performance engineering their test suites as they grow from dozens to hundreds to thousands of tests. Be aware of how long test execution takes, and when it's too long, optimise the test architecture or execution environment. My 101 TDD Tips e-book contains a tip about optimising test performance that you might find useful.
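One cheap way to keep an eye on the metric is simply to time each test and flag the slow ones, so optimisation effort goes where it pays off. A rough sketch in Python (the tests and the threshold are invented for illustration):

```python
import time


def fast_test():
    # A typical in-memory unit test: no I/O, runs in microseconds
    assert sum(range(100)) == 4950


def slow_test():
    # Stands in for a test with an external dependency (database, network)
    time.sleep(0.2)
    assert True


SLOW_THRESHOLD_SECONDS = 0.1  # arbitrary budget per test

for test in (fast_test, slow_test):
    start = time.perf_counter()
    test()
    elapsed = time.perf_counter() - start
    flag = "SLOW" if elapsed > SLOW_THRESHOLD_SECONDS else "ok"
    print(f"{test.__name__}: {elapsed:.3f}s [{flag}]")
```

Most test runners will report per-test timings for you; the point is to actually look at them, and act when the total starts creeping up.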
Basically, the more often you want to run a test suite, the faster it needs to run. Simples.
May 16, 2017
My Obligatory Annual Rant About People Who Warn That You Can Take Quality Too Far Like It's An Actual Thing That Happens To Dev Teams
If you teach developers TDD, you can guarantee to bump into people who'll warn you of the dangers of taking quality too far (dun-dun-duuuuuuun!)
"We don't write the tests first because it protects us from over-testing our code", said one person recently. Ah, yes. Over-testing. A common problem in software.
"You need to be careful not to refactor your code too much", said another. And many's the time I've looked at code and thought "This program is just too easy to understand!"
I can't help recalling the time a UK software company, whose main product had literally thousands of open bugs, hired a VP of Quality and sent him around the dev teams warning them that "perfection is the enemy of good enough". Because that was their problem; the software was just too good.
It seems to still pervade our industry's culture, this idea that quality is the enemy of getting things done, despite mountains of very credible evidence that - in the vast majority of cases - the reverse is true. Most dev teams would deliver sooner if they delivered better software. Not aiming for perfection is the enemy of getting shit done more accurately sums up the relationship between quality and productivity in our line of work.
That's not to say that there aren't any teams who have ever taken it too far. In safety-critical software, the costs ramp up very quickly for very modest improvements in reliability. But the fact is that 99.9% of teams are so far away from this asymptote that, from where they're standing, good enough and too good are essentially the same destination.
Worry about wasting time on silly misunderstandings about the requirements. Worry about wasting time fixing avoidable coding errors. Worry about wasting time trying to hack your way through incomprehensible spaghetti code to make changes. Worry about wasting your time doing the same repeatable tasks manually over and over again.
But you very probably needn't worry about over-testing your code. Or about doing too much refactoring. Or about making the software too good. You're almost certainly not in any immediate danger of that.
May 13, 2017
"Test-Driven Development" - The Clue's In The Name
When devs tell me that they do Test-Driven Development "pragmatically", I'm immediately curious as to what they mean by that. Anyone with sufficient experience in software development will be able to tell you that "pragmatic" is secret code for "not actually doing it".
Most commonly, "pragmatic" TDD turns out to mean declaring implementation classes and interfaces first, and then writing tests for them. This is not TDD, pragmatic or otherwise. The clue's in the name.
Alarmingly, quite a few online video tutorials in TDD do exactly this. So it's understandable how thousands of devs can end up thinking it's the correct way to do it.
But when someone tells you that you don't need to start by writing a failing test, what they're really saying is you don't have to do TDD. And they're right. You don't.
But if you're doing TDD, then putting the test first is kind of the whole point.
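In miniature, and sketched here in Python with an invented example, the order of events looks like this:

```python
# The TDD order of events, in miniature: the test exists - and fails -
# before the implementation does.

# Step 1 (red): write the test first. With no leap_year() defined yet,
# running this fails for the right reason - a NameError, not a wrong answer.
def test_century_years_are_not_leap_years():
    assert leap_year(1900) is False


# Step 2 (green): write just enough implementation to make the test pass
def leap_year(year):
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0


# Step 3: run the test and see it pass
test_century_years_are_not_leap_years()
```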
It's like telling someone that it's okay to have a pork pie if you're a vegan. What they mean is "You don't have to be vegan".
If you're going vegan, pork pies are out. And if you're doing TDD, writing implementation code first is a no-no.
Good. I'm glad we got that sorted.
May 8, 2017
How To Avoid The TDD Slowdown
Both personal experience and several empirical studies have taught me that TDD works. By "works", I mean that it can help us to deliver more useful, reliable software that's easier to change, and at little or no extra cost in time and effort.
That describes the view from the top of the TDD hill. To enjoy the view, you've got to climb the hill. And this may be where TDD gets its reputation for taking longer and slowing teams down.
First of all, TDD's learning curve should not be underestimated. I try to make it crystal clear to the developers I train and mentor not to expect amazing results overnight. Plan for a journey of 4-6 months before you get the hang of TDD. Plan for a lead time of maybe a year or more before your business starts to notice tangible results. Adopting TDD is not for the impatient.
Instead, give yourself time to learn TDD. Make a regular appointment with yourself to sit down and mindfully practice it on increasingly ambitious problems. Perhaps start with simpler TDD katas, and then maybe try test-driving one or two personal projects. Or set aside one day a week where your focus will be on TDD and getting it right, while the other four days you "get shit done" the way you currently know how.
Eventually, developers make the transition to test-driving most of their code most of the time, with no apparent loss of productivity.
After this rookie period, the next obstacle teams tend to hit is the unmaintainability of their test code. It's quite typical for newly minted Test-Driven Developers to under-refactor their test code, and over time the tests themselves become a barrier to change. However much refactoring you're doing, you should probably do more. I say that with high confidence, because I've never actually seen test code that was cleaner than it needed to be. (Though I've seen plenty that was over-engineered - let's not get those two problems mixed up!)
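A small Python sketch (names invented) of what that refactoring typically looks like - duplicated setup extracted into a creation helper:

```python
# Under-refactored test code repeats the same setup in every test...
def test_new_account_has_zero_balance_unrefactored():
    account = {"owner": "alice", "balance": 0, "currency": "GBP"}
    assert account["balance"] == 0


# ...extracting a creation helper removes the duplication, so a change
# to how accounts are built touches one place instead of every test
def make_account(owner="alice", balance=0, currency="GBP"):
    return {"owner": owner, "balance": balance, "currency": currency}


def test_new_account_has_zero_balance():
    assert make_account()["balance"] == 0


def test_deposit_increases_balance():
    account = make_account()
    account["balance"] += 50
    assert account["balance"] == 50


test_new_account_has_zero_balance_unrefactored()
test_new_account_has_zero_balance()
test_deposit_increases_balance()
```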
Refactoring is one of the most undervalued skills in software development, but it is hard to learn. And employers routinely make the mistake of not emphasising it when they're hiring. Your refactoring skills need to be well-developed if you want to sustain TDD. More bluntly, you cannot learn TDD if you don't learn refactoring.
The other barrier I'm increasingly seeing teams hit is slow-running tests. I think this is in part a result of teams relying exclusively on acceptance tests using tools like Cucumber and Fitnesse, leading to test suites that can - in extreme cases - take hours to run. To sustain the pace of change, we need feedback far sooner. Experienced TDD-ers endeavour to test as much of the logic of their code as possible using fast-running unit tests (or "developer tests", if you like) that exclude external dependencies and don't rely on layers of interpretation or external test data files.
Learn to organise your tests into a pyramid, with the base of the pyramid - the vast bulk of the tests - being fast-running unit tests that we can run very frequently to check our logic. Experienced TDD-ers treat acceptance tests as... well... acceptance tests. Not regression tests.
Another pitfall is over-mocking. When too many of our tests know too much about the internal interactions within the objects they're testing, we can end up baking in a bad design. When we try to refactor, a bunch of tests can get broken, even though we haven't changed the logic at all. Used as an interface design tool, mocks can help us achieve a loosely-coupled "Tell, Don't Ask" style of design. Abused as a testing crutch to get around dependency issues, however, and mocks can hurt us. I tend to use them sparingly, typically at system or component or service boundaries, to help me design the interfaces for my integration code.
(And, to be clear, I'm talking here specifically about mock objects in the strictest sense: not stubs that return test data, or dummies.)
So, if you want to avoid the TDD slowdown:
1. Make a realistic plan to learn and practice
2. Work on those refactoring muscles, and keep your test code clean
3. Aim for a pyramid of tests, with the bulk being fast-running unit tests
4. Watch those mocks!
May 6, 2017
Not All Test Doubles Make Test Code Brittle
Much talk out there in Interweb-land about when to use test doubles, when not to use test doubles, and when to confuse mocks with stubs (which almost every commentator seems to).
Robert C. Martin blogs about how he uses test doubles sparingly, and makes a good case for avoiding the very real danger of "over-mocking", where all your unit tests expose internal details of interactions between the object they're testing and its collaborators. This can indeed lead to brittle test code that has to be rewritten often as the design evolves.
But mocks are only one kind of test double, and they definitely have their place. And let's also not confuse mock objects with mocking frameworks. Just because we created it using a mocking tool, that doesn't necessarily mean it's a mock object.
I'm always as clear as I can be that a mock object is one that's used to test an interaction with a collaborator; one that allows us to write a test that fails when the interaction doesn't happen. They're a tool for designing interfaces, really. And you don't need a mocking framework to write mock objects.
I, too, use mock objects sparingly. Typically, for two reasons:
1. Because the object being interacted with has direct external dependencies (e.g. a database) that I don't want to include in the execution of the unit test
2. Because the object being interacted with doesn't exist yet - in terms of an implementation. "Fake it 'til you make it."
In both cases, I'm clear in my own mind that it's only a mock object if the test is specifically about the interaction. A test double that pretends to fetch data from a SQL database is a stub, not a mock. Test doubles that provide test data are stubs. Test doubles that allow us to test interactions are mocks.
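Hand-rolled, the distinction looks something like this - a Python sketch with invented names, and no framework required:

```python
class StubRates:
    """A stub: answers a question with canned test data."""
    def rate_for(self, currency):
        return 1.25


class MockLogger:
    """A mock: records calls so the test can assert the interaction happened."""
    def __init__(self):
        self.messages = []

    def log(self, message):
        self.messages.append(message)


def convert(amount, currency, rates, logger):
    result = amount * rates.rate_for(currency)
    logger.log(f"converted {amount} {currency}")
    return result


# Stub-style test: asserts on the return value; how the data was fetched
# is invisible to the test
assert convert(100, "USD", StubRates(), MockLogger()) == 125.0

# Mock-style test: asserts the interaction occurred - this is exactly
# what couples the test to internal details
logger = MockLogger()
convert(100, "USD", StubRates(), logger)
assert logger.messages == ["converted 100 USD"]
```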
Mocks necessarily require our tests to specify an internal interaction. What method should be invoked? What parameter values should be passed? I tend to ask those kinds of questions less often.
Stubs don't necessarily have to expose those internal details in the test code. Knowledge of how the object under test asks for the data can be encapsulated inside a general-purpose stub implementation and left out of the actual test itself.
In this example, I'm stubbing an object that knows about video library members who expressed an interest in newly added titles that match a certain string. This is one of those "fake it 'til you make it" examples. We haven't built the component that manages those lists yet.
The stub is parameterised, and we pass in the test data to its constructor. It's not revealed in the test how EmailAlert gets that data from the stub.
This stub code, of course, is test code, too. But using this technique, we don't have to repeat the knowledge of how the stub provides its data to the object under test. So if that detail changes, we only need to change it in one place.
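The code for this example isn't reproduced here, so here's a Python sketch of the shape being described - class and method names are my guesses at the original:

```python
class InterestedMembersStub:
    """Stands in for the not-yet-built component that manages interest
    lists. Test data goes in through the constructor; knowledge of how
    the object under test asks for it stays encapsulated in here."""
    def __init__(self, members):
        self._members = members

    def members_interested_in(self, title_keyword):
        return self._members


class EmailAlert:
    def __init__(self, interested_members):
        self._interested_members = interested_members

    def recipients_for(self, new_title):
        return list(self._interested_members.members_interested_in(new_title))


# The test never reveals *how* EmailAlert gets the data from the stub -
# so if that detail changes, only the stub changes
alert = EmailAlert(InterestedMembersStub(["jason@example.com"]))
assert alert.recipients_for("Star Trek") == ["jason@example.com"]
```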
Another thing I do sometimes is use a mocking framework to create dummies of objects where we're not interested in the interaction and no test data is provided - the object just needs to exist, with an identity of its own, for the purposes of our test.
In this example, Title doesn't need to be a real implementation. We're not interested in any interactions with Title, but we do need to know if it's in the library at the end. This test code doesn't expose any internal details of how Library.donate() works.
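The code for this example isn't shown here either; a Python sketch of the same idea (details guessed from the description):

```python
class DummyTitle:
    pass  # no behaviour needed; object identity is all the test relies on


class Library:
    def __init__(self):
        self._titles = []

    def donate(self, title):
        self._titles.append(title)

    def contains(self, title):
        return title in self._titles


# The test checks only the observable outcome - that the donated title
# ends up in the library - without exposing how donate() works inside
title = DummyTitle()
library = Library()
library.donate(title)
assert library.contains(title)
```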
If you check out the code for my array combiner spike, you'll notice that there's no use of test doubles at all. This is because of its architectural nature. There are no external dependencies: no database, no use of web services, etc. And there are no components of the design that were so complex that I felt the need to fake them until I made them.
So, to summarise, in my experience over-reliance on mocks can bake in a bad design. (Although, used wisely, they can help us produce a much cleaner design, so there's a balance to be struck.) But I thought I should just qualify "test double", because not all uses of them have that same risk.
April 23, 2017
The Win-Win-Win of Clean Code
A conversation I had with a development team last week has inspired me to write a short post about the Win-Win-Win that Clean Code can offer us.
Code that is easier to understand, made of simpler parts, low in duplication and supported by good, fast-running automated tests tends to be easier to change and cheaper to evolve.
Code that is easier to understand, made of simpler parts, low in duplication and supported by good, fast-running automated tests also tends to be more reliable.
And code that is easier to understand, made of simpler parts, low in duplication and supported by good, fast-running automated tests - it turns out - tends to require less effort to get working.
It's a tried-and-tested marketing tagline for many products in software development - better, faster, cheaper. But in the case of Clean Code, it's actually true.
It's politically expedient to talk in terms of "trade-offs" when discussing code quality. But, in reality, show me the team who made their code too good. With very few niche exceptions - e.g., safety-critical code - teams discover that when they take more care over code quality, they don't pay a penalty for it in terms of productivity.
Unless, of course, they're new to the practices and techniques that improve code quality, like unit testing, TDD, refactoring, and all that lovely stuff. Then they have to sacrifice some productivity to the learning curve.
Good. I'm glad we had this little chat.