April 6, 2018


Could Refactoring (& Refuctoring) Help Us Test Claims About the Benefits of Clean Code?

One of the more frustrating things about teaching developers about code craft and "Clean Code" is the lack of credible hard evidence from respectable sources about its claimed benefits.

Not only does this make code craft a tougher sell to skeptics - and there was a time when I was one of them, decades ago - but it also calls into question whether the alleged benefits are real.

The biggest barriers to doing research in this area have been twofold:

1. The lack of data points. Most software engineering academic studies take data from a handful of projects. If this were, say, medical research, we'd never get our medicines on to the market.

2. The problem of comparing apples with apples. There are so many factors in software development that it's pretty much impossible to isolate one and rule out all others. Studies into the effects of adopting TDD can't account for the variations in experience and ability, for example. Teams new to TDD tend to have to deal with a steep learning curve before they become productive again.

When I consider some of the theories about what makes code harder to change - the central plank of the code craft thesis - some have strong evidence to back them up, others... not so much.

I've had a bit of a brainwave in this area that might help researchers. Take a code base, then vary it along a single dimension: e.g., refactor to remove duplication, or "refuctor" to introduce duplication (by inlining functions and modules). The resulting variants should all be functionally equivalent, but the degree of variation could be finely controlled. Then ask developers to make changes to the logic, and measure how much code had to be edited to achieve those changes. Automated acceptance tests would ensure that every change was logically equivalent.

I can easily envisage how refactoring (and its evil twin, refuctoring) could be used to vary readability, complexity, duplication, coupling and cohesion (e.g., by moving methods between classes to introduce or eliminate feature envy), "swappability" (e.g., by introducing dependency injection, or by reversing the dependency inversion, using explicit references to concrete implementations of interfaces) and a range of other code qualities. Automated tests could ensure that every variant still works exactly the same way on the outside.

And the tests themselves could be varied. For example, you could manipulate test suite execution time so that in some cases developers had to wait an hour for feedback, while in others they needed to wait only seconds for the same feedback.

I think I might be on to something. What do you think?


March 24, 2018


Code Craft: What Is It, And Why Do You Need It?

One of my missions at the moment is to spread the word about the importance of code craft to organisations of all shapes and sizes.

The software craftsmanship (now "software crafters") movement may have left some observers with the impression that a bunch of prima donna programmers were throwing their toys out of the pram over "beautiful code".

For me, nothing could be further from the truth. It's always been clear in my mind - and I've tried to be clear when talking about craft - that it's not about "beautiful code", or about "masters and apprentices". It has always been about delivering software that works - does what end users need - and that can be easily changed to solve new problems.

I learned early on that iterating our designs was the ultimate requirements discipline. Any solution of any appreciable complexity is something we're unlikely to get right first time. That would be the proverbial "hole in one". We should expect to need multiple passes at it, each pass getting it less wrong.

Iterating software designs requires us to be able to keep changing the code over and over. If the code's difficult to change, then we get fewer throws of the dice. So there's a simple business truth here: the harder our code is to change, the less likely we are to deliver a good working solution. And, as time goes on, the less able we are to keep our working solution working, as the problem itself changes.

For me, code craft's about delivering the right thing in the short-to-medium term, and about sustaining the pace of innovation to keep our solution working in the long term.

The factors involved here are well-understood.

1. The longer it takes us to re-test our software, the bigger the cost of fixing anything we broke. This is supported by a mountain of evidence collected from thousands of projects over several decades. The cost of fixing bugs rises exponentially the longer they go undetected. So a comprehensive suite of good, fast-running automated tests is an essential ingredient in minimising the cost of changing code. I see slow re-testing being a major bottleneck for many organisations, and I've seen the devastating effect long testing feedback loops can have on a business.

2. The harder it is to understand the code, the more likely it is we'll break it if we change it.

3. The more complex our code is, the harder it is to understand and the easier it is to break. More ways for it to be wrong, basically.

4. Duplication in our code multiplies the cost of changing common logic.

5. The more the different units* in our software depend on each other, the wider the potential impact of changing one unit on other units. (The "ripple effect").

6. When units aren't easily swappable, the impact of changing one unit can break other modules that interact with it.

* Where a "unit" could be a function, a module, a component, or a service. A unit of reusable code, essentially.

So, six key factors determine the cost of changing code:

* Test Assurance & Execution Time
* Readability
* Complexity
* Duplication
* Coupling
* Abstraction of Dependencies

On top of these, a few other factors can make a big difference.

Firstly, the amount of "friction" in the delivery pipeline. I'd classify "friction" here as "steps in releasing or deploying working software into production that take a long time and/or have a high cost". Manually testing the software before a release would be one example of high friction. Manually deploying the executable files would be another.

The longer it takes, the more it costs and the more error-prone the delivery process is, the less often we can deliver. When we deliver less often, we're iterating more slowly. When we iterate more slowly, we're back to my "fewer throws of the dice" metaphor.

The frequency of releases is also directly related to the size of each release. Releasing changes in big batches has other drawbacks, too. Most importantly - because software either works as a whole or it doesn't - big releases incorporating many changes present us with an all-or-nothing choice. If change X is wrong, we now have to carefully rework that one thing with all the other changes still in place. It's so much easier to do a single release for change X by itself, and, if it doesn't work, roll it back.

Another factor to consider, as an aside, is how easy it is to undo mistakes if necessary. If my big refactoring goes awry, can I easily get back to the last good state of the code? If a release goes pear-shaped, can we easily roll it back to a working version, with minimal disruption to our end customer?

Small releases help a lot in this respect, as do Version Control and Continuous Integration. VCS and CI are like seatbelts for programmers: they can significantly reduce lost time if we have a little accident.

So, I add:

* Small & Frequent Releases
* Frictionless Delivery Processes (build-test-deploy automation)
* Version Control
* Continuous Integration

to my working definition of "code craft".

Note that there's more to delivering software than these things. There's requirements, there's UX, there's InfoSec, there's data management, and a heap of other considerations. Which is why I'm careful to distinguish between code craft and software development.

Organisations who depend on software need code that works and that can change and stay working. My belief is that anyone writing software for a living needs to get to grips with code craft.

As software continues to "eat the world", this need will grow. I've watched multi-billion-dollar businesses on their knees because their software and systems couldn't change fast enough. As the influence of code spreads into every facet of life, our ability to change code becomes more and more a limiting factor on what we can achieve.

To borrow from Peter McBreen's original book on software craftsmanship, there's a code craft imperative.



March 11, 2018


Proposing The xUnit "Meta-Kata"

A while back I ruminated on refactoring old-fashioned "test it all with a main method" test code (like we did back in the day) into the xUnit unit test framework pattern.

It occurred to me that this might make a good code kata: TDD a well-known kata (e.g., FizzBuzz, Bowling Game, "Rock, Paper, Scissors"), but starting without a unit testing framework, doing it all in a single main method.

As the code evolves, refactor the test code to remove code smells like methods testing more than one thing, classes testing more than one "unit" or "feature" (depending on how you roll), high-level modules that depend directly on low-level test fixtures, multiple tests being different examples of the same test, and so on.

It could be an interesting exercise in discovering frameworks. Perhaps it'll take multiple katas for a complete xUnit framework to reveal itself, just as so many great and useful frameworks don't really take shape until they've been reused a few times on other problems.

It might also be an exercise in applying the TDD discipline on two problems simultaneously; a test of your Craft Fu.

Really, it would be a kata within a kata: a meta-kata, if you like. And as such, I think it could be really interesting and rather challenging. I'll hopefully be giving it a go - well, probably a few goes - when I pair with my "apprentice" Will Price soon. When I think we've cracked it, I'll post a screencast and the code (with version history, so you can play it back).



February 4, 2018


Don't Bake In Yesterday's Business Model With Unmaintainable Code

I'm running a little poll on the Codemanship Twitter account asking whether code craft skills should be something every professional developer should have.




I've always seen these skills as foundational for a career as a developer. Once we've learned to write code that kind of works, the next step in our learning should be to develop the skills needed to write reliable and maintainable code. The responses so far suggest that about 95% of us agree (more than 70% of us strongly).

Some enlightened employers recognise the need for these skills, and address the lack of them when taking on new graduates. Those new hires are the lucky ones, though. Most employers offer no training in unit testing, TDD, refactoring, Continuous Integration or design principles at all. They also often have nobody more experienced who could mentor developers in those things. It's still sadly very much the case that many software developers go through their careers without ever being exposed to code craft.

This translates into a majority of code being less reliable and less maintainable, which has a knock-on effect in the wider economy caused by the dramatically higher cost of changing that code. It's not the actual £ cost that has the impact, of course. It's the "drag factor" that hard-to-change code has on the pace of innovation. Bosses routinely cite IT as being a major factor in impeding progress. I'm sure we can all think of businesses that were held back by their inability to change their software and their systems.

For all our talk of "business agility", only a small percentage of organisations come anywhere close. It's not because they haven't bought into the idea of being agile. The management magazines are now full of chatter about agility. No shortage of companies that aspire to be more responsive to change. They just can't respond fast enough when things change. The code that helped them scale up their operations simultaneously bakes in a status quo, making it much harder to evolve the way they do business. Software giveth, and software taketh away. I see many businesses now achieving ever greater efficiencies at doing things the way they needed to be done 5, 10 or 20 years ago, but unable to adapt to the way things are today and might be tomorrow.

I see this in finance, in retail, in media, in telecoms, in law, in all manner of private sector organisations. And I see it in the public sector, too. "IT delays" is increasingly the reason why government policies are massively delayed or fail to be rolled out altogether. It's a pincer movement: we can't do X at the scale we need to without code, and we can't change the code to do X+1 for a rapidly changing business landscape.

I've always maintained that code craft is a business imperative. I might even go as far as to say a societal imperative, as software seeps into every nook and cranny of our lives. If we don't address issues like how easy to change our code is, we risk baking in the past, relying on inflexible and unreliable systems that are as anachronistic to the way things need to be in the future as our tired old and no-longer-fit-for-purpose systems of governance. An even bigger risk is that other countries will steal a march on us, in much the same way that more agile tech start-ups can steam ahead of established market players simply because they're not dragging millions of lines of legacy code behind them.

While the fashion today is for "digital transformations", encoding all our core operations in software, we must be mindful that legacy code = legacy business model.

So what is your company doing to improve their code craft?






February 1, 2018


BDD & Specification By Example - Where Did We Go Wrong?

I've been saving this post up for a while, but with a bit of pre-dinner free time I wanted to put it out there now.

I meet a lot of teams, and one thing many of them tell me is that the "customer tests" they've been driving their designs from are actually written by the developers, not the customer.



Sure, they're written using a "Behaviour-Driven Development" or "Acceptance Testing" tool like Cucumber or Fitnesse. But just because you've built a "granny annex" on your house, if there's no granny living in it, it's just an "annex".

We've dropped the ball on this. The CHAOS report, published every year by the Standish Group, consistently cites lack of customer involvement as the number one factor in project failure. A tool won't fix that.

Especially when that tool wasn't designed with customer collaboration in mind. When your "Getting Started" guide begins "First, install Visual Studio..." or requires your customer to learn a mark-up language or to use version control, arguably you're bound to have a hard time getting them to engage in the process.

Increasingly, I work with teams who want to somehow connect the way their customer actually prefers to capture examples with the way devs like to automate tests. 90% of the time, that means pulling data out of Excel spreadsheets - still the most widely used tool in both communities - into unit tests. Some unit testing frameworks even have that facility built in (e.g., MSTest for .NET). But reading data from spreadsheets is child's play for most developers. With OLE DB or JDBC, for example, a spreadsheet's just a database.

But, regardless of the tools, the problem most teams need to solve is a people problem. I've found that close customer involvement is so critical to the chances of a team succeeding at solving the customer's problems that I actually stop development until they engage at the level we need them to. No play? No code.

The mistake many of us make is to give them a choice. "Would you like to spend a lot of time with us discussing requirements and playing with candidate releases and giving us feedback?" "No thanks, ta very much. See you in a year's time."

We made a rod for our backs by allowing them to be absentee partners and trying to figure out on their behalf what they want and need. Specification By Example presents us with an opportunity to make the relationship clearer. The customer has to be "trained" to understand that if they haven't agreed a test for it, they ain't gonna get it.




A Bit of Old School BDD with NUnit & MS Excel

I'm going Old School this morning with my pairing partner, and while she's popped out for a meeting, I thought I'd quickly jot down what we've been working on.

Back in the good old days before BDD/ATDD frameworks, when we wanted to automate customer tests we just captured the customer's example data in something like MS Excel and then wrote a bit of code to read that data into a unit test. (That, essentially, is what SBE tools do, just with some bells and whistles.)

For example, imagine our customer wants to be able to calculate square roots using the software. We could agree an acceptance test, in the trendy hipster "Given...When...Then..." style, and put that in a spreadsheet, like so.



If we name the cell range containing the example data "examples" (for ease of extracting using OLE DB), and save this spreadsheet in the root directory of our Visual Studio test project, then we can relatively easily suck out that data to provide NUnit test cases for a parameterised test with arguments that match the data in the table.

Here's a complete source listing for our basic spike.
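The original listing was a screenshot, but a minimal sketch of roughly what it might look like follows - assuming a hypothetical Maths.SquareRoot() method under test, "input" and "expected" column headings in the named range, a spreadsheet file called examples.xlsx, and the ACE OLE DB provider being installed:

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.OleDb;
using NUnit.Framework;

[TestFixture]
public class SquareRootTests
{
    // Pulls the customer's example data out of the named range "examples"
    private static IEnumerable<TestCaseData> Examples()
    {
        var connectionString =
            "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=examples.xlsx;" +
            "Extended Properties='Excel 12.0 Xml;HDR=YES'";
        using (var connection = new OleDbConnection(connectionString))
        {
            var examples = new DataTable();
            new OleDbDataAdapter("SELECT * FROM [examples]", connection).Fill(examples);
            foreach (DataRow row in examples.Rows)
                yield return new TestCaseData(Convert.ToDouble(row["input"]),
                                              Convert.ToDouble(row["expected"]));
        }
    }

    // One parameterised test, with a test case for every row of the customer's examples
    [TestCaseSource(nameof(Examples))]
    public void CalculatesSquareRoot(double input, double expected)
    {
        Assert.AreEqual(expected, Maths.SquareRoot(input), 0.0001);
    }
}
```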



(We're going to try and refine this a bit, and see if it can't be made more general. One of the downsides of using a custom TestCaseSource is that we can't parameterise it easily to specify different Excel files and different ranges. Though why such a mechanism doesn't already exist is a bit of a mystery, after 15+ years of NUnit.)



January 20, 2018


10 Classic TDD Mistakes

20 years of practicing Test-Driven Development, and training and coaching a few thousand developers in it, has taught me this is not a trivial skillset to learn. There are many potential pitfalls, and I've seen many teams dashed on the rocks by some classic mistakes.

You can learn from their misfortunes, and hopefully steer a path through these treacherous waters. Here are ten classic mistakes I've seen folk make with TDD.

1. Underestimating The Learning Curve

Often, when developers try to adopt TDD, they have unrealistic expectations about the results they'll be getting in the short term. "Red-Green-Refactor" sounds simple enough, but it hides a whole world of ideas, skills and habits that need to be built to be effective at it. If I had a pound for every team that said "we tried TDD, and it didn't work"... Plan for a journey that will take months and years, not days and weeks.

2. Confusing TDD with Testing

The primary aim of TDD is to come up with a good design that will satisfy our customer's needs. It's a design discipline that just happens to use tests as specifications. A lot of people still approach TDD as a testing discipline, and focus too much on making sure everything is tested when they should be thinking about the design. If you're rigorous about applying the Golden Rule (only write solution code when a failing test requires it), your coverage will be high. But that isn't the goal. It's a side benefit.

3. Thinking TDD Is All The Testing They'll Ever Need

If you practice TDD fairly rigorously, the resulting automated tests will probably be sufficient much of the time. But not all of the time. Too many teams pay no heed to whether high risk code needs more testing. (Indeed, too many teams pay no heed to high risk code at all. Do you know where your load-bearing code is?) And what about all those scenarios you didn't think of? It's rare to see a test suite that covers every possible combination of user inputs. More work has to be done to explore the edges of what was specified.

4. Not Starting With Failing Customer Tests

In all approaches to writing software, how we collaborate with our customers is critically important. Designs should be driven directly from testable specifications that we've explicitly agreed with them. In TDD, unsurprisingly, these testable specifications come in the form of... erm... tests. The design process starts by working closely with the customer to flesh out executable acceptance tests that fail. We do not start writing code until we have those failing customer tests. We do not stop writing code until those tests are passing. But a lot of teams still set out on their journey with only the vaguest sense of the destination. Write all the unit tests you want, but without failing executable customer tests, you're just being super-precise about your own assumptions of what the customer wants.

5. Confusing Tools With Practices

Just because they're written using a customer test specification tool like Cucumber or Fitnesse does not mean those are customer tests. They could be automated using JUnit, and be customer tests. What makes them customer tests is that you wrote them with the customer, codifying their examples of how the software will be used. Similarly, just because you used a mock objects framework, that doesn't mean that you are mocking. Mocking is a technique for discovering the design of interfaces by writing failing interaction tests. Just because you're writing JUnit tests doesn't mean you're doing TDD. Just because you use Resharper doesn't mean you're refactoring. Just because you're running Jenkins doesn't mean you're doing Continuous Integration. Kubernetes != Continuous Delivery. And the list goes on (and on and on). Far too many developers think that using certain tools will automatically produce certain results. The tools will not do your thinking for you. As far as I'm aware, RSpec doesn't discuss the requirements with the customer and write the tests itself. You have to talk to the customer.

6. Not Actually Doing TDD. At All.

When I run the Codemanship TDD training workshop, I often start the first day by asking for a show of hands from people who think they've done TDD. At the end of the first day I ask them to raise their hands if they still think they've done TDD. The number is always considerably lower. Here's the thing: I know from experience that 9 out of 10 developers who put "TDD" on their CV really mean "unit testing". Many don't even know what TDD is. I know this sounds basic, but if you're going to try doing TDD, try doing TDD. Google it. Read an introduction. Watch a tutorial or three. Buy a book. Come on a course.

7. Skimping On Refactoring

To produce code that's as clean as I feel it needs to be, I find I tend to spend about 50% of my time refactoring. Most dev teams do a lot less. Many do none at all. Now, I know many will say "enough refactoring" is subjective, and the debate rages on social media about whether anyone is doing too much refactoring, but let's be frank: the vast majority of us are simply not doing anywhere near enough. The effects of this are felt soon enough, as the going gets harder and harder. Refactoring's a very undervalued skill; I know from my training orders. For every ten TDD courses I run, I might be asked to run one refactoring course. Too little refactoring makes TDD unsustainable. Typical outcome: "We did TDD for 6 months, but our tests got so hard to change that we threw them away."

8. Making The Tests Too Big

The granularity of tests is key to making TDD work as a design discipline, as well as determining how effective your test suites will be at pinpointing broken code. When our tests ask too many questions (e.g., "What are the first 10 Fibonacci numbers?"), we find ourselves having to make a bunch of design decisions before we get feedback. When we work in bigger batches, we make more mistakes. I like to think of it like crossing a stream using stepping stones; if the stones are too far apart, we have to make big, risky leaps, increasing the risk of falling in. Start by asking "What's the first Fibonacci number?".
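To illustrate, that first small step might look something like this sketch (the Fibonacci class and its Number() method are hypothetical, and I'm taking the sequence to start at 1):

```csharp
using NUnit.Framework;

[TestFixture]
public class FibonacciTests
{
    // One small question, one small design decision
    [Test]
    public void FirstFibonacciNumberIsOne()
    {
        Assert.AreEqual(1, Fibonacci.Number(1));
    }
}
```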

9. Making The Tests Too Small

Conversely, I also often see people writing tests that focus on minute details that would naturally fall out of passing a more interesting higher-level test. For example, I see people writing tests for getters and setters that really only need to exist because they're used in some interesting behaviour that the customer wants. I've even seen tests that create an object and then assert that it isn't null. Those kinds of tests are redundant. I can kind of see where the thinking comes from, though. "I want to declare a BankAccount class, but the Golden Rule of TDD is I can't until I have a failing test that requires it. So I'll write one." But this is coming at it from the wrong direction. In TDD, we don't write tests to force the design we want. We write tests for behaviour that the customer wants, and discover the design by passing it (and by refactoring afterwards if necessary). We'll need a BankAccount class to test crediting an account, for example. We'll need a getter for the balance to check the result. Focus on behaviour and let the details follow. There's a balance to be struck on test granularity that comes with experience.
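For example, instead of a test that just asserts a BankAccount isn't null, or a test for a balance getter in isolation, a behaviour-focused test might look something like this sketch (the class and method names are purely illustrative):

```csharp
using NUnit.Framework;

[TestFixture]
public class BankAccountTests
{
    // The BankAccount class and its Balance getter earn their place
    // by being needed to specify this behaviour
    [Test]
    public void CreditingAnAccountAddsTheAmountToTheBalance()
    {
        var account = new BankAccount();

        account.Credit(100.00m);

        Assert.AreEqual(100.00m, account.Balance);
    }
}
```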

10. Going Into "Design Autopilot"

Despite what you may have heard, TDD doesn't take care of the design for you. You can follow the discipline to the letter, and end up with a crappy design.

TDD helps by providing frequent "beats" in development to remind us to think about the design. We're thinking about what the code should do when we write our failing test. We're thinking about how it should do it when we're passing the test. And we're thinking about how maintainable our solution is after we've passed the test as we refactor the code. It's all design, really. But it's not magic.

YOU STILL HAVE TO THINK ABOUT THE DESIGN. A LOT.


So, there you have it: 10 classic TDD mistakes. But all completely avoidable, with some thought, some practice, and maybe a bit of help from an old hand.


January 15, 2018


Refactoring to the xUnit Pattern

16 days left to get my spiffy on-site Unit Testing training workshop at half-price. It's jam-packed with unit testy goodness. Here's a little taste of the kind of stuff we cover.

In the introductory part of the workshop, we look at the anatomy of unit test suites and see how - from the most basic designs - we eventually arrive by refactoring at the xUnit design pattern for unit testing frameworks.

If you've been programming for a while, there's a good chance you've written test code in a Main() method, like this:
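The listing here was an image; a minimal sketch of the idea, using a hypothetical BankAccount class and .NET's built-in Debug.Assert():

```csharp
using System.Diagnostics;

public class Program
{
    public static void Main()
    {
        // Arrange - set up the object in the initial state we need
        var account = new BankAccount(openingBalance: 0.00m);

        // Act - invoke the method we want to test
        account.Credit(100.00m);

        // Assert - check the action had the desired effect
        Debug.Assert(account.Balance == 100.00m, "Crediting should add the amount to the balance");
    }
}
```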



This saves us the bother of having to run an entire application to get quick feedback while we're adding or changing code in, say, a library.

Notice that there are three components to this test:

Arrange - we set up the object(s) we're going to use to be in the initial state we need for this particular test

Act - we invoke the method we want to test

Assert - we ask questions about the final state of our test object(s) to see if the action has had the desired effect

Simples!

Of course, a real-world application might need hundreds or even thousands of such tests. Our Main() method is going to get pretty big and unwieldy if we keep adding more and more test cases.

So we can break it down into multiple test methods, one for each test case. The name of each test method can clearly describe what the test is.
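Again, the original listing was an image, but a sketch of the shape of it (using the same hypothetical BankAccount) might be:

```csharp
using System.Diagnostics;

public class Program
{
    public static void Main()
    {
        // One call per test case
        TestCreditingAddsTheAmountToTheBalance();
        TestDebitingSubtractsTheAmountFromTheBalance();
    }

    private static void TestCreditingAddsTheAmountToTheBalance()
    {
        var account = new BankAccount(openingBalance: 0.00m);
        account.Credit(100.00m);
        Debug.Assert(account.Balance == 100.00m);
    }

    private static void TestDebitingSubtractsTheAmountFromTheBalance()
    {
        var account = new BankAccount(openingBalance: 100.00m);
        account.Debit(50.00m);
        Debug.Assert(account.Balance == 50.00m);
    }
}
```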



Our original Main() method just calls all of our test methods.

But still, when there are hundreds or thousands of test methods, we can end up with one ginormous class. That too can be broken down, grouping related test methods (e.g., all the tests for a bank account) into smaller test fixtures.
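The listing was an image here too; a sketch of one such fixture (the grouping and the RunTests() method name are illustrative):

```csharp
using System.Diagnostics;

// All the tests for a bank account, grouped into one test fixture
public class BankAccountTests
{
    public void RunTests()
    {
        TestCreditingAddsTheAmountToTheBalance();
        TestDebitingSubtractsTheAmountFromTheBalance();
    }

    public void TestCreditingAddsTheAmountToTheBalance()
    {
        var account = new BankAccount(openingBalance: 0.00m);
        account.Credit(100.00m);
        Debug.Assert(account.Balance == 100.00m);
    }

    public void TestDebitingSubtractsTheAmountFromTheBalance()
    {
        var account = new BankAccount(openingBalance: 100.00m);
        account.Debit(50.00m);
        Debug.Assert(account.Balance == 50.00m);
    }
}
```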



Note that each test fixture has a method that invokes all of its test methods, so our original main method doesn't need to invoke them all itself.

This is the final piece of the unit testing jigsaw: the class that tells all of our test fixtures to run their tests. We call this a test suite.
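A sketch of such a suite, with a second, hypothetical fixture (PaymentTests) for illustration:

```csharp
// The test suite tells every test fixture to run its tests
public class AllTests
{
    public static void Main()
    {
        new BankAccountTests().RunTests();
        new PaymentTests().RunTests();
        // ...one line per test fixture
    }
}
```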



At the most basic level, this simple design gives us the ability to write, organise and run large numbers of tests quickly.

As time goes on, we may add a few bells and whistles to streamline the process and make it more useful and usable.

For example, in our current design, when an assertion fails (using .NET's built-in Debug.Assert() method), it will halt execution. If the first test fails in a suite of 1,000 tests, it won't run the other 999. So we might write our own assertion methods to check and report test failures without halting execution.
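A sketch of what such a home-grown assertion helper might look like - it records failures and carries on, rather than halting the run (the Check class is my own illustrative invention):

```csharp
using System;
using System.Collections.Generic;

// Records assertion failures instead of halting execution like Debug.Assert()
public static class Check
{
    private static readonly List<string> failures = new List<string>();

    public static void That(bool condition, string testName)
    {
        if (!condition)
            failures.Add("FAILED: " + testName);
    }

    public static void Report()
    {
        foreach (var failure in failures)
            Console.WriteLine(failure);
        Console.WriteLine(failures.Count == 0
            ? "All tests passed"
            : failures.Count + " test(s) failed");
    }
}
```

Each test method would call Check.That(...) instead of Debug.Assert(...), and the suite would call Check.Report() at the end to print the results.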

And we might want to make the output more user friendly and display more helpful results, so we may add a custom formatter/reporter to write out test results.

And - I can attest from personal experience - it can be a real pain in the you-know-what to have to remember to write code to invoke every test method on every test fixture. So we might create a custom test runner - not just a Main() method - that automates the process of test discovery and execution.

We could, for example, invert the test suite's dependencies on individual test fixtures by extracting a common interface that every fixture must implement for running its tests. Then we could use reflection, or search through the source code, to find all the classes that implement that interface and build the suite automatically.

Likewise, we could specify that test methods must have a specific signature (e.g., start with "Test", a void return type, and have no parameters) and search for all test methods that match.
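A sketch of how a convention-based runner might discover and invoke those test methods by reflection (the "Tests" class-name suffix and "Test" method-name prefix are just the conventions I've assumed here):

```csharp
using System;
using System.Reflection;

public static class TestRunner
{
    // Finds every class whose name ends in "Tests" and runs its public,
    // parameterless, void methods whose names start with "Test"
    public static void RunAll(Assembly assembly)
    {
        foreach (var type in assembly.GetTypes())
        {
            if (!type.Name.EndsWith("Tests"))
                continue;

            var fixture = Activator.CreateInstance(type);
            foreach (var method in type.GetMethods())
            {
                if (method.Name.StartsWith("Test")
                    && method.ReturnType == typeof(void)
                    && method.GetParameters().Length == 0)
                {
                    method.Invoke(fixture, null);
                }
            }
        }
    }
}
```

Calling TestRunner.RunAll(Assembly.GetExecutingAssembly()) would then run every test in the project, with nobody having to remember to wire new fixtures in by hand.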

In my early career, I wrote several unit testing frameworks, and they tended to end up with a similar design. Thousands more had the same experience, and that commonality of experience is captured in the xUnit design pattern for unit testing frameworks.



The original implementation of this pattern was done in Smalltalk ("SUnit") by Kent Beck, and many more have followed in pretty much every programming language you can think of.

In the years since, some useful advanced features have been added, which we'll explore later in the workshop. But, under the hood, they're all pretty much along these lines.







January 9, 2018


Test Granularity Matters. Ask Any Accountant.

It's that time of year when I have to make sure my company's accounts are all up to date and tickety-boo, and I got a useful reminder about why the granularity of our tests really matters.

In my spreadsheet for bank payments and receipts, I have a formula for calculating the closing balance at the end of the financial year. Today, I realised that calculated balance was about £1200 short. Evidently, I had entered either one or more payments or one or more receipts incorrectly.

I had to go back through all the bank statements for the year double-checking every line item against the spreadsheet.

Now, if I'd had a formula for the balance at the end of every line item, I could simply have checked the closing balances on each statement to see where they diverged.

I've experienced similar pain when relying on tests that check logic at too high a level (e.g., system tests or API tests). When a test fails, I have to go rummage through the call stack to figure out where it went wrong - the equivalent of reading all my bank statements looking for the line item that doesn't match. Much time is spent in the debugger: a red flag.

I strongly encourage teams to rely more on small, focused tests that - ideally - have only one reason to fail, and to write those tests as close to the module that's doing that piece of work as they can. So when a test fails, it's easy to deduce that "the problem is this, and the problem is here".


January 7, 2018


Do Your Automated Tests Give You Confidence In Your Code?

I ran a little poll on the @codemanship Twitter account asking:




The responses suggest many developers don't put a lot of faith in their automated tests for detecting bugs. The aim of test automation is to dramatically lower the cost and execution time of regression testing our code so that we're alerted to new bugs sooner rather than later.

The ultimate goal is to have high confidence at any point in time that the software works, and is therefore fit for release. This is a foundational requirement of Continuous Delivery - software should always be shippable.


Examining many test suites, as I do every year, I think I have some insight into this problem. Firstly, most teams that have automated tests don't have particularly good test suites. Much of the code isn't reached by them. Many of the tests ask loose questions, leaving big gaps in their assertions that you could drive a bus-load of bugs through.

Teams quickly learn, after the first few releases, that just because their tests are passing, that doesn't mean the code is working. But there seems to be little appetite for beefing up their test suites to plug the leaks that bugs are pouring in through.

Very few teams test their tests to see how effective they are at catching bugs. Even fewer teams target more exhaustive testing at "load-bearing" code, or even have any awareness of which parts of the code present the highest risk.

Happy Path thinking still dominates the developer mindset. Most of us don't think like testers. We want to show that our code works, not that it doesn't in certain edge cases. So our tests tend to skip over the edge cases.

In code reviews - for those teams that do them on any regular basis - test assurance tends not to be one of the things reviewers look for. At best, line coverage is checked. If the coverage report shows the new or changed code is executed in a test, that's spiffy for most dev teams. And, to be fair, most teams don't even check for that. You'd be shocked at how many teams are genuinely surprised to learn how low their coverage is. "But we do TDD...!" Evidently not much of the time.

Teams that practice TDD fairly rigorously tend to have test suites they can put more faith in. But, even as a TDD trainer and mentor with two decades of experience doing it, I regularly feel the need to take testing further after my design is complete.

I'm a big fan of guided inspection, reading the code carefully, looking for test cases I may have missed. I'm also big on parameterised testing, because it can buy you potentially massive amounts of test coverage with surprisingly little extra test code.
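For example, with NUnit's [TestCase] attribute, one parameterised test can stand in for a whole table of cases (the LeapYears class here is hypothetical):

```csharp
using NUnit.Framework;

[TestFixture]
public class LeapYearTests
{
    // Four extra lines of test code buy four extra test cases
    [TestCase(2016, true)]
    [TestCase(2018, false)]
    [TestCase(1900, false)]  // century years are not leap years...
    [TestCase(2000, true)]   // ...unless they're divisible by 400
    public void IdentifiesLeapYears(int year, bool expected)
    {
        Assert.AreEqual(expected, LeapYears.IsLeapYear(year));
    }
}
```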

And, believe it or not, to some extent you can also automate exploratory testing. One example is the simple Java prototype for generating combinations of inputs for use in JUnit tests that I threw together last year. Another example is tools that can randomly generate input data, like Haskell's QuickCheck (and its many language-specific ports, like JCheck).

I also find simple test analysis techniques like truth tables and decision tables, state transition and program flow models very useful for discovering edge cases I might have missed. Think you're thinking like a tester? Read the first few chapters of Robert Binder's Testing Object Oriented Systems and think again.

So, if you're one of the 58% who said they don't have high confidence in their automated tests, it may be time to take your automated testing to the next level.