September 25, 2018

Learn TDD with Codemanship

Third-Generation Testing - Øredev 2018, Malmö, November 22nd

If you're planning on coming to Øredev in Sweden this November, I'm running a brand new training workshop on the final day about Third-Generation Software Testing.

First-generation testing was manual: running the program and clicking the buttons ourselves. We quickly learned that this was slow and often patchy, creating a severe bottleneck in development cycles.

Second-generation testing removed that bottleneck by writing code to test our code.

But what about the tests we didn't think of?

Exploratory testing brought us back to a manual process of exploring what else might be possible - what combinations of inputs, user actions and pathways - using the code we delivered, outside of the behaviours encoded in our automated tests.

Manual exploratory testing suffers from the same drawbacks as any kind of manual testing, though. It's slow, and can miss heaps of cases in complex logic.

Third-generation testing automates the generation of the test cases themselves, enabling us to explore much wider state spaces than a manual process could ever hope to achieve. With a little extra test code, and a bit of ingenuity, you can explore thousands, tens of thousands, hundreds of thousands and even millions of extra test cases - combinations, paths, random inputs and ranges - using tools you already know.

In this workshop, we'll explore some simple techniques for adapting and reusing our existing unit tests to exhaustively test our critical code. We'll also look at techniques for identifying what code might need us to go further, and how we can use Cloud technology to execute millions of extra tests in minutes.

You can find out more and book your place at http://oredev.org/2018/sessions/third-generation-software-testing



July 27, 2018

Learn TDD with Codemanship

For Load-Bearing Code, Unleash The Power of Third-Generation Testing

As software "eats the world", and people rely more and more on the code we write, there's a strong case for making that code more reliable.

In popular products and services, code may get executed millions or even billions of times a day. In the face of such traffic, the much vaunted "5 nines" reliability (99.999%) just doesn't cut the mustard. Our current mainstream testing practices are arguably not up to the job where our load-bearing code's concerned.

And, yes, when I say "current mainstream practices", I'm including TDD in that. I may test-drive, say, a graph search algorithm in a dozen or so test cases, but put that code in a SatNav system and ship it in 1 million cars, and suddenly a dozen tests doesn't fill me with confidence.

Whenever I raise this issue, most developers push back. "None of our code is that critical", they argue. I would suggest that's true of most of their code. But even in pretty run-of-the-mill applications, there's usually a small percentage of code that really needs to not fail. For that code, we should consider going further with our tests.

The first generation of software testing involved running the program and seeing what happens when we enter certain inputs or click certain buttons. We found this to be time-consuming. It created severe bottlenecks in our dev processes. Code needs to be re-tested every time we change it, and manual testing just takes far too long.

So we learned to write code to test our code. The second generation of software testing automated test execution, and removed the bottlenecks. This, for the majority of teams, is the state of the art.

But there are always the test cases we didn't think of. Current practice today is to perform ongoing exploratory testing, to seek out the inputs, paths, user journeys and combinations our test suites miss. This is done manually by test professionals. When they find a failing test we didn't think of, we add it to our automated suite.

But, being manual, it's slow and expensive and doesn't achieve the kind of coverage needed to go beyond the Five 9's.

Which brings me to the Third Generation of Software Testing: writing code to generate the test cases themselves. By automating exploratory testing, teams are able to achieve mind-boggling levels of coverage relatively cheaply.

To illustrate, here's a parameterised unit test I wrote when test-driving an algorithm to calculate square roots:
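In outline it's a JUnit Parameterized fixture with five hard-coded cases - the Maths class it exercises and the exact expected values shown here are illustrative stand-ins:

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;
    import java.util.Collection;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class MathsTest {

        // five hand-picked cases: input and expected square root
        @Parameters
        public static Collection<Object[]> testCases() {
            return Arrays.asList(new Object[][] {
                    { 0.0, 0.0 },
                    { 1.0, 1.0 },
                    { 2.0, 1.4142135623730951 },
                    { 4.0, 2.0 },
                    { 6.25, 2.5 }
            });
        }

        private final double input;
        private final double expectedRoot;

        public MathsTest(double input, double expectedRoot) {
            this.input = input;
            this.expectedRoot = expectedRoot;
        }

        @Test
        public void calculatesSquareRoot() {
            // Maths.sqrt is the (hypothetical) method under test
            assertEquals(expectedRoot, Maths.sqrt(input), 0.000001);
        }
    }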

Imagine this is going to be integrated into a flight control system. Those five tests don't give me a warm fuzzy feeling about stepping on any plane using this code.



Now, I feel I need to draw attention to this: unit test fixtures are just classes and unit tests are just methods. They can be reused. We can compose new fixtures and new tests out of them.

So I can write a new parameterised test that, for example, generates a large number of random inputs - all unique - using a library called JCheck (a Java port of the Haskell QuickCheck library).
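The shape of it, sketched in plain JUnit rather than JCheck's actual API (Maths.sqrt is again the hypothetical method under test), is roughly this:

    import static org.junit.Assert.assertEquals;

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.LinkedHashSet;
    import java.util.Random;
    import java.util.Set;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class RandomSquareRootTest {

        private static final int NUMBER_OF_CASES = 1000;

        // generate 1,000 unique random inputs to run through the test
        @Parameters
        public static Collection<Object[]> randomInputs() {
            Random random = new Random();
            Set<Double> inputs = new LinkedHashSet<Double>();
            while (inputs.size() < NUMBER_OF_CASES) {
                inputs.add(random.nextDouble() * 10000);
            }
            Collection<Object[]> cases = new ArrayList<Object[]>();
            for (Double input : inputs) {
                cases.add(new Object[] { input });
            }
            return cases;
        }

        private final double input;

        public RandomSquareRootTest(double input) {
            this.input = input;
        }

        @Test
        public void squareOfRootEqualsInput() {
            // no hard-coded expected results: assert a general property instead
            double root = Maths.sqrt(input);
            assertEquals(input, root * root, 0.001);
        }
    }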



Don't worry too much about how this works. The important thing to note is that JCheck generates 1,000 unique random inputs. So, with a few extra lines of code we've jumped from 5 test cases to 1,000 test cases.

And with a single extra character, we can leap up a further order of magnitude by simply adding a zero to the number of cases. Or two zeros for 100x more coverage. Or three, or four. Whatever we need. This illustrates the potential power of this kind of technique: we can cover massive state spaces with relatively little extra code.

(And, for those of you thinking "Yeah, but I bet it takes hours to run" - when I ran this for 1 million test cases, it took just over 10 seconds.)

The eagle-eyed among you will have noticed that I didn't reuse the exact same MathsTest fixture listed above. When test inputs are being generated, we don't have 1,000,000 expected results. We have to generalise our assertions. I adapted the original test into a property-based test, asserting a general property that every correct square root has to have.
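One way to capture that property is as a reusable check - again a sketch, assuming the same hypothetical Maths class:

    import static org.junit.Assert.assertEquals;

    // The general property of a correct square root: squaring the result gives
    // back the original input, within a tolerance. (Maths.sqrt is hypothetical.)
    public class SquareRootProperty {

        private static final double TOLERANCE = 0.001;

        public static void check(double input) {
            double root = Maths.sqrt(input);
            assertEquals(input, root * root, TOLERANCE);
        }
    }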



Our property-based test can be reused in other ways. This test, for example, generates a range of inputs from 1 to 10 at increments of 0.01.
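Reusing the SquareRootProperty check sketched above, that might look like:

    import java.util.ArrayList;
    import java.util.Collection;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class SquareRootRangeTest {

        // inputs from 1 to 10 at increments of 0.01
        @Parameters
        public static Collection<Object[]> range() {
            Collection<Object[]> cases = new ArrayList<Object[]>();
            for (int i = 100; i <= 1000; i++) {
                cases.add(new Object[] { i / 100.0 });
            }
            return cases;
        }

        private final double input;

        public SquareRootRangeTest(double input) {
            this.input = input;
        }

        @Test
        public void satisfiesSquareRootProperty() {
            SquareRootProperty.check(input);
        }
    }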



Again, adding coverage is cheap. Maybe we want to test from 1 to 10000 at increments of 0.001? Easy as peas.

(Yes, these tests take quite a while to run - but that's down to the way JUnit handles parameterised tests, and could be optimised.)

Let's consider a different example. Imagine we have a design with a selection of UI's (Web, Android, iOS, Windows), a selection of local languages (English, French, Chinese, Spanish, Italian, German), and a selection of output formats (Excel, HTML, XML, JSON) and we want to test that every possible combination of UI, language and output works.

There are 96 possible combinations. We could write 96 tests. Or we could generate all the possible combinations with a relatively straightforward bit of code like the Combiner I knocked up in a few hours for larks.
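The gist of it - nested loops over the input arrays, feeding a parameterised test - might look something like this (a sketch; the report-generating code under test is hypothetical):

    import java.util.ArrayList;
    import java.util.Collection;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class ReportCombinationsTest {

        // generate every combination of UI, language and output format: 4 x 6 x 4 = 96 cases
        @Parameters
        public static Collection<Object[]> allCombinations() {
            String[] uis = { "Web", "Android", "iOS", "Windows" };
            String[] languages = { "English", "French", "Chinese", "Spanish", "Italian", "German" };
            String[] formats = { "Excel", "HTML", "XML", "JSON" };

            Collection<Object[]> combinations = new ArrayList<Object[]>();
            for (String ui : uis) {
                for (String language : languages) {
                    for (String format : formats) {
                        combinations.add(new Object[] { ui, language, format });
                    }
                }
            }
            return combinations;
        }

        private final String ui;
        private final String language;
        private final String format;

        public ReportCombinationsTest(String ui, String language, String format) {
            this.ui = ui;
            this.language = language;
            this.format = format;
        }

        @Test
        public void worksForEveryCombination() {
            // the real assertion depends on the design; e.g. exercise a (hypothetical)
            // report generator with this ui, language and format and check the output
        }
    }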



If we added another language (e.g., Polish), we'd go from 96 combinations to 112. It's hopefully easy to see how much easier it could be to evolve the design when the test cases are generated in this way, without dropping below 100% coverage. And, yes, we could take things even further and use reflection to generate the input arrays, so our tests always keep pace with the design without having to change the test code at all. There are many, many possibilities for this kind of testing.

To repeat, I'm not suggesting we'd do this for all our code - just for the code that really has to work.

Food for thought?






March 24, 2018

Learn TDD with Codemanship

Code Craft: What Is It, And Why Do You Need It?

One of my missions at the moment is to spread the word about the importance of code craft to organisations of all shapes and sizes.

The software craftsmanship (now "software crafters") movement may have left some observers with the impression that we were a bunch of prima donna programmers throwing our toys out of the pram over "beautiful code".

For me, nothing could be further from the truth. It's always been clear in my mind - and I've tried to be clear when talking about craft - that it's not about "beautiful code", or about "masters and apprentices". It has always been about delivering software that works - does what end users need - and that can be easily changed to solve new problems.

I learned early on that iterating our designs was the ultimate requirements discipline. Any solution of any appreciable complexity is something we're unlikely to get right first time. That would be the proverbial "hole in one". We should expect to need multiple passes at it, each pass getting it less wrong.

Iterating software designs requires us to be able to keep changing the code over and over. If the code's difficult to change, then we get fewer throws of the dice. So there's a simple business truth here: the harder our code is to change, the less likely we are to deliver a good working solution. And, as time goes on, the less able we are to keep our working solution working, as the problem itself changes.

For me, code craft's about delivering the right thing in the short-to-medium term, and about sustaining the pace of innovation to keep our solution working in the long term.

The factors involved here are well-understood.

1. The longer it takes us to re-test our software, the bigger the cost of fixing anything we broke. This is supported by a mountain of evidence collected from thousands of projects over several decades. The cost of fixing bugs rises exponentially the longer they go undetected. So a comprehensive suite of good fast-running automated tests is an essential ingredient in minimising the cost of changing code. I see it being a major bottleneck for many organisations, and see the devastating effect long testing feedback loops can have on a business.

2. The harder it is to understand the code, the more likely it is we'll break it if we change it.

3. The more complex our code is, the harder it is to understand and the easier it is to break. More ways for it to be wrong, basically.

4. Duplication in our code multiplies the cost of changing common logic.

5. The more the different units* in our software depend on each other, the wider the potential impact of changing one unit on other units. (The "ripple effect").

6. When units aren't easily swappable, changing one unit can break the other units that interact with it.

* Where a "unit" could be a function, a module, a component, or a service. A unit of reusable code, essentially.

So, six key factors determine the cost of changing code:

* Test Assurance & Execution Time
* Readability
* Complexity
* Duplication
* Coupling
* Abstraction of Dependencies

On top of these, a few other factors can make a big difference.

Firstly, the amount of "friction" in the delivery pipeline. I'd classify "friction" here as "steps in releasing or deploying working software into production that take a long time and/or have a high cost". Manually testing the software before a release would be one example of high friction. Manually deploying the executable files would be another.

The longer it takes, the more it costs and the more error-prone the delivery process is, the less often we can deliver. When we deliver less often, we're iterating more slowly. When we iterate more slowly, we're back to my "fewer throws of the dice" metaphor.

Frequency of releases is directly related also to the size of each release. Releasing changes in big batches has other drawbacks, too. Most importantly - because software either works as a whole or it doesn't - big releases incorporating many changes present us with an all-or-nothing choice. If change X is wrong, we now have to carefully rework that one thing with all the other changes still in place. So much easier to do a single release for change X by itself, and if it doesn't work, roll it back.

Another factor to consider is how easy it is to undo mistakes if necessary. If my big refactoring goes awry, can I easily get back to the last good state of the code? If a release goes pear-shaped, can we easily roll it back to a working version, with minimal disruption to our end customer?

Small releases help a lot in this respect, as do Version Control and Continuous Integration. VCS and CI are like seatbelts for programmers: they can significantly reduce the time lost if we have a little accident.

So, I add:

* Small & Frequent Releases
* Frictionless Delivery Processes (build-test-deploy automation)
* Version Control
* Continuous Integration

To my working definition of "code craft".

Note that there's more to delivering software than these things. There's requirements, there's UX, there's InfoSec, there's data management, and a heap of other considerations. Which is why I'm careful to distinguish between code craft and software development.

Organisations who depend on software need code that works and that can change and stay working. My belief is that anyone writing software for a living needs to get to grips with code craft.

As software continues to "eat the world", this need will grow. I've watched multi-billion-dollar businesses brought to their knees because their software and systems couldn't change fast enough. As the influence of code spreads into every facet of life, our ability to change code becomes more and more a limiting factor on what we can achieve.

To borrow from Peter McBreen's original book on software craftsmanship, there's a code craft imperative.



March 16, 2018

Learn TDD with Codemanship

Lamenting the Golden Age of High-Integrity Software That Never Came

When I was a much younger programmer, I read a paper that had a big impact on the way I thought about software integrity.

Up to then, I - like so many - believed that "software has bugs". It seemed inevitable. Because all the software I'd seen had bugs. And all the software I'd written had bugs. We just have to live with it, right?

And then along came this paper on a thing called Cleanroom Software Engineering, and my mind was blown.

IBM wrote a COBOL pre-compiler that had about 85,000 lines of code and zero bugs reported in production. Not one. Ever. And what really struck me is that - bearing in mind how primitive dev tools were in the 1980s - it only took a team of six, achieving an average dev productivity that was measurably higher than the industry average. Also, the cost of maintaining the product - typically a lot higher than the cost of initial development - was relatively low; just one developer-year per year. Because nobody was bug fixing.

Now, of course, compared to software today 85 KLOC isn't much. But it's not insignificant, statistically. Maybe an equivalent product today would have 20x as much code. But what's 20x zero?

A single paper turned my whole worldview about software integrity (vs. productivity) upside-down. I've been lucky enough to experience this kind of approach - not specifically Cleanroom, but along similar lines - since, and seen the results for myself. Seeing is believing, and - praise Knuth! - I'm a believer!

So you can probably imagine my frustration to see how, 20 years later, the "software has bugs" paradigm still dominates. Who out there is producing very high-integrity code? Vanishingly few. I've waited and waited for high-integrity development techniques to catch on. I've even stirred the pot a few times myself with attempts at training products and talks with various publishers about a book that updates the ideas for the hipster Agile generation. To no avail. Still, vanishingly few are interested.

It's not as if there isn't a compelling business case. More reliable code, for little to no extra cost (you might even save time and money)? Lower maintenance costs? Happier customers? A world of digital stuff we can rely on? What's not to like? It's not as if these techniques are incompatible with Agile, either. I've done both at the same time, for real.

But for every person like me out there selling the dream, there are 10 more actively briefing against it. "Quick and dirty". "Move fast and break stuff". "Perfection is the enemy of good enough." Etc etc etc.

It's an easy sell to managers who don't understand the relationship between quality, time and cost. Cut some corners, get there sooner, save some money. A much harder proposition is "take more care, get there sooner, save some money". Bosses don't believe it. Heck, most devs don't believe it, despite the mountain of strong evidence to back it up.

I still live in hope that - one day - high-integrity software will go mainstream. The tools and techniques are not, despite what you may have heard, rocket science. Most devs are smart, and most devs could learn to do this. I did, so it can't be that difficult.







February 4, 2018

Learn TDD with Codemanship

Don't Bake In Yesterday's Business Model With Unmaintainable Code

I'm running a little poll on the Codemanship Twitter account asking whether code craft skills should be something every professional developer should have.




I've always seen these skills as foundational for a career as a developer. Once we've learned to write code that kind of works, the next step in our learning should be to develop the skills needed to write reliable and maintainable code. The responses so far suggest that about 95% of us agree (more than 70% of us strongly).

Some enlightened employers recognise the need for these skills, and address the lack of them when taking on new graduates. Those new hires are the lucky ones, though. Most employers offer no training in unit testing, TDD, refactoring, Continuous Integration or design principles at all. They also often have nobody more experienced who could mentor developers in those things. It's still sadly very much the case that many software developers go through their careers without ever being exposed to code craft.

This translates into a majority of code being less reliable and less maintainable, which has a knock-on effect in the wider economy caused by the dramatically higher cost of changing that code. It's not the actual £ cost that has the impact, of course. It's the "drag factor" that hard-to-change code has on the pace of innovation. Bosses routinely cite IT as being a major factor in impeding progress. I'm sure we can all think of businesses that were held back by their inability to change their software and their systems.

For all our talk of "business agility", only a small percentage of organisations come anywhere close. It's not because they haven't bought into the idea of being agile. The management magazines are now full of chatter about agility. No shortage of companies that aspire to be more responsive to change. They just can't respond fast enough when things change. The code that helped them scale up their operations simultaneously bakes in a status quo, making it much harder to evolve the way they do business. Software giveth, and software taketh away. I see many businesses now achieving ever greater efficiencies at doing things the way they needed to be done 5, 10 or 20 years ago, but unable to adapt to the way things are today and might be tomorrow.

I see this in finance, in retail, in media, in telecoms, in law, in all manner of private sector organisations. And I see it in the public sector, too. "IT delays" is increasingly the reason why government policies are massively delayed or fail to be rolled out altogether. It's a pincer movement: we can't do X at the scale we need to without code, and we can't change the code to do X+1 for a rapidly changing business landscape.

I've always maintained that code craft is a business imperative. I might even go as far as to say a societal imperative, as software seeps into every nook and cranny of our lives. If we don't address issues like how easy to change our code is, we risk baking in the past, relying on inflexible and unreliable systems that are as anachronistic to the way things need to be in the future as our tired old and no-longer-fit-for-purpose systems of governance. An even bigger risk is that other countries will steal a march on us, in much the same way that more agile tech start-ups can steam ahead of established market players simply because they're not dragging millions of lines of legacy code behind them.

While the fashion today is for "digital transformations", encoding all our core operations in software, we must be mindful that legacy code = legacy business model.

So what is your company doing to improve its code craft?






January 9, 2018

Learn TDD with Codemanship

Test Granularity Matters. Ask Any Accountant.

It's that time of year when I have to make sure my company's accounts are all up to date and tickety-boo, and I got a useful reminder about why the granularity of our tests really matters.

In my spreadsheet for bank payments and receipts, I have a formula for calculating what the closing balance at the end of the financial year is. Today, I realised that calculated balance was about £1200 short. Evidently, I had either entered one or more payments incorrectly, or one or more receipts.

I had to go back through all the bank statements for the year double-checking every line item against the spreadsheet.

Now, if I'd had a formula for the balance at the end of every line item, I could simply have checked the closing balances on each statement to see where they diverged.

I've experienced similar pain when relying on tests that check logic at too high a level (e.g., system tests or API tests). When a test fails, I have to go rummage through the call stack to figure out where it went wrong - the equivalent of reading all my bank statements looking for the line item that doesn't match. Much time is spent in the debugger: a red flag.

I strongly encourage teams to rely more on small, focused tests that - ideally - have only one reason to fail, and to write those tests as close to the module that's doing that piece of work as they can. So when a test fails it's easy to deduce that "the problem is this, and the problem is here".
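By analogy, a focused unit test is a formula on every line item: it checks one piece of behaviour, close to where it's implemented, so it has only one reason to fail. A sketch, with a purely illustrative Account class:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class AccountTest {

        // one focused test, one reason to fail: a payment reduces the balance
        @Test
        public void paymentReducesBalanceByAmountPaid() {
            Account account = new Account(1000.00);
            account.pay(250.00);
            assertEquals(750.00, account.balance(), 0.001);
        }

        // a separate focused test for receipts, so a failure points straight at the culprit
        @Test
        public void receiptIncreasesBalanceByAmountReceived() {
            Account account = new Account(1000.00);
            account.receive(200.00);
            assertEquals(1200.00, account.balance(), 0.001);
        }
    }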


January 7, 2018

Learn TDD with Codemanship

Do Your Automated Tests Give You Confidence In Your Code?

I ran a little poll on the @codemanship Twitter account asking:




The responses suggest many developers don't put a lot of faith in their automated tests for detecting bugs. The aim of test automation is to dramatically lower the cost and execution time of regression testing our code so that we're alerted to new bugs sooner rather than later.

The ultimate goal is to have high confidence at any point in time that the software works, and is therefore fit for release. This is a foundational requirement of Continuous Delivery - software should always be shippable.


Examining many test suites, as I do every year, I think I have some insight into this problem. Firstly, most teams that have automated tests don't have particularly good test suites. Much of the code isn't reached by them. Many of the tests ask loose questions, leaving big gaps in their assertions that you could drive a bus-load of bugs through.

Teams quickly learn, after the first few releases, that just because their tests are passing, that doesn't mean the code is working. But there seems to be little appetite for beefing up their test suites to plug the leaks that bugs are pouring in through.

Very few teams test their tests to see how effective they are at catching bugs. Even fewer teams target more exhaustive testing at "load-bearing" code, or even have any awareness of which parts of the code present the highest risk.

Happy Path thinking still dominates the developer mindset. Most of us don't think like testers. We want to show that our code works, not that it doesn't in certain edge cases. So our tests tend to skip over the edge cases.

In code reviews - for those teams that do them on any regular basis - test assurance tends not to be one of the things reviewers look for. At best, line coverage is checked. If the coverage report shows the new or changed code is executed in a test, that's spiffy for most dev teams. And, to be fair, most teams don't even check for that. You'd be shocked at how many teams are genuinely surprised to learn how low their coverage is. "But we do TDD...!" Evidently not much of the time.

Teams that practice TDD fairly rigorously tend to have test suites they can put more faith in. But, even as a TDD trainer and mentor with two decades of experience doing it, I regularly feel the need to take testing further after my design is complete.

I'm a big fan of guided inspection, reading the code carefully, looking for test cases I may have missed. I'm also big on parameterised testing, because it can buy you potentially massive amounts of test coverage with surprisingly little extra test code.

And, believe it or not, to some extent you can also automate exploratory testing. One example is the simple Java prototype for generating combinations of inputs for use in JUnit tests that I threw together last year. Another example is tools that can randomly generate input data, like Haskell's QuickCheck (and its many language-specific ports, like JCheck).

I also find simple test analysis techniques like truth tables and decision tables, state transition and program flow models very useful for discovering edge cases I might have missed. Think you're thinking like a tester? Read the first few chapters of Robert Binder's Testing Object Oriented Systems and think again.

So, if you're one of the 58% who said they don't have high confidence in their automated tests, it may be time to take your automated testing to the next level.





January 4, 2018

Learn TDD with Codemanship

The Impact of Fast-Running Unit Tests Can Be Profound

The most common issue I find that holds dev teams back is the testing bottleneck. How long it takes to check that your software still works before rolling it out is a major factor in how often you can do releases.

Consider a rudimentary "maturity model" for checking that our code is fit for release: it's a spectrum of maturity, with the lowest level (let's call it Level 0) being that we don't test it at all and just release it for the users to "test" in the field, and the highest level being testing continuously to try to ensure bugs don't make it into the next 10 minutes, let alone into a production release (call that Level 5).

And there are all levels in between 0 and 5. You might be manually testing before a big release. You might be manually testing iteratively, every couple of weeks. You might be running automated GUI tests overnight. You might have a suite of, say, Cucumber tests that take an hour to run. Or you might have a 50/50 mix of GUI and unit tests. Or a bunch of "unit" tests that hit databases, making them integration tests. And so on.

There are 3 axes for our maturity model:

x. How effective our tests are at detecting bugs

y. How quickly they run

z. How often we run them

These factors all interrelate. Catching more bugs often means running more tests, which takes longer. And the longer the tests take to run, the less often we're likely to run them.

Together, they answer the question: how long before a bug is likely to be detected?

Teams have to climb the maturity model if they want to release more reliable code more often and reap the business benefits of Continuous Delivery.

They not only have to improve at writing fast-running automated tests, which is a non-trivial skillset that takes years to master, but also at test analysis and design, so the tests they write are asking more of the right questions. (Yes, it's not all about automation.)

Slow-running tests (manual or automated) are a very common bottleneck I find in dev teams, who wrestle with the much higher cost of removing bugs resulting from catching them much later. I've watched teams go round and round in circles trying to stabilise their product to make it acceptable for a major release, sometimes for many months and at a cost of millions. Such costs are typically dwarfed by the knock-on opportunity cost to the business waiting for critical updates to their software and systems.

I also come into contact with a lot of teams who've been writing automated tests for years, but have remained at a fairly low level of testing maturity. Their tests run slow (hours). Their tests miss a bunch of stuff. While these teams don't suffer from prolonged "stabilisation phases" before releases, they still feel like they're wading through treacle to get working code out of the door. High productivity at the birth of a new code base quickly drops to a trickle of new features and a great deal of bug fixing.

The aim for teams striving for sustainable Continuous Delivery is to be able to re-test their code every single time a change is made. Make one change. Run the tests. Fix the one thing you broke if you broke it. Then on to the next baby step.

This means that your tests must run in seconds, not hours, or days, or weeks. And you need high confidence that if you broke the code, a test would show that.

The effect of tightening up test execution can be profound for dev teams, and for the businesses relying on them. I've witnessed some miracles in my time where organisations that were on their knees trying to evolve their legacy systems eventually managed to stand up and walk, even run, as their testing cycles accelerated.

So, for a developer, writing effective fast-running automated tests is a key skill. It's something we should learn early, and continue to improve on throughout our careers.

If you or your team needs to work on your unit testing chops, I've designed a jam-packed 1-day training workshop that'll kickstart things. And this month, bookings are half-price.





December 31, 2017

Learn TDD with Codemanship

New Year's Resolutions - Making High-Integrity Code & Automated Code Inspections Mainstream

What's your software development New Year's Resolution for 2018?

Through Codemanship, I'm going to be giving two things a big push, starting tomorrow:

1. Techniques for producing high-integrity code

This has been my pet topic for the best part of 20 years. Ever since I started contracting, I've been shocked at just how unreliable the majority of software we create is. Especially because I know from practical experience that the techniques we can apply to produce software that almost never fails are actually quite straightforward and can be applied economically if you know what you're doing.

I've been banging the drum for quality by design ever since, but - to my immense frustration - it never seems to catch on. Techniques like Design By Contract, data-driven and property-based testing, and even good old-fashioned guided inspections, are perfectly within reach of the average dev team. No need for Z specifications, proofs of correctness, or any of that hifalutin malarkey in the majority of cases. You'd be amazed what's possible using the tools you already know, like your xUnit framework.

But still, two decades later, most teams see basic unit testing as "advanced". New tools and technologies spread like wildfire through our community, but good practices catch on at a glacial pace.

As we rely on software more and more, software needs to become more reliable. Our practices have lagged far behind the exponentially increasing potential for harm. We need to up our game.

So, in 2018 I'm going to be doing a lot of promoting of these techniques, as well as raising awareness of their value in engineering load-bearing code that can be relied on.

2. Continuous code inspections

The more code I see (hundreds of code bases every year), the more convinced I become that the practical route to the most maintainable code is automating code inspections. Regular code reviews are too little, too late, and suffer the economic drawbacks of all after-the-fact manual ad hoc testing. Pair programming is better, but it's a very human activity. It misses too much, because pairs are trying to focus on too many things simultaneously. Like code reviews, it's too subjective, too ad hoc, too hit-and-miss.

For years now, I've been in the habit of automating key code quality checks so all of the code can be checked all of the time. The economic argument for this is clear: code inspection is just another kind of testing. It's unit testing for code quality bugs. If testing is infrequent and arbitrary, many bugs will slip through the net. Later code reviews may pick them up, but the longer maintainability issues persist, the more it costs to a. live with them until they are fixed (because they make the code harder to change), and b. fix them.

Dev teams that do continuous automated inspection tend to produce much cleaner code, and they do it with little to no extra effort. This is for the exact same reasons that dev teams that do continuous automated functional testing tend to produce much more reliable code than teams that test manually, and take little to no extra time and effort to achieve that. Many teams even save time and money.

To be honest, automating code inspections involves a non-trivial learning curve. Devs have to reason about code and express their views on design in a way many of us aren't used to. It's its own problem domain, and the skills and experience required to do it well are currently in short supply. But the tools are readily available, should teams choose to try it.

So, a significant investment has to be made to get automated code inspections up and running. But the potential for reuse of code quality checks is massive. There's just one teeny obstacle: we have to agree on what constitutes a code quality bug, within the team, between teams, and across dev communities. Right now, I have some big issues with what the developers of some code analysis tools suggest is "good code". So I switch off their off-the-peg rules and write my own checks. But, even then, it pays off quite quickly.
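To give a flavour of what a home-grown check might look like - a deliberately crude sketch, not a recommended rule, with the src/main/java path and the size threshold as assumptions - a build-failing test over the source tree can be as simple as:

    import static org.junit.Assert.assertTrue;

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    import org.junit.Test;

    public class CodeQualityTest {

        private static final int MAX_LINES_PER_FILE = 400;

        // fail the build if any source file has grown beyond the size threshold,
        // so the whole code base gets checked on every build
        @Test
        public void noSourceFileIsTooLong() throws IOException {
            try (Stream<Path> sources = Files.walk(Paths.get("src/main/java"))) {
                List<Path> tooLong = sources
                        .filter(path -> path.toString().endsWith(".java"))
                        .filter(this::exceedsLimit)
                        .collect(Collectors.toList());
                assertTrue("Files over " + MAX_LINES_PER_FILE + " lines: " + tooLong,
                        tooLong.isEmpty());
            }
        }

        private boolean exceedsLimit(Path path) {
            try {
                return Files.readAllLines(path).size() > MAX_LINES_PER_FILE;
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    }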

Anyhoo, those are the two things I'm going to be focusing on in 2018. Wish me luck!


December 7, 2017

Learn TDD with Codemanship

"This would never have happened if we'd written it in Haskell" - Bah Humbug!

Spurred on by a spate of social media activity of the "We replaced a system written in X with one written in Y, and it was way better" kind, I just wanted to throw my hat into the ring on this topic.

As someone with practical insights into high-integrity software development, I can confidently attest that this is bunk of the highest order. There is no programming language that assures reliability.

Sure, there are languages with built-in features that can help, but you actually have to do stuff to make sure your code works 99.999999999% of the time. Y'know? Like testing and wotnot.

For example, you can inflict all kinds of damage in C, thanks to direct manipulation of memory, but you don't have to abuse those features of the language. A Bugatti Veyron has a top speed of 254 mph, but you don't have to drive it at 254 mph.

"We would never have had that crash if we'd been driving a Volvo" really means "We'd never have had that crash if we'd been driving slower".

If you want to avoid dangling pointers in a C program, you can. It just takes a bit of know-how and a bit of discipline. Don't blame the language for any shortcomings you might have in either. The difference the language makes is small compared to the difference you make.


ADDENDUM: Just to clarify, I'm not saying better languages and tools don't help. What I'm saying is that the difference they make can be very small compared to other factors. How do I know this? Well, I've been programming for 35 years and have worked in a dozen or more languages on real projects. So there's that. But also, I've worked with a lot of teams, and noticed how the same team using different tools gets similar results, while different teams using identical tools can get strikingly different results. So I conclude that the team makes the bigger difference, by orders of magnitude. So I choose to focus more on teams and how they work than on the tools, by orders of magnitude. And it's not as if tools and technology don't get enough focus within the industry :)