December 9, 2005

Agile Testing - Where's My Feedback, Dude?

A common experience for many agile developers is that, while they're doing great agile stuff like test-driven wotnot and continuous howsyourfather, everybody around them is still living in waterfall land. You may be integrating new features, changes and bug fixes several times a day, but what's the use if your testers aren't planning to give you feedback until the "testing phase" of your release cycle?

In agile projects, working software is the measure of progress, and tests are the measure of working software. If it's not tested, it's not finished, and therefore you're making no progress. I operate a strict rule on planning: if the stuff the developers say they delivered in the last iteration hasn't passed the acceptance (or system) tests, then it hasn't actually been delivered. That means your velocity is zero, and you therefore can't schedule any stories in the next iteration. It sounds Draconian, and it quite deliberately is. I punish teams that don't get feedback from testing by stopping the project until they do.
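
Just to make that concrete, here's a little sketch - an entirely made-up Story class, not anybody's real planning tool - of what "the tests passed or it doesn't count" looks like as arithmetic:

    // Made-up sketch: velocity counts only stories whose acceptance
    // tests have passed. Untested stories contribute nothing at all.
    import java.util.List;

    class Story {
        final int points;
        final boolean acceptanceTestsPassed;

        Story(int points, boolean acceptanceTestsPassed) {
            this.points = points;
            this.acceptanceTestsPassed = acceptanceTestsPassed;
        }
    }

    class Velocity {
        // "Delivered" means the acceptance tests passed - nothing else counts.
        static int forIteration(List<Story> stories) {
            return stories.stream()
                          .filter(s -> s.acceptanceTestsPassed)
                          .mapToInt(s -> s.points)
                          .sum();
        }
    }

If nothing has passed, velocity is zero - and zero velocity means an empty plan for the next iteration.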

The job of the tester in agile development is primarily to agree the specification in the form of tests, and to provide immediate feedback on features delivered (or changes made, or bugs fixed) by executing those tests. If they're still messing about with spreadsheets in iteration 5 (as some testers are wont to do), then they're not agile testers - and your project is not an agile project.

If you're using automated acceptance tests, then there's likely to be a considerable overhead in developing and maintaining all your executable test scripts. Automated tests are code, after all. If you have dedicated test automation people, then their time needs to be planned, and their progress tracked (and the quality of their work measured) - just like any other developers.
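
To give a flavour of what I mean, here's a minimal JUnit-style sketch - with an invented funds-transfer example, none of which comes from any real project - of an acceptance test doubling as an executable specification:

    // A sketch: one acceptance test acting as an executable
    // specification for an invented funds-transfer feature.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class TransferFundsAcceptanceTest {

        @Test
        public void transferMovesMoneyBetweenAccounts() {
            Account payer = new Account(100);
            Account payee = new Account(50);

            payer.transfer(30, payee);

            assertEquals(70, payer.balance());
            assertEquals(80, payee.balance());
        }

        // Minimal stand-in domain so the sketch compiles on its own.
        static class Account {
            private int balance;

            Account(int openingBalance) { this.balance = openingBalance; }

            void transfer(int amount, Account to) {
                this.balance -= amount;
                to.balance += amount;
            }

            int balance() { return balance; }
        }
    }

The test says what the feature is supposed to do. Green means delivered; red means it isn't. That's your specification and your feedback in one artefact.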

The trick is to have testing keep up with development, so when a feature is ready for testing, the tests are ready too. This must be taken into account in agile planning. You need to schedule the development and execution of tests along with the development and delivery of software.

I've worked with a lot of testers, and I've had many experiences where the test team were treated very much as the tambourine players in the band. Developers don't really take them seriously, and I've lost count of the number of times, when a project was running behind schedule, that the "luxury item" the team decided to throw out in order to meet the deadline was testing. This is very dangerous and usually ends with tears before bedtime. If it's not tested, then how do you know it works? If you don't know it works, how do you know it was actually delivered? Sure, you're meeting the deadline, but with what? Anyone can meet a deadline if the deliverable isn't tested: you just stand still and time does the rest for you.

In agile projects, testers are critical - both as requirements analysts (because in agile developments, the tests are the specifications) and as invaluable sources of objective feedback. They are equals to the developers - hell, they probably are developers (who just happen to specialise in testing) - and if they're not, then maybe you've got the wrong testers...

I sat down with a test team quite recently, and we discussed these very issues. They felt undervalued; squeezed to make room for "more important activities" (like building code that hasn't been tested). They weren't active players in the agile process, and nobody could account for their time because nobody was planning or tracking their output.

After a lengthy chat, I offered them a handful of recommendations:


    * Testing is requirements analysis, test design, test development (coding) and execution. Testing is software development, basically, and all the same principles apply. Certainly, agile principles apply every bit as much.
    * Testers should occasionally pair with developers to get over the "them and us" culture.
    * Test code has to be maintained, so it should be maintainable - design principles also apply (see the sketch after this list).
    * Professional testers are not the customer, just as requirements analysts aren't either. You still need customer feedback (acceptance testing), but for 100% clarity and comprehensive regression testing, you also need people who can write executable tests (and that's almost certainly NOT the customer!)
    * Process improvement is every bit as applicable to testing as to anything else. In TDD, the software will only be as good as the tests, so you need to improve the tests if you want better software, and you'll need to deliver those tests faster if you want higher overall productivity.

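On the maintainability point, here's a small sketch (invented names again) of the kind of thing I mean: shared setup extracted into an intention-revealing helper, exactly as you'd refactor production code:

    // Sketch: design principles applied to test code. Shared setup moves
    // into an intention-revealing helper, so each test states only what matters.
    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    public class OrderDiscountTest {

        @Test
        public void discountAppliedToLargeOrders() {
            assertTrue(orderWorth(1000).qualifiesForDiscount());
        }

        @Test
        public void noDiscountOnSmallOrders() {
            assertFalse(orderWorth(10).qualifiesForDiscount());
        }

        // One place to change when Order's construction changes.
        private Order orderWorth(int value) {
            return new Order(value);
        }

        // Minimal stand-in so the sketch is self-contained.
        static class Order {
            private final int value;

            Order(int value) { this.value = value; }

            boolean qualifiesForDiscount() { return value >= 500; }
        }
    }

When Order's constructor changes - and it will - there's one place to fix instead of fifty.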

Finally, can I point you to some good online agile testing resources? Yes? Oh, you're so kind...

Agile Testing at testingReflections

testdriven.com

Agile Testing @ testing.com