What do testers do on an Agile team?
Quite a few Agile teams believe that you don’t need testers to deliver working software. Testers are looked upon as a relic from the waterfall days (requirements, design, code, then pass off to test). On XP teams, everyone is a developer, and developers are responsible and accountable for testing their own code, writing automated unit tests and then automating the acceptance tests that the Customer has defined. Scrum doesn’t explain how testing should be done at all – the team is expected to figure it out as they “inspect and adapt” their way towards good practices.
If developers are already testing their own code (and maybe even pairing up to review code as it is written), then what do you need testers for?
Janet Gregory and Lisa Crispin wrote a big book (Agile Testing: A Practical Guide for Testers and Agile Teams) to justify the role of testers on Agile teams and to explain to programmers and testers how testers can fit into Agile development, but this hasn’t changed the attitude of many teams, especially in “engineering-driven cultures” (startups founded by programmers).
One of the arguments these teams make is that Agile teams move too fast for testers: black box testers writing up test plans and working through manual test scripts, or constantly updating their Quality Center or Selenium UI regression tests, can never catch up to a team delivering new features in short sprints. If the testers don’t have the technical skills to at least write acceptance tests in something like FitNesse or Cucumber, or if they don’t have the business domain knowledge to help fill in for the Customer/Product Owner and answer developer questions, what are they good for?
This is taken to the extreme in Continuous Deployment, a practice made popular by companies like IMVU and Facebook: developers review their work, write automated tests, check the code and tests in, and, if the tests pass, the changes are immediately and automatically pushed to production.
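To make the mechanics concrete, here is a minimal Python sketch of that kind of deployment gate. The test command and the deploy script name are placeholders for illustration, not IMVU’s or Facebook’s actual pipeline.

```python
# Minimal sketch of a continuous deployment gate. The test command and the
# deploy script are placeholders -- not any particular company's real pipeline.
import subprocess
import sys

def run(cmd):
    """Run a command and report whether it succeeded."""
    return subprocess.run(cmd).returncode == 0

def main():
    # 1. Every check-in runs the full automated test suite.
    if not run(["pytest", "tests/"]):
        print("Tests failed - the change is NOT deployed.")
        return 1
    # 2. A green build deploys itself, with no separate test phase or sign-off.
    #    './deploy_to_production.sh' is a hypothetical deploy script.
    if not run(["./deploy_to_production.sh"]):
        print("Deploy script failed.")
        return 1
    print("Change pushed to production automatically.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```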
Letting Customers test your work
Some shops look at Continuous Deployment as a chance to “crowdsource” their testing – by getting their customers to do their testing for them. It’s actually promoted as a competitive advantage. But it’s really hard – maybe impossible – to write secure and reliable software this way, as I have looked at before. For a critical review of the quality of a system continuously deployed to customers, read James Bach’s fascinating post on spending 20 minutes testing one of the poster-child apps for Continuous Deployment, and the problems they found in that short time.
Other Continuous Deployment shops are more careful and follow Etsy/Flickr’s approach of dark launching: deploying changes continuously, but testing and reviewing them before turning them on progressively for customers and closely monitoring the outcome.
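Here is a rough sketch of how that kind of progressive rollout can be wired up: the new code path ships to production behind a flag, the flag decides which customers see it, and everyone else stays on the proven path while the team watches the monitoring. The flag name, percentage and checkout functions are hypothetical illustrations, not Etsy’s or Flickr’s actual tooling.

```python
# Rough sketch of a dark launch: the new code path ships to production behind
# a flag and is turned on for a growing percentage of customers.
# Flag names, percentages and checkout functions are hypothetical.
import hashlib

ROLLOUT_PERCENT = {
    "new_checkout_flow": 5,   # deployed for everyone, visible to ~5% of customers
}

def is_enabled(feature, customer_id):
    """Deterministically bucket each customer into 0-99 and compare to the rollout %."""
    digest = hashlib.sha256(f"{feature}:{customer_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < ROLLOUT_PERCENT.get(feature, 0)

def old_checkout(customer_id):
    return f"{customer_id}: existing, proven checkout path"

def new_checkout(customer_id):
    return f"{customer_id}: dark-launched checkout path"

def checkout(customer_id):
    # The same build runs for everyone; the flag decides who sees the new path.
    if is_enabled("new_checkout_flow", customer_id):
        return new_checkout(customer_id)
    return old_checkout(customer_id)

if __name__ == "__main__":
    for cid in ("cust-1001", "cust-1002", "cust-1003"):
        print(checkout(cid))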
Regardless, it’s important to remember that there are some things that customers can test, and in fact only customers should test: whether a feature is useful or not, whether a feature is usable, what kind of information they need to do a task properly, what the optimal workflow is. This is what A/B split testing is supposed to be about – experimenting with ideas and features and workflows, collecting usage data, and finding out what customers use or like best and what they don’t – evaluating alternatives and getting feedback.
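As a small illustration of the kind of feedback an A/B split gives the team, here is a sketch that compares conversion rates for two variants with a standard two-proportion z-test. The visitor and conversion counts are made up for the example.

```python
# Made-up numbers showing the kind of feedback an A/B split produces:
# did the experimental workflow (B) really convert better than the current one (A)?
from math import sqrt

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: how unlikely is it that the difference is just noise?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant A: current workflow; Variant B: the experiment. 2,000 customers each.
print(f"A converts at {180 / 2000:.1%}")            # 9.0%
print(f"B converts at {235 / 2000:.1%}")            # 11.8%
print(f"z = {z_score(180, 2000, 235, 2000):.2f}")   # ~2.85 -> unlikely to be noise
```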
But you don’t ask your customers to test whether something is finished or not, whether the code works or not, whether the system is stable and secure or whether it will perform under load.
What do you need from your test team?
Even the best, most responsible and experienced developers make mistakes. In our shop, everyone is an experienced developer – some of them have been working in this domain for 10-15 years or more. They carefully test their own work and update the automated unit/functional test suite for every check-in. These tests and static analysis checks are run in Continuous Integration – we’ve learned to lean heavily on the test suite (there are thousands and thousands of tests now, with a high level of coverage) and on static analysis bug-finding and security vulnerability checking tools to catch common coding mistakes. All code changes are also reviewed by another senior developer – without exception.
Even with good discipline and good tools, good programmers still make mistakes: some subtle (inconsistencies, look-and-feel problems, data conversion and setup, missing edits) and some fundamental (run-time failures under load, concurrency problems, missed requirements, mistakes in rules, errors in error handling). I want to make sure that we find most (if not all) of these mistakes before the customers do. And so do the developers.
That’s where our test team comes in. We have a small, experienced and highly-specialized test team. One tester focuses on acceptance testing, validating functional requirements and usability and workflow with the business. Another tester works on functional regression and business rules correctness and coverage, looking for missing rules and for holes in the developers’ test suites, and automating our integration tests at the API level. The third tester’s main focus is operational testing: stress testing for spikes and demand shocks, soak testing to look for leaks and GC issues, destructive system testing and bug hunting – actively trying to break the system. They all know enough to fill in for each other when someone is away, but they each have their own unique knowledge and skills and strengths, and their own ways of approaching problems.
When we were first building the system we started with a larger test team focused more on coverage and assurance, with test planning and traceability and detailed manual testing checklists, and automated regression tests written at the UI level. But there was a lot of wasted time and effort working this way.
Now we depend more on automated tests written by the developers underneath the UI for functional coverage and regression protection. Our test team puts most of their effort into exploratory functional and system and operational testing, risk-based and customer-focused targeted tests to find the most important bugs, to find weaknesses and exploit them. They like this approach, I like it, and developers like it, because we find real and important bugs in testing, the kinds of problems that escape code reviews and unit testing.
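As an example of what a test “underneath the UI” looks like, here is a hedged sketch in pytest: it exercises a business rule directly at the service layer instead of driving screens. The OrderService class and the quantity-range rule are hypothetical stand-ins, not our actual domain code.

```python
# Hedged sketch of a developer test written "underneath the UI": it exercises a
# business rule at the service layer instead of driving screens with Selenium.
# OrderService and the quantity-range rule are hypothetical stand-ins.
import pytest

class RejectedOrder(Exception):
    pass

class OrderService:
    MAX_QTY = 1_000

    def place_order(self, symbol, qty):
        # Business rule: reject obviously bad orders up front.
        if qty <= 0 or qty > self.MAX_QTY:
            raise RejectedOrder(f"quantity out of range: {qty}")
        return {"symbol": symbol, "qty": qty, "status": "ACCEPTED"}

def test_valid_order_is_accepted():
    order = OrderService().place_order("ACME", 100)
    assert order["status"] == "ACCEPTED"

@pytest.mark.parametrize("qty", [0, -5, 1_001])
def test_out_of_range_quantity_is_rejected(qty):
    with pytest.raises(RejectedOrder):
        OrderService().place_order("ACME", qty)
```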
They smoke test changes as soon as developers check them in, in different customer configurations. They pair up with developers to test through new features, and run war games and simulations with the developers to try to find run-time errors, race conditions, timing issues and workflow problems under “real-world” conditions. They fail the system to make sure that the failure-detection and recovery mechanisms work. They test security features, and set up and manage pen tests with consultants. They run the system through an operational day. Together with Ops, they also handle integration certification with new customers and partners. They do all of this in short sprints with the rest of the team, releasing to production every 2 weeks (and sometimes more often).
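For a flavour of what a war-game style test can look like, here is an illustrative sketch (not one of our actual tests): it hammers a shared component from many threads at once and checks that nothing is lost or double-counted, the kind of concurrency problem that ordinary unit tests rarely surface.

```python
# Illustrative war-game style test: many threads hammer a shared component at
# once, and the test checks that nothing is lost or double-counted under load.
import threading

class OrderCounter:
    """Toy stand-in for a shared production component."""
    def __init__(self):
        self._count = 0
        self._lock = threading.Lock()

    def record(self):
        # Without the lock, concurrent increments can be lost under contention.
        with self._lock:
            self._count += 1

    @property
    def count(self):
        return self._count

def test_counter_survives_concurrent_load():
    counter = OrderCounter()
    threads = [
        threading.Thread(target=lambda: [counter.record() for _ in range(1_000)])
        for _ in range(50)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert counter.count == 50 * 1_000
```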
The test team is also responsible for getting the software into production. They put together each release and check its dependencies; they decide when the release is done, what will make it in and what won’t; they check that we have done all of the reviews that the team agreed to; they test the roll-back and data conversion routines; and then they work with Ops to deploy the release through to production.
They don’t slow the team down, and they don’t keep us from delivering software. They help us make sure that the software works and that it gets into production safely.
Testers find more than bugs
I’ve worked for a long time in high-assurance, high-integrity businesses where not having testers isn’t an option – the stakes of making mistakes are too high. But even outside of that world, I don’t think that you can build real software without someone helping to test it. Unless you are an early stage startup pounding out a proof of concept, or you are a small team building something trivial for internal use (but then you’re probably not reading this), you need help testing the system to make sure that it works.
It doesn’t matter how you are working or what method you follow – Agile or Waterfall doesn’t change the need for testers. If you’re moving fast and light, testers need to adapt to the pace and to the way that they get and share information. That’s ok. Good testers can do that.
I’m not naïve enough (any more) to think that the test team will find all of the bugs that might be in the system – or that this is their job. Of course, I hope that the testers will find any important or obvious bugs before customers do.
What I need them to do is to help us answer some important questions: Are we ready to release? What’s too rough or unstable or incomplete, what needs to be backed out, what needs further review, or maybe a rewrite? What’s weak in the design? Where are we missing automated tests? Where do we need better test tools? What features are too hard to understand, or inconsistent, or too hard to set up? What error messages are missing or misleading? Are we trying to do too much, too fast? What do we need to change in the design, or the code, or the way that we design or code the system to make it better, more reliable?
Testing doesn’t provide all possible information, but it provides some. Good testing will provide lots of useful information.
– James Bach (Satisfice)
Without testers, not only do you put out code that you shouldn’t, with bugs that you should have caught – you also lose a lot of important information about how good your software really is and what you need to do to make it better. If you care about building good software, this is an opportunity that you cannot pass up.