What do testers do on an Agile team?
Quite a few Agile teams believe that you don’t need testers to deliver working software. Testers are looked upon as a relic from the waterfall days (requirements, design, code, then pass off to test). On XP teams, everyone is a developer, and developers are responsible and accountable for testing their own code, writing automated unit tests and then automating the acceptance tests that the Customer has defined. Scrum doesn’t explain how testing should be done at all – the team is expected to figure it out as they “inspect and adapt” their way towards good practices.
If developers are already testing their own code (and maybe even pairing up to review code as it is written), then what do you need testers for?
Janet Gregory and Lisa Crispin wrote a big book (Agile Testing: A Practical Guide for Testers and Agile Teams) to justify the role of testers on Agile teams and to explain to programmers and testers how testers can fit into Agile development, but this hasn’t changed the attitude of many teams, especially in “engineering-driven cultures” (startups founded by programmers).
One of their arguments is that Agile teams move too fast for traditional testers: black box testers writing up test plans and working through manual test scripts, or constantly updating their Quality Center or Selenium UI regression tests, can never catch up to a team delivering new features in short sprints. If testers don’t have the technical skills to at least write acceptance tests in something like FitNesse or Cucumber, or the business domain knowledge to help fill in for the Customer/Product Owner and answer developer questions, what are they good for?
This is taken to the extreme in Continuous Deployment, a practice made popular by companies like IMVU and Facebook, where developers review their work, write automated tests, check the code and tests in, and, if the tests pass, the changes are immediately and automatically pushed to production.
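In pipeline terms, the only gate is the automated checks. Here’s a minimal sketch of the idea – the deploy.sh script and the pytest/flake8 toolchain are placeholders, since these shops don’t publish their exact pipelines:

```python
import subprocess
import sys

def run(cmd):
    """Run one pipeline step; a non-zero exit code means the step failed."""
    print(">>", " ".join(cmd))
    return subprocess.run(cmd).returncode == 0

def pipeline():
    # Gate 1: the full automated test suite must pass.
    if not run(["pytest", "--quiet"]):
        sys.exit("tests failed - nothing is deployed")
    # Gate 2: static analysis must come back clean.
    if not run(["flake8", "src/"]):
        sys.exit("static analysis failed - nothing is deployed")
    # No human gate after this point: the change goes straight to production.
    # deploy.sh is a hypothetical stand-in for the real deployment step.
    if not run(["./deploy.sh", "production"]):
        sys.exit("deploy failed - investigate and roll back")

if __name__ == "__main__":
    pipeline()
```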
Letting Customers test your work
Some shops look at Continuous Deployment as a chance to “crowdsource” their testing – by getting their customers to do their testing for them. It’s actually promoted as a competitive advantage. But it’s really hard – maybe impossible – to write secure and reliable software this way, as I have looked at before. For a critical review of the quality of a system continuously deployed to customers, read James Bach’s fascinating post on spending 20 minutes testing one of the poster-child apps for Continuous Deployment, and the problems he found in that short time.
Other Continuous Deployment shops are more careful and follow Etsy/Flickr’s approach of dark launching: deploying changes continuously, but testing and reviewing them before turning them on progressively for customers and closely monitoring the outcome.
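A dark launch comes down to a flag check around the new code path: the code is deployed to everyone, but turned on for only a slice of customers while the outcome is monitored. A minimal sketch – the feature name, rollout table and checkout example are all made up for illustration:

```python
import hashlib

# Percentage of customers who see the dark-launched path; starts near
# zero and is ramped up while the outcome is monitored.
ROLLOUT_PERCENT = {"new_checkout": 5}

def is_enabled(feature: str, customer_id: str) -> bool:
    """Deterministically bucket each customer into [0, 100), so the same
    customer keeps getting the same answer as the rollout ramps up."""
    digest = hashlib.sha256(f"{feature}:{customer_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < ROLLOUT_PERCENT.get(feature, 0)

def checkout(customer_id: str) -> str:
    # Both code paths are deployed; the flag decides which one runs.
    if is_enabled("new_checkout", customer_id):
        return "new checkout flow"   # dark-launched path
    return "old checkout flow"       # proven path

if __name__ == "__main__":
    enabled = sum(is_enabled("new_checkout", f"cust-{i}") for i in range(1000))
    print(enabled, "of 1000 customers are in the dark launch")
```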
Regardless, it’s important to remember that there are some things that customers can test, and in fact only customers should test: whether a feature is useful or not, whether a feature is usable, what kind of information they need to do a task properly, what the optimal workflow is. This is what A/B split testing is supposed to be about – experimenting with ideas and features and workflows, collecting usage data, and finding out what customers use or like best and what they don’t: evaluating alternatives and getting feedback.
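The mechanics behind a simple A/B split are deterministic bucketing plus outcome counting. A rough sketch, with the variant names invented and an in-memory Counter standing in for a real experiment framework:

```python
import hashlib
from collections import Counter

VARIANTS = ["workflow_a", "workflow_b"]  # hypothetical alternatives
results = Counter()                      # stand-in for an analytics store

def assign_variant(user_id: str) -> str:
    """Deterministic 50/50 split: a user always sees the same variant."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return VARIANTS[bucket % len(VARIANTS)]

def record_outcome(user_id: str, completed_task: bool) -> None:
    # After enough traffic, compare completion rates per variant to learn
    # which workflow customers actually succeed with.
    results[(assign_variant(user_id), completed_task)] += 1
```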
But you don’t ask your customers to test whether something is finished or not, whether the code works or not, whether the system is stable and secure or whether it will perform under load.
What do you need from your test team?
Even the best, most responsible and experienced developers make mistakes. In our shop, everyone is an experienced developer – some of them have been working in this domain for 10-15 years or more. They carefully test their own work and update the automated unit/functional test suite for every check-in. These tests and static analysis checks are run in Continuous Integration – we’ve learned to lean heavily on the test suite (there are thousands and thousands of tests now, with a high level of coverage) and on static analysis tools that check for common coding bugs and security vulnerabilities. All code changes are also reviewed by another senior developer – without exception.
Even with good discipline and good tools, good programmers still make mistakes: some subtle (inconsistencies, look-and-feel problems, data conversion and setup, missing edits) and some fundamental (run-time failures under load, concurrency problems, missed requirements, mistakes in rules, errors in error handling). I want to make sure that we find most (if not all) of these mistakes before the customers do. And so do the developers.
That’s where our test team comes in. We have a small, experienced and highly-specialized test team. One tester focuses on acceptance testing, validating functional requirements, usability and workflow with the business. Another works on functional regression and business rules correctness and coverage, looking for missing rules and for holes in the developers’ test suites, and automating our integration tests at the API level. The third focuses on operational testing: stress testing for spikes and demand shocks, soak testing to look for leaks and GC issues, destructive system testing and bug hunting – actively trying to break the system. They all know enough to fill in for each other when someone is away, but they each have their own unique knowledge, skills and strengths, and their own ways of approaching problems.
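To give a flavour of the operational side, here’s a minimal spike-test sketch using only the Python standard library – the target URL and the concurrency numbers are placeholders, not our real setup:

```python
import concurrent.futures
import time
import urllib.request

TARGET = "https://staging.example.com/health"  # placeholder URL

def one_request(_):
    """Make one request and report (success, latency)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(TARGET, timeout=5) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.monotonic() - start

def spike(concurrency: int = 200, total: int = 2000):
    """Hit the system with a sudden burst and report errors and latency."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        outcomes = list(pool.map(one_request, range(total)))
    failures = sum(1 for ok, _ in outcomes if not ok)
    worst = max(t for _, t in outcomes)
    print(f"{failures}/{total} failed, worst latency {worst:.2f}s")

if __name__ == "__main__":
    spike()
```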
When we were first building the system, we started with a larger test team focused more on coverage and assurance, with test planning and traceability, detailed manual testing checklists, and automated regression tests at the UI level. But there was a lot of wasted time and effort working this way.
Now we depend more on automated tests written by the developers underneath the UI for functional coverage and regression protection. Our test team puts most of their effort into exploratory functional, system and operational testing: risk-based, customer-focused, targeted testing to find the most important bugs, and to find weaknesses and exploit them. They like this approach, I like it, and the developers like it, because we find real and important bugs in testing – the kinds of problems that escape code reviews and unit testing.
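For contrast with UI-level regression suites, this is the shape of an under-the-UI test. It’s only a sketch: the endpoint, payload and error format are invented, because the point is the level the test runs at, not the specifics:

```python
import requests  # assumes the system exposes an HTTP/JSON API

BASE = "https://staging.example.com/api"  # hypothetical test endpoint

def test_order_rejects_unknown_symbol():
    """A business-rule regression check that exercises the service
    underneath the UI, where it is fast and stable to automate.
    Run under pytest."""
    resp = requests.post(f"{BASE}/orders",
                         json={"symbol": "NOSUCH", "qty": 100})
    assert resp.status_code == 400
    assert "unknown" in resp.json()["error"].lower()
```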
They smoke test changes as soon as developers check them in, in different customer configurations. They pair up with developers to test through new features, and run war games and simulations with them to try to find run-time errors, race conditions, timing issues and workflow problems under “real-world” conditions. They fail the system to make sure that the failure-detection and recovery mechanisms work. They test security features, and set up and manage pen tests with consultants. They run the system through an operational day. Together with Ops, they also handle integration certification with new customers and partners. They do all of this in short sprints with the rest of the team, releasing to production every 2 weeks (and sometimes more often).
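Smoke testing the same change across different customer configurations lends itself to simple parametrization. A hedged sketch, with a stubbed-out launcher, since the real mechanics depend entirely on the deployment setup:

```python
import pytest

# Hypothetical customer configurations; in a real setup these would be
# the deployment profiles that differ between customers.
CONFIGS = ["customer_a.yaml", "customer_b.yaml", "customer_c.yaml"]

def start_system(config: str):
    """Stub stand-in for whatever actually launches the system under a
    given customer configuration."""
    return {"config": config, "healthy": True}

@pytest.mark.parametrize("config", CONFIGS)
def test_smoke_basic_health(config):
    # Cheap, broad checks run against every configuration as soon as a
    # change is checked in - not deep coverage, just "does it stand up".
    system = start_system(config)
    assert system["healthy"]
```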
The test team is also responsible for getting the software into production. They put together each release and check the dependencies; they decide when the release is done – what will make it into a release and what won’t; they check that we have done all of the reviews that the team agreed to; they test the roll-back and data conversion routines; and then they work with Ops to deploy the release through to production.
They don’t slow the team down, and they don’t keep us from delivering software. They help us make sure that the software works, and that it gets into production safely.
Testers find more than bugs
I’ve worked for a long time in high-assurance, high-integrity businesses where not having testers isn’t an option – the stakes of making mistakes are too high. But even outside of these environments, I don’t think that you can build real software without someone helping to test it. Unless you are an early-stage startup pounding out a proof of concept, or a small team building something trivial for internal use (but then you probably aren’t reading this anyway), you need help testing the system to make sure that it works.
It doesn’t matter how you are working, what method you follow - Agile or Waterfall doesn’t change the need for testers. If you’re moving fast and light, testers need to adapt to the pace and to the way that they get and share information. That’s ok. Good testers can do that.
I’m not naïve enough (any more) to think that the test team will find all of the bugs that might be in the system – or that this is their job. Of course, I hope that the testers will find any important or obvious bugs before customers do.
What I need for them to do is to help us to answer some important questions: Are we ready to release? What’s too rough or unstable or incomplete, what needs to be backed-out, or what needs further review, or maybe a rewrite? What’s weak in the design? Where are we missing automated tests? Where do we need better test tools? What features are too hard to understand, or inconsistent, or too hard to setup? What error messages are missing or misleading? Are we trying to do too much, too fast? What do we need to change in the design, or the code, or the way that we design or code the system to make it better, more reliable?
“Testing doesn’t provide all possible information, but it provides some. Good testing will provide lots of useful information.”
– James Bach (Satisfice)
Without testers, not only do you put out code that you shouldn’t with bugs that you should have caught – you also lose a lot of important information about how good your software really is and what you need to do to make it better. If you care about building good software, this is an opportunity that you cannot pass up.
Comments

Thank you so much for taking the time to write an honest, comprehensive blog post on an extremely important aspect of our industry.
Really nice post that shows the importance of testers.
Awesome post and pretty spot on I think. Not often I read through a whole blog post :)
From what I've seen, it would appear that people want to say whether testing is dead or alive. No in-betweens. There's been a fair bit of debate about this recently.
Testing is changing. Perhaps the job title should change. Or perhaps in earlier years people expected testers to do too much of the testing, where much of the coded/automated/unit stuff should have been done by developers all along.
There are developers/testers. There are testers/developers. There are support/testers. There are designers/developers. Etc.
Why is it assumed that everyone should be confined to a strict job title?
Testing is not dead, we just need to see and show that testers can do more than just test & find bugs. There needs to be the passion and interest in the business, in creating a great company, in innovation...from everyone, not just testers.
Very interesting post. Thank you. So testers are not for bugs, but for expertise, for exploration, and for providing early feedback.
@Maxim,
The job of testers is to find bugs AND to provide feedback to developers and management, AND to help understand risks that the developers aren't looking for. If you're not getting good information from the test team, or not using this information, you're not doing it properly.
A tester finds a bug. The developer can a) fix the bug and go on to the next thing; or b) look at what the tester did, try to understand what kind of mistake they made, why they didn't catch it in their own testing, what other mistakes like this could be in the code, what help they need to prevent these mistakes in the future. Mature teams and developers follow the second approach.
@Rosie,
Thank you. Yes, when I talk to development teams and people who do testing for a living, it does seem that there is a line between organizations that either "can't afford" testing or don't see the need for it (relying on good developers and good tools, or relying on their customers), and organizations that spend a lot on testing and aren't always sure what they are getting out of it. So then they look to offshore the testing work to reduce costs – with mixed results, but at least it costs less.
I do agree with you that people can play multiple roles, especially in smaller organizations. I worked as a "support/tester" in one small development shop years ago. Developers reviewed and tested their code, and then the support team (me and my buddy) tested it, and our overseas distributors tested it when they got updates. Everybody helped in testing.
Thank you so much for writing this up.
I fully support "testers are not for bugs..." as put by Maxim Shulga.
This is a good article, and continues to get the idea out that testers are ultimately information providers.
I worked as a hardware engineer for a few years, and while software is a different animal (of sorts), there are some strong parallels. Hardware tends to have cleaner requirements (under the hood, at least) – input set A results in output B – but parallels can still be drawn.
In the hardware world, the engineers (the analog of developers) would spend a LOT of time making sure the functional requirements were met. At most, I would write up extensive test procedures, then hand them off to a tech to go through (there were also fixtures for test automation). So I still think that developers have a vested interest in producing quality – making sure their software does the best it can.
Where the QA team came into play was in areas of failure that required additional expertise, and some level of acceptance/integration testing. The test team could centralize the resources to perform regulatory compliance testing and reliability/performance testing.
It seems that can work in software, too. The developer should be able to render the requirements into a clean set of development goals and code to those goals. Then the test team can focus on things like Security, Performance, Stability, all those other "-ities." We can also focus on the overall user experience (customer acceptance). When I am spending my time writing low-level functional tests, I have to spend time reading and understanding the code (which the devs already do), and it takes time away from these other areas of concern. In that case, it lowers the quality in both areas, since I do not have the time for comprehensive Unit/Functional Testing, and I have lost time from Acceptance/Compliance testing.
Thanks for the article - it does a good job of describing where testers fit into the agile software process, and reiterates the idea that QA/Test people are the measurers and information providers regarding the software product.
My early career was in hardware, and I found there that the roles for design and QA were fairly well understood. As the hardware designer, it was my responsibility to see the product through design, prototyping, and so on, and to make sure the functional specifications were met. QA came in to measure things that were generally outside my expertise (regulatory compliance, environmental issues) or that required test environments on a larger scale than was practical for the individual engineer.
It seems that software should be similar. As testers, we are better suited to identifying the larger issues such as security, performance, reliability, and so on. We are also valuable as a resource for test environments that cannot easily be set up by the individual developer. I am frustrated when a developer "hands off" code with the attitude of "Please find my bugs." I would rather see them say "I defy you to find a functional bug in this code!" and expect feedback at the systems level. If I have to write functional automation, then I end up spending time reading the code and doing branch analysis, which the developers should have already done, since they are the code experts.
Thanks again for the article!
Testers ARE there to find bugs - trust me, I've found thousands - but we are also valuable for our perspective, particularly in game development. I've been playing video games my whole life, and even though I can't tell you how to fix something, I can tell you why it doesn't work. Too often developers and designers are too busy to be able to step back from their work and look at it the way a user would. That's where we come in.
Very nice article.
This might not apply to everyone, but where I work, Testers are the "encyclopedias" of the company.
Developers are very Project-focused. They can tell you every bit of detail about a Project that they have worked on. But they cannot tell you about an overall Product. They might not have been involved in all changes to the Product.
Testers, however, are the ones who collect business knowledge, immerse themselves in it. A Tester will be able to describe the entire functionality of a Product. A Developer typically won't be able to without looking at the code.
A Tester will also be interested in the "Why" (Why does this button need clicking twice?) rather than just the "How" that Developers typically focus on.
@Jim Bird
You shared the details about your test team of three members.
How many developers were working on the project?
Thanks
Yakiv