It’s not clear how much of a role QA – which in most organizations means black box testers who do manual functional testing or write automated functional acceptance tests – can or should play in an Application Security program.
Train QA, not Developers, on Security
At RSA 2011, Caleb Sima asserted that training developers in Appsec is mostly a waste of time (“Don’t Teach Developers Security”).
His reasoning: most developers won't get it; they don't have the time to worry about security even if they do; and turnover in most development teams is so high that the people you train won't be around long enough to make much of a difference.
Sima suggests starting with QA instead, because testers are paid to find where things break, and Appsec gives them more broken things to find.
Instead of putting a test team through general Appsec training, he recommends taking a more targeted, incremental approach.
Start with a security scan or pen test. Have a security expert review the results and identify the one or two highest-risk types of vulnerabilities found, problems like SQL injection or XSS.
Then get that security expert to train the testers on what these bugs are about and how to find them, and help them to explain the bugs to development. Developers will also learn something about security by working through these bugs. When all of the highest priority bugs are fixed, then train the test team on the next couple of important vulnerabilities, and keep going.
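To make the idea concrete, here is a minimal sketch of the kind of check a tester trained on SQL injection might add to an existing automated functional suite (Python, using requests with pytest-style asserts). The base URL, endpoint and parameter name are hypothetical, not taken from any particular application.

```python
# A rough sketch of the kind of check a tester trained on SQL injection
# might add to an existing functional suite. The base URL, endpoint and
# parameter name are hypothetical -- adapt them to the application under test.
import requests

BASE_URL = "https://staging.example.com"  # hypothetical test environment


def test_search_handles_sql_injection_payload_safely():
    # A classic tautology payload; a vulnerable query may return every row
    # or raise a database error that leaks details in the response body.
    payload = "' OR '1'='1"
    resp = requests.get(f"{BASE_URL}/products/search",
                        params={"q": payload}, timeout=10)

    # The application should treat this as ordinary, harmless input:
    # no server error, and no database error text leaking to the client.
    assert resp.status_code != 500
    body = resp.text.lower()
    for marker in ("sql syntax", "odbc", "ora-", "sqlite error"):
        assert marker not in body
```

The specific payload matters less than the habit it builds: feed the application hostile input, then check that it fails safely and quietly.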
Unfortunately, this won't work...
This approach is flawed in a couple of important ways.
First, it doesn’t address the root cause of software security problems: developers making security mistakes when designing and writing software. It’s a short-term bandage.
And in the short term, there is a fundamental problem with asking the QA team to take a leadership role in application security: most testers don’t understand security, even after training.
A recent study by Denim Group assessed how much developers and QA understood about application security before and after they got training. Only 22% of testers passed a basic application security test after finishing security training.
Testing is not the same as Pen Testing
This is disappointing, but not surprising. A few hours, or even a few days, of security training can't make a black box functional QA tester into an application security pen tester. Nick Coblentz points out that some stars will emerge from security training. Some testers, like some developers, will "get" the White Hat/Black Hat stuff, fall in love with it, and make the investment in time to really get good at it. However, these people probably won't stay in testing anyway – there's too much demand for talented Appsec specialists today.
But most testers won’t get good at it. Because it’s not their job. Because there are too many technical details to understand about the architecture and platform and languages for people who are just as likely to have a degree in Art History as Computer Science, who are often inexperienced, and already overworked. And these details are important – in Appsec, making even small mistakes, and missing small mistakes, matters.
Cigital has spent a lot of time helping set up Appsec programs at different companies and studying what works for these companies. They have found that:
Involving QA in software security is non-trivial... Even the "simple" black box Web testing tools are too hard to use.
In order to scale to address the sheer magnitude of the software security problem we've created for ourselves, the QA department has to be part of the solution. The challenge is to get QA to understand security and the all-important attackers' perspective. One sneaky trick to solving this problem is to encapsulate the attackers' perspective in automated tools that can be used by QA. What we learned is that even today's Web application testing tools (badness-ometers of the first order) remain too difficult to use for testers who spend most of their time verifying functional requirements…
Software [In]security: Software Security Top 10 Surprises
But there’s more to Security Testing than Pen Testing
There’s more to security testing than pen testing and running black box scans. So Appsec training can still add value even if it can’t make QA testers into pen testers. Appsec training can help testers to do a better job of testing security features and the system’s privacy and compliance requirements: making sure that user setup and login and password management work correctly, checking that access control rules are applied consistently, reviewing audit and log files to make sure that activities are properly recorded, and tracing where private and confidential data are used and displayed and stored.
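As a rough illustration of what that kind of testing can look like when automated, here is a minimal sketch that checks access control rules are applied consistently. The URLs, credentials, roles and login flow are assumptions, not anything prescribed by a particular framework.

```python
# A rough sketch of checking that access control rules are applied
# consistently: an ordinary user should never reach admin-only resources.
# The URLs, credentials, roles and login flow are assumptions.
import requests

BASE_URL = "https://staging.example.com"  # hypothetical test environment

ADMIN_ONLY_PATHS = [
    "/admin/users",
    "/admin/audit-log",
    "/reports/all-accounts",
]


def login(username, password):
    session = requests.Session()
    resp = session.post(f"{BASE_URL}/login",
                        data={"username": username, "password": password},
                        timeout=10)
    resp.raise_for_status()
    return session


def test_regular_user_cannot_reach_admin_pages():
    session = login("qa_regular_user", "a-test-password")
    for path in ADMIN_ONLY_PATHS:
        resp = session.get(f"{BASE_URL}{path}",
                           timeout=10, allow_redirects=False)
        # Expect a denial (401/403) or a redirect to the login page --
        # never a 200 with the admin content.
        assert resp.status_code in (401, 403, 302), (
            f"{path} returned {resp.status_code} for a non-admin user")
```

Tests like this fit naturally into the functional suites QA already maintains; the only new ingredient is asking what a user should not be able to do.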
And Appsec training can give testers a license to test the system in a different way.
Most testers spend most of their time verifying correctness: walking through test matrices and manual checklists, writing automated functional tests, focused on test coverage and making sure that the code conforms to the specification, or watching out for regressions when the code is changed. A lot of this is CYA verification. It has to be done, but it is expensive and a poor use of people and time. You won’t find a lot of serious bugs this way unless the programmers are doing a sloppy job. As more development teams adopt practices like TDD, where developers are responsible for testing their own code, having testers doing this kind of manual verification and regression will become less useful and less common.
This kind of testing is not useful at all in security, outside of verifying security features. You can’t prove that a system is secure – that it isn’t vulnerable to injection attacks or privilege escalation or other attacks – by running some positive tests. You need to do negative testing until you are satisfied that the risks of a successful exploit are low. You still won’t know that the system is secure, only that it appears to be “secure enough”.
Stepping off of the Happy Path
It’s when testers step off of the tightly-scripted happy path and explore how the system works that things get more interesting. Doing what ifs. Testing boundary conditions. Trying things that aren’t in the specification. Trying to break things, and watching what happens when they break.
Testing high-risk business logic like online shopping or online trading or online banking functions in real-world scenarios, testing in pairs or teams to check for timing errors or TOC/TOU problems or other race conditions or locking problems, injecting errors to see what happens if the system fails halfway through a transaction, interrupting the workflow by going back to previous steps or trying to skip the next step, repeating steps two or three or more times. Entering negative amounts or really big numbers or invalid account numbers. Watching for information leaks in error messages. Acting unpredictably.
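As a rough sketch of what a few of these negative tests might look like when automated, consider a hypothetical funds-transfer API; the endpoint, field names and account numbers are assumptions, and the logged-in session is assumed to come from a pytest fixture.

```python
# A rough sketch of negative tests around high-risk business logic --
# here a hypothetical funds-transfer API. The endpoint, field names and
# account numbers are assumptions; what matters is the shape of the inputs.
import requests

BASE_URL = "https://staging.example.com"  # hypothetical test environment

ABUSIVE_AMOUNTS = [
    "-100.00",                # negative amount: must not reverse the transfer
    "999999999999999999.99",  # absurdly large value: overflow / limit handling
    "0.001",                  # below the smallest currency unit
    "NaN",                    # not a number at all
]


def test_transfer_rejects_abusive_amounts(authenticated_session):
    # authenticated_session is assumed to be a pytest fixture that returns
    # a logged-in requests.Session for a test account.
    for amount in ABUSIVE_AMOUNTS:
        resp = authenticated_session.post(
            f"{BASE_URL}/transfers",
            json={"from": "ACC-1001", "to": "ACC-2002", "amount": amount},
            timeout=10,
        )
        # Each of these should be cleanly rejected with a validation error,
        # not accepted, and not turned into a 500 that leaks a stack trace.
        assert resp.status_code == 400, (
            f"amount={amount!r} returned {resp.status_code}")
        assert "stack trace" not in resp.text.lower()
```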
This is the kind of testing that can be done better by a QA tester who understands the domain and how the system works than by a pen tester on a short-term engagement. As long as they are willing to do a little hacking.
It shouldn’t take a lot of training, or add a lot to the cost of testing, to get testers doing some exploratory and negative testing in the high-risk areas of an application. A lot of important security bugs (and functional bugs and usability problems) can be found testing this way – bugs that can’t be found by walking through test checklists, or by running vulnerability scanners and dynamic analysis tools. Application Security training should reinforce to testers – and developers, and managers – how important it is to do this kind of testing, and that the bugs found this way are important to fix.
Moving from functional testing of security features to edge and boundary condition testing and “adversarial testing” is the first major step that QA teams need to take in playing a role in Application Security, according to Cigital’s Building Security In Maturity Model (BSIMM). From there, some QA teams may go on to integrate black box security testing tools, and possibly other more advanced security testing tools and practices.
The real value of Security Testing
But even if you can farm some security testing out to QA, you’ll still need to rely on security experts. You need someone who really understands the tools and the technical issues, who has spent a lot of time hacking, who understands security risks and who is paid to keep up with the changing threat landscape. Someone who is at least as good or better than whoever you expect to be attacking you.
This might mean relying on full-time security experts in your own organization, or contracting outside consultants to do pen tests and other reviews, or taking advantage of third party on-demand security testing platforms from companies like WhiteHat, Qualys, Veracode, HP and IBM. The important thing is to take the results of all of this testing – whether it’s done by QA or a pen tester or by a third party testing service – and act on it.
Developers – and managers – need to understand what these bugs mean and why they need to be fixed and how to fix them properly, and more importantly how to prevent them from happening in the future. The real point of testing is to provide information back to development and management, about risks, about where the design looks weak, or where there are holes in the SDLC that need to be plugged. Until we can move past test-then-fix-then-test-then-fix… to stopping security problems upfront, we aren’t accomplishing anything important. Which means that testers and developers and managers all have to understand security a lot better than they do today. So teach developers security. And testers. And managers too.