The SEI recently published some fascinating research which shows a clear relationship between software quality and software security.
The consensus of researchers is that at least half, and maybe as many as 70%, of common software vulnerabilities are fundamental code quality problems that could be prevented by writing better software. Sloppy coding. Not checking input data. Bad – or no – error handling. Brackets in the wrong spot... Better code is more secure.
Using Bug Counts to Predict Security Vulnerabilities – and vice versa
The more bugs you have, the more security problems you have.
Somewhere between 1% and 5% of software defects cause security vulnerabilities. Which means you can get a good idea of how secure an application is based on how many bugs it has. A system with 1,000 reported defects, for example, probably has somewhere between 10 and 50 security vulnerabilities lurking in it.
If you do everything right:
- Developers are trained in secure development so that they can prevent – or at least find and fix – security problems
- The system is designed and built with a deliberate focus on quality and security
- You collect/measure defect data and use it to assess and improve your development practices

…then your defect counts give you a useful way to predict how many security vulnerabilities are still waiting to be found.
Heartbleed and Goto Fail = Bad Coding
The SEI looked at recent high profile security vulnerabilities including Heartbleed and the Apple “goto fail” SSL bug, both of which were caused by coding mistakes that could have and should have been caught in code reviews or thorough unit testing (read Martin Fowler’s exhaustive analysis here). No black hat security magic here. Just standard, accepted good development practices.
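To make the “bad coding” point concrete, here is a condensed, self-contained sketch of the Apple bug (the real code lives in SSLVerifySignedServerKeyExchange in Apple’s Secure Transport library; the hash and verify calls below are stand-ins, but the control-flow mistake is the same). The duplicated goto fail; always jumps, with err still 0, so the signature check is skipped and the function reports success:

```c
#include <stdio.h>

/* Condensed sketch of Apple's "goto fail" bug (CVE-2014-1266).
 * Not the verbatim Secure Transport source -- the crypto calls are stubs --
 * but the control-flow mistake is the same. */
static int hash_update(void)      { return 0; }   /* stand-in: succeeds */
static int hash_final(void)       { return 0; }   /* stand-in: succeeds */
static int verify_signature(void) { return -1; }  /* stand-in: signature is BAD */

static int verify_server_key_exchange(void)
{
    int err;

    if ((err = hash_update()) != 0)
        goto fail;
        goto fail;   /* duplicated line: always jumps, with err still 0 */
    if ((err = hash_final()) != 0)
        goto fail;   /* never reached */
    if ((err = verify_signature()) != 0)
        goto fail;   /* never reached either -- the bad signature is never checked */

fail:
    return err;      /* returns 0 ("success") even though nothing was verified */
}

int main(void)
{
    printf("verify result: %d (0 means the exchange was accepted)\n",
           verify_server_key_exchange());
    return 0;
}
```

Any unit test that fed this function a bad signature would have failed. It just was never written, and nobody reading the code noticed the extra line.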
This research also points out the limits of static analysis tools in ensuring safe and secure code. Bugs that could have been found by people working carefully could not be found by tools: “Heartbleed created a significant challenge for current software assurance tools, and we are not aware of any such tools that were able to discover the Heartbleed vulnerability at the time of announcement”. The only way to find the Heartbleed bug with today’s leading tools is to write custom rules or overrides, which means that you have to anticipate that this code is bad in the first place. You’d be better off spending your time reviewing or testing the code more carefully instead.
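For comparison, here is a simplified sketch of the Heartbleed mistake (the real code is OpenSSL’s tls1_process_heartbeat(); names and structure are simplified here, and the fix in the comment is a paraphrase). The payload length comes straight from the attacker’s message and is never checked against the size of the record that actually arrived:

```c
#include <stdlib.h>
#include <string.h>

/* Simplified sketch of the Heartbleed mistake (CVE-2014-0160).
 * 'record' and 'record_len' describe what actually arrived on the wire;
 * 'payload_len' is a length the *attacker* claims in the message itself. */
static unsigned char *build_heartbeat_response(const unsigned char *record,
                                                size_t record_len)
{
    /* First two bytes of the record: the claimed payload length. */
    size_t payload_len = (size_t)(record[0] << 8) | record[1];
    const unsigned char *payload = record + 2;

    (void)record_len;  /* BUG: the received length is never consulted */

    unsigned char *response = malloc(payload_len);
    if (response == NULL)
        return NULL;

    /* BUG: because payload_len was never checked against record_len, this can
     * copy up to 64KB of whatever sits next to 'record' in memory back to the
     * attacker. The fix was essentially one check, roughly:
     *   if (payload_len + 2 > record_len) return NULL;  (silently discard) */
    memcpy(response, payload, payload_len);
    return response;
}
```

A reviewer who asks “where does payload_len come from, and what stops it from being bigger than the record?” finds this in minutes. A generic tool has no way to know that those two lengths are supposed to be related.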
If you got bugs, you’ll get pwned
If you have a quality problem, then you have a security problem.
Security and reliability have to be designed and engineered in. You can’t test them in:
Medium- and large-scale systems typically contain many defects and these defects do not always cause problems when the software systems are used precisely as tested…
Even a small system might require an enormous number of tests to confirm correct operations under expected conditions. As systems grow, the number of possible conditions may be infinite. For any non-trivial system, the tested area is small. Test, by necessity, focuses on the conditions most likely to be encountered and most likely to trigger a fault in the system. Test, therefore, can only find a fraction of the defects in the system.
Functional testing proves that the system works as expected in the cases you thought to test. This kind of testing, even at high levels of coverage, can’t prove that the system is reliable or secure. Pen testing, fuzzing, DAST and destructive testing stress the system in unexpected ways to see how it behaves. But pen testing can’t prove that the system is secure either – for a big system, you would need an infinite number of pen testers on an infinite number of keyboards working for an infinite number of hours to maybe find all of the bugs.
Like any other kind of testing, pen testing gives you information about the quality and completeness of the system’s design and implementation – where you made mistakes, where you missed something. The results tell you where to look deeper for other problems in the design or code, or problems in how you design or how you code. Pen testing is wasted if you don’t use this information to get to the root cause and make things better.
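Fuzzing, mentioned above, is one of the cheaper ways to throw unexpected inputs at this kind of code. As a minimal sketch, here is a libFuzzer-style harness; parse_record() is a hypothetical stand-in for whatever code in your system handles untrusted input, and the build line assumes clang with libFuzzer and AddressSanitizer support:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical parser under test -- stands in for whatever code in your
 * system handles untrusted input (a protocol record, a file header, etc.). */
int parse_record(const uint8_t *data, size_t len);

/* libFuzzer entry point: the fuzzer calls this over and over with mutated
 * inputs, and AddressSanitizer flags memory errors like the out-of-bounds
 * read behind Heartbleed.
 * Build (assuming clang with libFuzzer support):
 *   clang -g -fsanitize=fuzzer,address harness.c parse_record.c -o fuzz_parse */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    parse_record(data, size);   /* crashes and sanitizer reports = findings */
    return 0;                   /* non-crashing inputs are simply discarded */
}
```

Like pen testing, a clean fuzzing run doesn’t prove anything. Crashes and sanitizer reports just tell you where to look deeper.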
The SEI’s research makes a few things clear:
- Security and reliability go hand in hand. Security-critical systems need to be built like safety-critical systems – with the same careful attention to quality.
- You can predict how secure your system is based on the total number of bugs that have been found in the code.
- Design reviews and code reviews (including desk checking your own code) are the most effective ways to find security and reliability problems. The amount of time spent in reviews is a key indicator of system reliability and security: top performers spent 2/3 as much time in reviews as in development. For security-critical or safety-critical code, you need to get experts involved in doing reviews.
- Static analysis testing should be part of everyone’s development program. But don’t lean too heavily on it. Run static analysis before code reviews to catch basic mistakes and clean them up (see the sketch after this list for the kind of mistake these tools are good at), or to identify problem areas in the code that need to be reviewed carefully. Run static analysis after code reviews to verify that the code looks good. But don’t try to use static analysis as a substitute for code reviews.
- Focus on writing good, clean code. Most Level 1 (high severity) defects are caused by coding mistakes.
- Train developers in secure design and coding so they know what not to do, and what to look for when reviewing code, and so that they know how to fix security bugs properly.
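As a made-up example of the kind of basic mistake worth letting a tool catch before the review (assuming a C codebase and a general-purpose analyzer such as clang’s scan-build or GCC’s -fanalyzer):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* A small, made-up example of the mechanical mistakes static analysis is good
 * at flagging before a human reviewer ever sees the code. Analyzers like
 * clang's scan-build or GCC's -fanalyzer will typically report both problems. */
char *copy_name(const char *name)
{
    char *buf = malloc(strlen(name) + 1);
    strcpy(buf, name);          /* possible NULL dereference: malloc unchecked */
    return buf;
}

int main(void)
{
    char *name = copy_name("alice");
    printf("%s\n", name);
    free(name);
    printf("%s\n", name);       /* use after free */
    return 0;
}
```

Tools find this kind of mechanical mistake reliably and cheaply. They won’t find a missing bounds check that only makes sense once you understand the protocol – that’s what the reviews are for.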
Building reliable and secure systems isn't cheap and it isn't easy, especially at scale. The SEI says that you must assume that complex systems are never error free. Which means that they will never be completely secure. Our job is to do the best that we can, and hope that it is enough.