An effective software security program has to rely on a foundation of software quality management practices and governance. Your quality management program can be lightweight and agile, but if you are hacking out code without care for planning, risk management, design, code reviews, testing, or incident management, then what you are trying to build is not going to be secure. Period. Coming from a software development background, I feel that the security community is trying to take on too much of the responsibility, and too much of the cost, for ensuring that basic good practices are in place.
But it’s more than that. If you are already doing risk management, design reviews, code reviews, testing, and change and release management, then adding security requirements, perspectives, and controls to these practices can be done simply, incrementally, and at a modest cost:
- Risk management: add security risk scenarios and threat assessment to your technical risk management program.
- Design reviews: add threat modeling where it applies, just as you should add failure mode analysis for reliability.
- Code reviews: first, you can argue, as I did earlier in my response, that a significant number of security coding problems are basic quality problems, simply a matter of writing crappy code: poor or missing input validation (a reliability problem, not just a security problem), lousy error handling (same), race conditions and deadlocks (same), and so on. If labeling these kinds of mistakes as security problems (and security costs) helps get programmers to take them seriously, then go ahead, but it shouldn’t be necessary. What you are left with, then, are the costs of dealing with the “hard” security coding problems: improper use of crypto, secure APIs, authentication and session management, and so on.
- Static analysis: as I argued in an earlier post on the value of static analysis, static analysis checks should be part of your build anyway. If the tools that you use, like Coverity or Klocwork, check for both quality and security mistakes, then your incremental costs are in properly configuring the tools and in understanding, and dealing with, the results for security defects.
- Testing: security requirements need to be tested along with other requirements, on a risk basis. Regression tests, boundary and edge tests, stress tests and soak tests, negative destructive tests (thinking like an attacker), and even fuzzing should be part of your testing practices for building robust and reliable software anyway.
- Change management and release management: ensure that someone responsible for security has a voice on your change advisory board. Add secure deployment checks and secure operations instructions to your release and configuration management controls.
- Incident management: security vulnerabilities should be tracked and managed as you would other defects. Handle security incidents as “level 1” issues in your incident management, escalation, and problem management processes.
- Training: train managers, architects, developers, and testers on security concepts, issues, threats, and practices. You need to invest in training upfront and then refresh the team. Microsoft’s SDL requires that technical staff be retrained annually, to make sure that the team is aware of changes to the threat landscape, to attack bad habits, and to reinforce good ones.
- As I described in an earlier post on our software security roadmap, we hired expert consultants to conduct upfront secure architecture and code reviews, to help define secure coding practices, and to work with our architects and development manager to plan out a roadmap for software security. Paying for a couple of consulting engagements was worthwhile and necessary to kickstart our software security program and to get senior technical staff and management engaged.
- Buying “black box” vulnerability scanning tools like IBM Rational AppScan and Nessus, and the costs of understanding and using these tools to check for common vulnerabilities in the application and the infrastructure.
- Penetration testing: hiring experts to conduct penetration tests of the application a few times per year, to check that we haven’t gotten sloppy and missed something in our internal checks, tests, and reviews, and to learn more about new attacks.
- Some extra management and administrative oversight of the security program, and of suppliers and partners.
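The code-review point above, that poor input validation is a reliability bug and a security bug at the same time, can be made concrete with a small sketch. This is a hypothetical `transfer_funds` function of my own invention, not code from any real system: without the checks, a non-integer amount crashes the program (a quality problem), and a negative amount silently credits the caller (a security problem); one validation block fixes both.

```python
def transfer_funds(balance: int, amount) -> int:
    """Deduct amount from balance, validating input up front.

    The same checks prevent reliability failures (crashes, corrupt
    state from bad types) and security failures (a negative 'amount'
    that would increase the balance instead of reducing it).
    """
    if not isinstance(amount, int):
        raise TypeError("amount must be an integer")
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount
```

The design point is that a single guard clause, written as ordinary defensive coding during a routine code review, closes both classes of defect at once; no separate "security review" is needed to catch it.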
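The testing point above, that fuzzing and negative destructive tests belong in ordinary robustness testing, can also be sketched briefly. The parser and harness below are illustrative inventions, not a real fuzzer: a toy length-prefixed parser is hammered with random inputs, and any exception other than the expected, clean `ValueError` rejection would indicate a robustness bug that is very likely also a security bug.

```python
import random


def parse_length_prefixed(data: bytes) -> bytes:
    """Toy parser: first byte declares the payload length.

    Bounds-checking the declared length against the actual input is
    exactly the kind of check that fuzzing and negative tests exercise.
    """
    if not data:
        raise ValueError("empty input")
    declared = data[0]
    if declared > len(data) - 1:
        raise ValueError("declared length exceeds input")
    return data[1:1 + declared]


def fuzz(iterations: int = 1000, seed: int = 0) -> None:
    """Feed random byte strings to the parser.

    A clean ValueError means the input was rejected properly; any
    other exception escapes this loop and fails the test run.
    """
    rng = random.Random(seed)
    for _ in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(16)))
        try:
            parse_length_prefixed(blob)
        except ValueError:
            pass  # rejected cleanly: acceptable behavior
```

A loop like this costs a few lines in an existing test suite, which is the incremental-cost argument in miniature: the harness is ordinary reliability testing that doubles as attacker-style input testing.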
1 comment:
I think the problem with this analysis is that in many organizations, security (specifically, the threat of lawsuits) is the only reason management ever considers the cost of poor quality. In most other cases, they choose to eat the costs (because they don't understand them or can't quantify them ahead of time).