This post is based on feedback that I shared with some of the team working on the next OWASP Development Guide, in my review of an early draft of the introductory section on risk management. I’m not sure if or how my feedback will be reflected in the new guide, since the project has recently undergone a change in leadership and there has been a lot of rethinking and resetting of expectations. So I have taken some time to put my thoughts together here and think through the issues some more.
Risk Management in Software Development
As software development managers or project managers, our focus (and our training) is on how to identify and manage risks that could impact the delivery of a project. We learn to avoid project failure through project management structure and controls, by following good practices, and by actively and continuously managing risks inside and outside of the project team, including:
- Schedule and estimation risks
- Requirements and scope risks – the problem of managing change
- Cost and budget and financial/funding risks
- Staffing and personnel risks: team skills and availability, turnover, subcontractors, dependencies on partners
- Business strategy risks, portfolio risks, ROI/business case
- Stakeholder and sponsorship risks – the basic politics of business
- Program risks – interfaces and dependencies with other projects
- Legal and contracting risks
- Technical risks: architecture, platforms, tooling – how well do we understand them, and are we on the bleeding edge (will it work)?
Discovering and managing risks from a security perspective is different: the perspective that you need to take is different, and the issues that you need to manage are different.
To find software security risks, you need to think beyond the risks of delivery and consider the operational and business context of the system. You need to look at the design of the system and its infrastructure, IT and business operations, your organization and its security posture, and your organization’s business assets.
Assets: What does the business have that is worth stealing or damaging?
Think like an attacker. Put yourself in the position of a motivated and skilled attacker: most of us can ignore the possibility of being attacked by script kiddies or amateurs who want to show off how smart they are. The bigger, more serious threat now comes from either disgruntled insiders or former employees, or from motivated professional criminals or even nation states.
Every business has something worth stealing or damaging. You need to understand what somebody would want, and how badly they might want it:
- Confidential and personal data – are you holding private, personally identifiable or confidential data on customers, on your partners, your personnel, or anyone else?
- Financial payments, or other high-value transactions – are you handling them?
- Information about buying patterns or other business activities that would be valuable to competitors or other outside parties.
- Information about your financials, investments and spending, or your business plans and strategy.
- Intellectual Property: research data, design work, or information not about what you are doing but how you are doing it – your operational systems, supply chain management. Or the designs and algorithms that drive the technology platform – it may not be the data behind the system that is valuable; the target could be the system itself, your technical knowledge.
Start with data and other assets, then look at the systems that you are building and supporting.
Is the system a critical online service? In rare cases, the system could be part of critical infrastructure (electrical power transmission, or a core financial system such as the NYSE, or maybe an emergency notification system). Or you may be running a completely online business – if the system is down, your business stops. Such systems may be vulnerable to Denial of Service attacks or other attacks that could affect service over an extended period of time. Although DDoS attacks are so 2006, it’s worth remembering what happened to those offshore Internet betting systems held to ransom by distributed DoS attacks a few years ago…
Or attackers could use your system as a launch platform to attack other more valuable systems – by compromising and exploiting your connectivity and trust relationships with other, more strategic or valuable systems or organizations.
Starting early with the idea of identifying assets under threat naturally supports threat modeling later.
Attack Surface: Finding the open doors and windows
Once you know what’s valuable to bad guys, you need to consider how they could get it. Look at your systems, identify and understand the paths into and out of the system, and what opportunities are offered to an attacker. Do this through high-level attack surface analysis: walk around the house, check to see how many doors and windows there are, and how easy they are to force open. The idea behind attack surface analysis is straightforward: the fewer entry points, and the harder they are to access, the safer you are.
Most enterprise systems, especially e-commerce systems, have a remarkable number of interfaces to clients and to other systems, including shared services and shared data. Focus on public and remote interfaces. What is reachable or potentially reachable by outside parties, especially unauthenticated / anonymous access?
What other systems are we sharing information with and how? What do we know about these other systems, can we trust them, rely on them?
Is the application client-facing across a private network, or public-facing across the Internet? Do you offer an application API to partners or customers to build their own interfaces? A desktop client? A browser-based web client? How much dynamic content? Online ordering, queries? How rich an experience does your client provide, using what technologies: Ajax, Flash, Java, Silverlight, …? What about mobile clients?
How much personalization and customization do you offer to your customers? More options and more combinations to design, test and review mean more chances to miss something or get something wrong.
Then you look behind the attack surface to the implementation details for each interface, at the technology stack, the trust boundaries, look at the authorization and authentication controls, trace the flow of control and flow of data. And do this each time you make changes or create a new type of interface. This is the basis of threat modeling.
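To make this concrete, here is a rough sketch (in Python, with made-up entry points – not from any real system) of the kind of attack surface inventory described above: list the interfaces, note which are remote and which can be reached without authentication, and use that to decide where to look first:

```python
# Hypothetical sketch: a simple inventory of entry points, used to flag
# the riskiest part of the attack surface (remote AND unauthenticated).
# The entries below are invented for illustration.

entry_points = [
    # (interface, reachable remotely?, requires authentication?)
    ("public REST API /orders",  True,  False),
    ("admin web console",        True,  True),
    ("internal batch file drop", False, True),
]

# Remote, unauthenticated interfaces are the open doors and windows:
high_risk = [name for name, remote, authed in entry_points
             if remote and not authed]
print("Review first:", high_risk)
```

The same list is worth re-walking every time a new interface type is added, since each new entry point widens the surface.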
Capabilities and Countermeasures: How hard / expensive is it for them to get in?
What is the state of your defenses, your protective controls?
How secure is the network? Are the server OS, database, and the rest of the technology stack hardened? Are your patches up to date – are you sure?
Was the application built in a secure way? Has it been maintained in a secure way? Is it deployed and operated in a secure way? How do you know? When was your last pen test? Your last security design review? How many outstanding security vulnerabilities or bugs do you have in your development backlog?
Are there any regulatory or legal requirements that need to be satisfied? PCI, HIPAA, GLBA, … Do you understand your obligations, are you satisfying them?
Do you know enough to know how much trouble you might be in? What is the security posture/capability of the organization, of your team? Do you have someone (or some group) in the company responsible for setting security policies and helping with security-based decisions? Has the development team been trained in security awareness, and in defensive coding and security testing? Is the team following a secure SDLC? Are you prepared to deal with security incidents – do you have an incident response team, do they know what to do?
It is important for the stakeholders to understand upfront what we know and what we don’t know, and how confident they should be in the team’s ability to deliver a secure solution – and in the team’s ability to understand and anticipate security risks in the first place. There are a couple of good, freely available frameworks for assessing your organization’s software security capabilities. OWASP’s SAMM framework is one that I have used before with some success. Another comprehensive organizational assessment framework is Cigital’s BSIMM, which has been built up using data from 30 different software security programs, generally at larger companies.
Back to Managing the Risks
Now you can assess your risk exposure: the likelihood of a successful attack, and the impact, the cost to your company if an attack was successful. With this information you can decide how much more to spend on defenses, and put into place a defensive action plan.
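As a back-of-the-envelope illustration of likelihood × impact (the threats and dollar figures below are invented, purely to show the arithmetic), exposure can be estimated as an expected annual loss:

```python
# Back-of-the-envelope risk exposure: likelihood x impact.
# All threats and numbers below are invented for illustration.

threats = [
    # (description, estimated successful attacks per year, cost per incident $)
    ("SQL injection against order form", 0.5, 250_000),
    ("Stolen laptop with customer data", 0.2, 400_000),
    ("DDoS outage on public storefront", 1.0, 50_000),
]

# Expected annual loss for each threat, highest first:
exposures = sorted(
    ((name, likelihood * impact) for name, likelihood, impact in threats),
    key=lambda pair: pair[1], reverse=True)

for name, annual_loss in exposures:
    print(f"{name}: ${annual_loss:,.0f}/year")
```

Numbers like these are only ever rough, but ranking exposures this way gives you a defensible starting point for deciding how much more to spend on defenses.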
Risk management is done at the strategic level first: working with business stakeholders, managers, the people who make business decisions, who decide where and how to spend money. You need to describe risks in business terms, spend some time selling and educating. The point here is to secure a mandate for security: an agreement with the people who run the business on security priorities and goals and budgets.
Then you move to tactical implementation of your security mandate: figuring out where and how much to focus, what technical and project-based trade-off decisions to make within your mandate.
Start at a high level, with a rough H/M/L rule-of-thumb rating of the probability and impact of each risk or risk area. Use this initial assessment, and your understanding of the team’s security capabilities, to determine where to dig deeper and how deep to dig, where you will need to focus more analysis, reviews, testing.
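A simple way to mechanize that H/M/L triage (the scoring and cutoffs here are my own illustrative choices, not a standard):

```python
# Hypothetical sketch of a rough H/M/L risk triage matrix.
# The numeric mapping and thresholds are illustrative assumptions.

LEVELS = {"L": 1, "M": 2, "H": 3}

def triage(probability: str, impact: str) -> str:
    """Combine H/M/L probability and impact into a rough priority."""
    score = LEVELS[probability] * LEVELS[impact]
    if score >= 6:
        return "dig deeper"      # focus analysis, reviews, testing here
    if score >= 3:
        return "review"
    return "accept for now"

print(triage("H", "M"))  # -> dig deeper
print(triage("L", "L"))  # -> accept for now
```

The point of a rule-of-thumb matrix like this is consistency, not precision: it tells you where the deeper analysis effort should go first.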
As you find more, and more specific, risks and weaknesses through threat modeling or architectural risk analysis, through code reviews, or pen testing or fuzzing or whatever, consider using Microsoft’s DREAD risk assessment model. I like this model because it asks you to evaluate each risk from different perspectives and forces you to answer more questions:
D: Damage potential, what would happen to the business?
R: Reproducibility – how often does the problem come up, does it happen every time?
E: Exploitability – how easy is it to take advantage of, how expensive, how skilled does the bad guy need to be, what kind of tools does he need?
A: Affected users – who (how important) and how many?
D: Discoverability: how easy is it to find?
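Putting the DREAD factors together is usually done by rating each one on a fixed scale and averaging; here is a minimal sketch (the 0–10 scale and the example ratings are assumptions for illustration):

```python
# Minimal DREAD scoring sketch: each factor rated 0-10 (an assumed scale),
# overall risk taken as the average of the five ratings.

def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    factors = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    assert all(0 <= f <= 10 for f in factors), "ratings are 0-10"
    return sum(factors) / len(factors)

# Illustrative ratings for a made-up reflected XSS finding:
score = dread_score(damage=5, reproducibility=8, exploitability=7,
                    affected_users=6, discoverability=9)
print(f"DREAD score: {score}")  # (5+8+7+6+9) / 5 = 7.0
```

Whatever scale you pick, the value of the exercise is less in the final number than in being forced to answer each of the five questions for every risk.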
As you identify risks, you go back and apply generic risk management methods to determine the cost trade-offs and your response:
- accept (plan for the cost, or choose to ignore it because the likelihood is small and/or the cost to fix is high)
- avoid (do something else, if you can)
- prevent (plug the hole, fix the problem)
- reduce (take steps to reduce the likelihood or impact: put in early-warning tools to detect attacks, like an IDS, so that you can react quickly; or contain the risk through firewalls, isolation and partitioning, layered defenses, ...)
Which response you choose depends on:
- informed judgment
- legal requirements and contractual commitments
- the business’ general tolerance for any kind of risk, and the company’s willingness to face risks – a startup for example has a much higher tolerance for risk than an established brand
- politics: some issues aren’t fundamentally important to the business, but they are important to somebody important – and some issues are important to the business (or should be), but unfortunately aren’t important to anybody important
- and economic trade-offs – the cost today vs the potential cost tomorrow.
1 comment:
I can't believe I'm saying this, but there's basically almost no point in testing (i.e. risk assessing) apps or software components if there is not a list of the current controls implemented and compared as a gap analysis.
In other words, risk management for appsec is all about gap analysis i.e. comparing existing controls (currently implemented) vs. the ideal controls (what controls should be in place).
In between gap analysis churn (at a baseline), testing and app assessments should occur. Rating the success of these isn't necessary -- let me tell you something to make this easy on you: they always fail. There is always a high or critical finding as a result of proper appsec testing if the currently implemented control is not the optimal control.
If the optimal controls are implemented, well then risk management becomes a different scenario completely. Heat maps based on qualitative ratings can be used for both incidents as well as pre-production app assessments. I'm sure you can go further in-depth and I will if you want me to. Give me a use case or a real world scenario and I'll say more about this. Perhaps it would just be better to point you to "Information Security Management Metrics" by Krag Brotby.