Thursday, July 17, 2014

Trust instead of Threats

According to Dr. Gary McGraw’s groundbreaking work on software security, up to half of security mistakes are made in design rather than in coding. So it’s critical to prevent – or at least try to find and fix – security problems in design.

For the last 10 years we’ve been told that we are supposed to do this through threat modeling (aka architectural risk analysis): a structured review of the design or architecture of a system from a threat perspective to identify security weaknesses and come up with ways to resolve them.

But outside of a few organizations like Microsoft, threat modeling isn’t being done at all, or at best is being done only inconsistently.

Cigital’s work on the Build Security In Maturity Model (BSIMM), which looks in detail at application security programs in different organizations, has found that threat modeling doesn't scale. Threat modeling is still too heavyweight, too expensive, too waterfally, and requires special knowledge and skills.

The SANS Institute’s latest survey on application security practices and tools asked organizations to rank the application security tools and practices that they used the most and found most effective. Threat modeling came in second to last.

And at the 2014 RSA Conference, Jim Routh at Aetna, who has implemented large-scale secure development programs in 4 different major organizations, admitted that he has not yet succeeded in injecting threat modeling into design anywhere “because designers don’t understand how to make the necessary tradeoff decisions”.

Most developers don’t know what threat modeling is, or how to do it, never mind practice it on a regular basis. With the push to accelerate software delivery, from Agile to One-Piece Continuous Flow and Continuous Deployment to production in DevOps, the opportunities to inject threat modeling into software development are disappearing.

What else can we do to include security in application design?

If threat modeling isn’t working, what else can we try?

There are much better ways to deal with security than threat modelling... like not being a tool.
JeffCurless, comment on a blog post about threat modeling

Security people think in terms of threats and risks – at least the good ones do. They are good at exploring negative scenarios and what-ifs, discovering and assessing risks.

Developers don’t think this way. For most of them, walking through possibilities, things that will probably never happen, is a waste of time. They have problems that need to be solved, requirements to understand, features to deliver. They think like engineers, and sometimes they can think like customers, but not like hackers or attackers.

In his new book on Threat Modeling, Adam Shostack says that telling developers to “think like an attacker” is like telling someone to think like a professional chef. Most people know something about cooking, but cooking at home and being a professional chef are very different things. The only way to know what it’s like to be a chef, and to think like a chef, is to work for some time as a chef. Talking to a chef, reading a book about being a chef, or sitting in meetings with a chef won’t cut it.

Developers aren’t good at thinking like attackers, but they constantly make assertions in design, including important assertions about dependencies and trust. This is where security should be injected into design.

Trust instead of Threats

Threats don’t seem real when you are designing a system, and they are hard to quantify, even if you are an expert. But trust assertions and dependencies are real and clear and concrete. Easy to see, easy to understand, easy to verify. You can read the code, or write some tests, or add a run-time check.
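
One of those run-time checks can be as simple as a guard clause where data crosses a trust boundary. Here is a minimal sketch in Java (the quantity rule and the MAX_QUANTITY limit are made up, stand-ins for whatever your own design asserts about incoming data):

    public class OrderBoundary {

        // Design assertion (hypothetical): no legitimate upstream system
        // ever sends more than 1,000 units in a single order.
        private static final int MAX_QUANTITY = 1000;

        // Fail fast if a caller violates the trust assumption, instead of
        // silently carrying bad data deeper into the system.
        public static int acceptQuantity(int quantity) {
            if (quantity <= 0 || quantity > MAX_QUANTITY) {
                throw new IllegalArgumentException("quantity out of range: " + quantity);
            }
            return quantity;
        }

        public static void main(String[] args) {
            System.out.println(acceptQuantity(25));   // ok: prints 25
            System.out.println(acceptQuantity(5000)); // violates the assertion: throws
        }
    }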

Reviewing a design this way starts off the same as a threat modeling exercise, but it is much simpler and less expensive. Look at the design at a system or subsystem level. Draw trust boundaries between systems or subsystems or layers in the architecture, to see what’s inside and what’s outside of your code, your network, your datacenter:

Trust boundaries are like software firewalls in the system. Data inside a trust boundary is assumed to be valid, commands inside the trust boundary are assumed to have been authorized, users are assumed to be authenticated. Make sure that these assumptions are valid. And make sure to review dependencies on outside code. A lot of security vulnerabilities occur at the boundaries with other systems, or with outside libraries because of misunderstandings or assumptions in contracts.
OWASP Application Threat Modeling

Then, instead of walking through STRIDE or CAPEC or attack trees or some other way of enumerating threats and risks, ask some simple questions about trust:

Are the trust boundaries actually where you think they are, or think they should be?

Can you trust the system or subsystem or service on the other side of the boundary? How can you be sure? Do you know how it works, what controls and limits it enforces? Have you reviewed the code? Is there a well-defined API contract or protocol? Do you have tests that validate the interface semantics and syntax?
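
One way to back up your answers is with contract tests that pin down what you believe the other side of the boundary promises, so that a quiet change on their side breaks a test instead of breaking production. A sketch using JUnit 4; the canned response and its "status=...;ref=..." wire format are invented for illustration:

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class PaymentContractTest {

        // A canned response, as captured from the other system's sandbox.
        // The format itself is hypothetical.
        private static final String SAMPLE = "status=APPROVED;ref=12345";

        @Test
        public void responseHasExactlyTheTwoDocumentedFields() {
            String[] fields = SAMPLE.split(";");
            assertEquals(2, fields.length);
            assertTrue(fields[0].startsWith("status="));
            assertTrue(fields[1].startsWith("ref="));
        }

        @Test
        public void statusIsOneOfTheDocumentedValues() {
            String status = SAMPLE.split(";")[0].substring("status=".length());
            assertTrue(status.matches("APPROVED|DECLINED|PENDING"));
        }
    }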

What data is being passed to your code? Can you trust this data – has it been validated and safely encoded, or do you need to take care of this in your code? Could the data have been tampered with or altered by someone else or some other system along the way?
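
If your code has to take care of it, positive (allowlist) validation at the boundary is the usual approach: accept only what you know is good, instead of trying to enumerate everything that is bad. A minimal sketch in Java; the username rule is hypothetical, so substitute whatever your design actually asserts about the field:

    import java.util.regex.Pattern;

    public class InputValidator {

        // Allowlist: 3-20 letters, digits or underscores, nothing else.
        private static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9_]{3,20}$");

        public static String validateUsername(String input) {
            if (input == null || !USERNAME.matcher(input).matches()) {
                throw new IllegalArgumentException("invalid username");
            }
            return input;
        }

        public static void main(String[] args) {
            System.out.println(validateUsername("alice_99"));                 // ok
            System.out.println(validateUsername("alice'; DROP TABLE users")); // throws
        }
    }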

Can you trust the code on the other side to protect the integrity and confidentiality of data that you pass to it? How can you be sure? Should you enforce this through a hash or an HMAC or a digital signature or by encrypting the data?
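
As a sketch of the HMAC option, here is what computing and verifying an authentication tag looks like with the standard javax.crypto API. The hard-coded key is a placeholder only; how keys are generated, shared and stored is a separate (and harder) problem that this sketch doesn’t address:

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class MessageAuthenticator {

        // HMAC-SHA256 over the message: the receiver recomputes the tag
        // with the shared key and compares it to the tag it was sent.
        public static byte[] tag(byte[] key, String message) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return mac.doFinal(message.getBytes(StandardCharsets.UTF_8));
        }

        // MessageDigest.isEqual does a constant-time comparison, so the
        // check doesn't leak information through timing differences.
        public static boolean verify(byte[] key, String message, byte[] expected) throws Exception {
            return MessageDigest.isEqual(tag(key, message), expected);
        }

        public static void main(String[] args) throws Exception {
            byte[] key = "placeholder-key-use-a-random-one".getBytes(StandardCharsets.UTF_8);
            byte[] t = tag(key, "amount=100;to=acct-42");
            System.out.println(verify(key, "amount=100;to=acct-42", t)); // true
            System.out.println(verify(key, "amount=999;to=acct-13", t)); // false: tampered
        }
    }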

Can you trust the user’s identity? Have they been properly authenticated? Is the session protected?
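
In a Java web app, that kind of check often lives in a servlet filter that refuses to trust the user’s identity unless there is an authenticated session. A sketch, assuming a hypothetical "userId" session attribute and /login path; where you can, lean on your framework’s own authentication mechanism rather than rolling your own:

    import javax.servlet.*;
    import javax.servlet.http.*;
    import java.io.IOException;

    public class AuthenticationFilter implements Filter {

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpSession session = request.getSession(false); // don't create one
            if (session == null || session.getAttribute("userId") == null) {
                // Fail closed: no authenticated session, no access.
                ((HttpServletResponse) res).sendRedirect("/login");
                return;
            }
            chain.doFilter(req, res);
        }

        @Override
        public void init(FilterConfig config) {}

        @Override
        public void destroy() {}
    }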

What happens if an exception or error occurs, or if a remote call hangs or times out? Could you lose data or data integrity, or leak data? Does the code fail open or fail closed?
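
The fail-open/fail-closed decision is easy to get wrong by accident, because the default behavior of a hung or failed call is often whatever the happy path does. Here is a sketch of a remote call with explicit timeouts that deliberately fails closed; the fraud-scoring service and its URL are invented for illustration:

    import java.net.HttpURLConnection;
    import java.net.SocketTimeoutException;
    import java.net.URL;

    public class FraudCheckClient {

        public static boolean isApproved(String orderId) {
            try {
                URL url = new URL("https://fraud.example.com/check?order=" + orderId);
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setConnectTimeout(2000); // ms: don't hang forever connecting
                conn.setReadTimeout(2000);    // ms: don't hang forever waiting
                return conn.getResponseCode() == 200;
            } catch (SocketTimeoutException e) {
                return false; // fail closed: no answer is treated as "no"
            } catch (Exception e) {
                return false; // fail closed on any other error too
            }
        }

        public static void main(String[] args) {
            System.out.println(isApproved("order-42"));
        }
    }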

Are you relying on protections in the run-time infrastructure or application framework or language to enforce any of your assertions? Are you sure that you are using these functions correctly?
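
For example, if you are counting on JDBC’s parameter binding to keep untrusted input out of your SQL, that protection only holds if you actually bind parameters instead of concatenating strings. A sketch (the accounts table and the connection are hypothetical):

    import java.sql.*;

    public class AccountLookup {

        // The '?' placeholder is bound, never concatenated, so the driver
        // and database treat the username strictly as data, not as SQL.
        public static String findEmail(Connection conn, String username) throws SQLException {
            String sql = "SELECT email FROM accounts WHERE username = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, username);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString("email") : null;
                }
            }
        }
    }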

These are all simple, easy-to-answer questions about fundamental security controls: authentication, access control, auditing, encryption and hashing, and especially input data validation and input trust, which Michael Howard at Microsoft has found to be the cause of half of all security bugs.

Secure Design that can actually be done

Looking at dependencies and trust will find – and prevent – important problems in application design.

Developers don’t need to learn security jargon, come up with attacker personas, build catalogs of known attacks and risk-weighting matrices, figure out how to use threat modeling tools, know what a cyber kill chain is, or understand the relative advantages of asset-centric threat modeling over attacker-centric or software-centric modeling.

They don’t need to build separate models or hold separate formal review meetings. Just look at the existing design, and ask some questions about trust and dependencies. This can be done by developers and architects in-phase as they are working out the design or changes to the design – when it is easiest and cheapest to fix mistakes and oversights.

And like threat modeling, questioning trust doesn’t need to be done all of the time. It’s important when you are in the early stages of defining the architecture or when making a major design change, especially a change that makes the application’s attack surface much bigger (like introducing a new API or transitioning part of the system to the Cloud). Any time that you are doing a “first of”, including working on a part of the system for the first time. The rest of the time, the risks of getting trust assumptions wrong should be much lower.

Just focusing on trust won’t be enough if you are building a proprietary secure protocol. And it won’t be enough for high-risk security features – although you should be trying to leverage the security capabilities of your application framework or a special-purpose security library to do this anyway. There are still cases where threat modeling should be done – and code reviews and pen testing too. But for most application design, making sure that you aren’t misplacing trust should be enough to catch important security problems before it is too late.
