Thursday, June 20, 2013

What is Important in Secure Software Design?

There are many basic architectural and design mistakes that can compromise the security of a system:

  1. Missing something important in security features like access control or auditing, or in privacy and compliance requirements;
  2. Technical mistakes in understanding and implementing defence-against-the-dark-arts security stuff like crypto, managing secrets and session management (you didn’t know enough to do something or to do it right);
  3. Misunderstanding architectural responsibilities and trust zones, like relying on client-side validation, or “I thought that the data was already sanitized”;
  4. Leaving the attack surface bigger than it has to be – because most developers don’t understand what a system’s attack surface is, or know that they need to watch out when they change it;
  5. Allowing access by default, so when an error happens or somebody forgets to add the right check in the right place, the doors and windows are left open and the bad guys can walk right in (see the sketch after this list);
  6. Choosing an insecure development platform or technology stack or framework or API and inheriting somebody else’s design and coding mistakes;
  7. Making stupid mistakes in business workflows that allow attackers to bypass checks and limits and steal money or steal information.
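
Item 5 lends itself to a concrete illustration. Below is a minimal sketch of what failing closed instead of open can look like in code; the roles and actions are hypothetical:

```java
// A minimal sketch of avoiding "allow by default": deny by default.
// The roles and actions here are hypothetical.
import java.util.Map;
import java.util.Set;

public class AccessPolicy {
    // role -> actions that role is explicitly allowed to perform
    private final Map<String, Set<String>> grants;

    public AccessPolicy(Map<String, Set<String>> grants) {
        this.grants = grants;
    }

    public boolean isAllowed(String role, String action) {
        Set<String> allowed = grants.get(role);
        // Anything not explicitly granted is denied: an unknown role,
        // a missing rule or a typo fails closed, not open.
        return allowed != null && allowed.contains(action);
    }
}
```

With deny as the default, forgetting to add a rule locks a legitimate user out, which is annoying but visible and safe; with allow as the default, the same mistake quietly opens a door.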

Learning about Secure Software Design

If you want to build a secure system, you need to understand secure design. Hopefully you won’t start by reading Secure Software Design by Richardson and Thies. While it does describe many of the major issues in application security and IT security in general, and some common threats and vulnerabilities, it (ironically, given the title) doesn't explain how to do secure software design. And too much of the “practical information” in the book is dangerously almost-but-not-quite right. Take the section on XSS: it does mention output escaping, but it doesn't explain how to do escaping properly, or that escaping output is much more important than “Scrubbing input for unnecessary characters and altering necessary but possibly dangerous characters” (however you would go about doing that safely). Other sections are mostly wrong. In the section on secure database design: no, “One of the simplest ways to protect a web application from an [sic] SQL injection attack is to validate all input parameters” is not correct, and “You should also avoid dynamic SQL and use parameterized stored procedures” is not close enough to being correct to be understood or followed properly. The book does raise awareness of application security issues, and early on the authors point readers to CERT, SANS and OWASP, so there is hope that students will find and use those resources instead of relying on this book.
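For what it’s worth, here is a sketch of the standard fixes that the book fumbles, in plain JDBC and with an output encoder. The table, column and method names are made up, and the ESAPI call assumes the library is on the classpath and configured:

```java
// SQL injection: bind untrusted input as a parameter instead of
// concatenating it into the SQL string. The customers table and the
// Connection source are hypothetical.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CustomerDao {
    public String findEmail(Connection conn, String customerId) throws SQLException {
        // The statement is compiled with a placeholder; whatever the
        // caller passes in can only ever be treated as data, not as SQL.
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT email FROM customers WHERE id = ?")) {
            ps.setString(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("email") : null;
            }
        }
    }

    // XSS: escape on output, for the context you are writing into.
    // Shown with OWASP ESAPI's encoder (assumes ESAPI is available and
    // configured); any context-aware encoder works the same way.
    public String renderName(String untrustedName) {
        return org.owasp.esapi.ESAPI.encoder().encodeForHTML(untrustedName);
    }
}
```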

Principles – Motherhood and Apple Pie, or Goodness and Rightness and So What?

Every book that takes on secure software design, even a good book like Secure and Resilient Software Development by Merkow and Raghavan, spends time going through the basic secure design principles: the importance of C and I and maybe A (confidentiality, integrity and maybe availability). Modularity and compartmentalization, separation of responsibilities, economy of mechanism (an unsimple way to say simplicity), least privilege, defence in depth certainly but not security through obscurity, complete mediation (uh huh), and psychological acceptability, and whatever else Saltzer and Schroeder wrote up almost 40 years ago.

All good and true and wise and right ideas to live by, but you can read this stuff all day (if you can stay awake through it) and it won’t help you to design a more secure system. There’s nothing clear or actionable here: it’s preaching and high-level hand waving. You can’t tell if you have done enough of it, you’ll never know if you got it right or what you missed, or what’s really important and what isn't.

Threats and Attacks and Risks – Learning to be Afraid of …something…

The rest of secure design is mostly about threats and attacks and exploits – risk-focused threat modeling exercises. Developers design something nice, and then a security expert comes in and attacks their design, looks for weaknesses and oversights, enumerates threats and walks through attack trees and vulnerabilities and tells the developers what they missed and what some theoretical attacker might be able to take advantage of to compromise the system.

This is difficult stuff for developers to understand, and difficult for them to get excited about: you’re asking developers – concrete problem solvers – to think about problems that will "probably never" happen. And to do it properly requires that you not only understand how the system works (and the technology that it works on), but also what kind of attacks are possible in what contexts (and how likely they really are), which means you need specialized experience and knowledge that most developers don’t have and can’t get easily.

But even if you know this stuff and follow a structured approach like STRIDE or maybe Trike, there’s no way to know if you’ve done a good job of threat modeling, if you've done enough of it and identified all the important problems, or if you’ve missed some important attack vector or critical vulnerability and you’re gonna be pwned anyway.

Threat modeling, at least the way it is commonly understood, with expensive meetings where architects and developers and testers and security experts and project managers get together to methodically walk through design documents, and then write up CYA paperwork afterwards, doesn’t fit with the way that most developers actually work. That’s especially true for developers on Agile teams, who do most of their design work incrementally and iteratively, constantly refining and filling in the design as they go, and for developers maintaining legacy systems under constant pressure to fix or change something that is already there as fast and cheaply as they can. There isn’t time or space to fit in threat modeling meetings and all that documentation and paperwork, and even if there was, it probably wouldn’t be the best use of it.

Even lighter-weight threat modeling hasn't made many inroads in development shops, and I am not convinced that it will.

Secure Design Checklists, Cheat Sheets and Patterns

When developers are designing and building a system, they want to look forward: towards understanding the problem they are trying to solve and what they need to build and how they can get it built quickly. Rather than looking back at what people missed or did wrong, it’s more valuable and practical and cost-effective to focus on what they should and can do upfront as part of the design – the practices and patterns and tools that they should use and what they shouldn’t, the problems that they have to look out for when they are making design decisions and trade-offs.

I've talked before about how important and useful checklists can be in software security: simple steps and things to think about when working on different design problems, to make sure that you aren't missing something important or doing something stupid.

Microsoft’s Patterns and Practices site includes an (unfortunately “retired”) secure architecture and design review checklist which covers most of the things that you need to think about when designing a secure app. In case this checklist disappears some day, a full copy of it is included in Merkow and Raghavan's book on Secure and Resilient Software Development.

OWASP has a secure design checklist, but it is not targeted to developers – it’s a tool to help an auditor run security design reviews in a document-heavy waterfall environment. There is an OWASP Application Architecture Cheat Sheet (currently in draft), which includes some good questions to ask in initial architecture and high-level design. The rest of the OWASP Cheat Sheets can be used to help designers and coders with specific application security problems – as long as you know what problems you need to solve.

There’s also been some work on secure patterns, which could be useful for developers who take a pattern-based approach to software design. The SEI’s Secure Design Pattern catalog is an attempt to include security in some common software design patterns (secure versions of Factory, Strategy, Builder, Chain of Responsibility…), or to apply patterns to some common software security problems. And there are a couple of books like Core Security Patterns (an intimidating 1000+ page list of security patterns for big standards-based J2EE apps) and Security Patterns in Practice (which has just been published). However, these patterns have not made it to the mainstream – I don’t know many real-life developers who are even aware of these patterns, never mind tried to apply them.
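To give a flavour of what these patterns look like, here is a rough sketch along the lines of the SEI catalog’s Secure Factory: the factory decides which implementation a caller gets based on the caller’s privileges, so nothing privileged is ever constructed directly. The names are illustrative, not taken from the catalog:

```java
// A rough sketch in the spirit of the SEI "Secure Factory" pattern.
// All class and method names here are illustrative.
public final class ReportFactory {

    public interface Report {
        String render();
    }

    // The privilege check lives in one place, at creation time,
    // instead of being scattered through every use of Report.
    public static Report createFor(boolean callerIsAdmin, String sensitiveData) {
        return callerIsAdmin ? new FullReport(sensitiveData) : new RedactedReport();
    }

    private static final class FullReport implements Report {
        private final String data;
        FullReport(String data) { this.data = data; }
        @Override public String render() { return data; }
    }

    private static final class RedactedReport implements Report {
        @Override public String render() { return "[redacted]"; }
    }

    private ReportFactory() {} // no instances; use createFor()
}
```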

One of the most useful tools I've come across in the secure design space is SD Elements, an online software service which helps development teams make application security decisions. You start by describing your project and its security and compliance requirements and the language(s)/platform that you’re using, and SD Elements guides you through a set of questions and options on how to deal with important security aspects of the design, implementation and testing of the system. It helps you to understand the decisions that you need to make, and holds your hand as you make them.

Security in Design, not Secure Design

Secure design shouldn't be about things that you don’t understand or can’t do anything about. Secure design should be about understanding the problems that you can and should take care of on your own, and the problems that you shouldn't try to solve yourself.

Understanding what your system’s attack surface looks like and what to look out for when you change it.

How trust zones work.

Where and why and how you should use proven application security frameworks and libraries like Shiro or ESAPI, or how to properly leverage the security capabilities of your application framework (Rails or Play or Spring or whatever…).
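As a small taste of what a proven framework buys you, here is a hedged sketch using Apache Shiro. It assumes a Shiro SecurityManager has already been configured (from shiro.ini, say), and the permission string is made up:

```java
// Authentication and authorization through Apache Shiro instead of
// hand-rolled checks. Assumes a SecurityManager is already configured;
// the "orders:cancel" permission string is hypothetical.
import org.apache.shiro.SecurityUtils;
import org.apache.shiro.authc.AuthenticationException;
import org.apache.shiro.authc.UsernamePasswordToken;
import org.apache.shiro.subject.Subject;

public class OrderService {
    public void cancelOrder(String orderId, String username, char[] password) {
        Subject subject = SecurityUtils.getSubject();
        if (!subject.isAuthenticated()) {
            try {
                // Shiro's realm handles credential matching, hashing,
                // lockout policy and so on; none of it is reinvented here.
                subject.login(new UsernamePasswordToken(username, password));
            } catch (AuthenticationException e) {
                throw new SecurityException("login failed", e);
            }
        }
        // One permission check instead of scattered role ifs.
        if (!subject.isPermitted("orders:cancel:" + orderId)) {
            throw new SecurityException("not permitted to cancel " + orderId);
        }
        // ... cancel the order ...
    }
}
```

The point isn't those few lines themselves; it's everything behind them (credential hashing, lockout, session handling) that you don't have to design yourself.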

The first step – and the most important step – is to get software designers and architects to think about security when they think about design, in the same way that they think about time-to-market and developer convenience, or performance or reliability or future proofing or technical elegance. Not just the security features that they should have stories for, but security as a continuous thread in architecture and design.

When they select tools and languages and frameworks and platforms.

When they think about architectural responsibilities and layering and patterns.

And when they work with data: identifying and tracing and protecting confidential and private information and secrets, taking care of data validation properly, and thinking about safe data access and data storage.
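To make that last point concrete, here is one small example of safe data storage: hashing passwords with the JDK’s built-in PBKDF2 instead of a bare hash. This is a minimal sketch, and the iteration count is a placeholder that you would tune for your own hardware:

```java
// Salted, slow password hashing with the JDK's PBKDF2 implementation.
// ITERATIONS is a placeholder: tune it for your own hardware.
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.security.spec.InvalidKeySpecException;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public final class PasswordHasher {
    private static final int ITERATIONS = 100000; // placeholder
    private static final int KEY_BITS = 256;

    public static byte[] newSalt() {
        byte[] salt = new byte[16]; // a fresh random salt per password
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    public static byte[] hash(char[] password, byte[] salt)
            throws NoSuchAlgorithmException, InvalidKeySpecException {
        PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_BITS);
        try {
            return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
                    .generateSecret(spec).getEncoded();
        } finally {
            spec.clearPassword(); // don't keep the password in memory
        }
    }

    private PasswordHasher() {}
}
```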

Secure design has to fit into design and how design is done. It has to be part of decisions as design decisions are being made, not bolted on afterwards in audits and reviews.
