Monday, September 26, 2011

Takeaways from OWASP AppSec USA 2011

Last week I attended the OWASP AppSec USA conference in Minneapolis. It was my first time at an OWASP event, and it was an impressive show. More than 600 attendees, some high-quality speakers, lots of vendor representation.

There were several different tracks over the two days of the conference: attacks and defenses, security issues in Mobile computing and Cloud computing, thought leadership, OWASP tools, security patterns and secure SDLCs. And there were lots of opportunities to talk to vendors and smart people. Here were the highlights for me:

Jim Manico showed that there’s hope for developers to write secure code, at least when it comes to protecting web apps from XSS attacks, if they use common good sense and new frameworks and libraries like Google’s Caja, the technology behind OWASP’s Java HTML Sanitizer.
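The core idea behind these sanitizers is an allowlist: keep only tags you explicitly trust, drop everything else, and re-escape all text on output. Here is a toy sketch of that idea in Python (the real tools are Java and JavaScript; the tag list and class names here are my own invention, and production code should use a vetted library, not this):

```python
import html
from html.parser import HTMLParser

# Illustrative allowlist -- a real sanitizer policy would be richer.
ALLOWED_TAGS = {"b", "i", "em", "strong", "p"}

class AllowlistSanitizer(HTMLParser):
    DROP_CONTENT = {"script", "style"}  # tags whose text must not survive

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.DROP_CONTENT:
            self.skip += 1
        elif tag in ALLOWED_TAGS:
            self.out.append(f"<{tag}>")  # attributes deliberately dropped

    def handle_endtag(self, tag):
        if tag in self.DROP_CONTENT:
            self.skip = max(0, self.skip - 1)
        elif tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip:
            self.out.append(html.escape(data))  # re-escape all text content

def sanitize(untrusted_html: str) -> str:
    parser = AllowlistSanitizer()
    parser.feed(untrusted_html)
    parser.close()
    return "".join(parser.out)

print(sanitize("<b>hello</b><script>alert(1)</script>"))  # → <b>hello</b>
```

The point is that the sanitizer never tries to recognize "bad" input (denylists are what keep failing); it only lets known-good structure through.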

There is even some hope for writing secure Mobile apps, as long as developers are careful and informed. This is a more reassuring story than what I was hearing from security experts earlier in the year, when everyone was focused on how pathetically insecure the platforms and tools were for writing Mobile apps.

It's also possible to be secure in the Cloud, as long as you take the time to understand the Cloud services that you are using, how they are implemented and what assumptions they make, and think through your design. Which a lot of people probably aren't bothering to do.

There's too much code already out there

We’re still trying to find ways to build software that is secure. But a bigger problem for many organizations is that there is already too much code out there that is insecure. Jeff Williams at Aspect Security guesstimates that there are more than 1 trillion lines of code in the world, and around 15 million developers writing more of it. A lot of this legacy code contains serious security problems – but it is too expensive to find and fix these problems. We need better tools to do this, tools that scale up to enterprises and down to small companies.

Many organizations, especially big enterprises with thousands of applications, are relying heavily on scanning tools to find security vulnerabilities. But it takes too long and costs too much to work through all of the results and to find real risks from the long lists of findings and false positives – to figure out what problems need to be fixed and how to fix them.

HP thinks that they have an answer by correlating dynamic scanning and static code analysis results. The idea is to scan the source code, then run dynamic scans against an application under a run-time monitor which captures problems found by the dynamic scanner and records where the problems occurred in the code. By matching up these results, you can see which static analysis findings are exploitable, and it makes it easier to find and fix problems found in run-time testing. Combining these techniques also makes it possible to find new types of vulnerabilities that can’t be found using just one or the other of the approaches. I heard about this idea earlier in the year when Brian Chess presented it at SANS Appsec, but it is interesting to see how far it has come since then. Hybrid testing is limited to HP technology for now (SPI Dynamics and Fortify) and it’s limited by what problems the scanners can find, but it looks like a good step forward in the test tool space.
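The correlation step itself is easy to picture: join static findings against the code locations the runtime monitor flagged, and triage the confirmed ones first. A minimal sketch, with invented file names and findings (this is not HP's actual data model):

```python
# Hypothetical static-analysis findings, each tied to a code location.
static_findings = [
    {"id": "S1", "file": "login.py", "line": 42, "type": "sql-injection"},
    {"id": "S2", "file": "report.py", "line": 10, "type": "xss"},
    {"id": "S3", "file": "util.py", "line": 7, "type": "weak-hash"},
]

# Locations the runtime monitor recorded while the dynamic scanner
# was probing the running application.
runtime_hits = {("login.py", 42), ("report.py", 10)}

def correlate(findings, hits):
    """Split findings into runtime-confirmed (likely exploitable) and not."""
    confirmed, unconfirmed = [], []
    for f in findings:
        bucket = confirmed if (f["file"], f["line"]) in hits else unconfirmed
        bucket.append(f)
    return confirmed, unconfirmed

confirmed, unconfirmed = correlate(static_findings, runtime_hits)
print([f["id"] for f in confirmed])    # → ['S1', 'S2']
print([f["id"] for f in unconfirmed])  # → ['S3']
```

Confirmed findings jump to the top of the fix list; the unconfirmed ones still need review, since the dynamic scanner not reaching a line doesn't prove it's safe.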

Good Pen Testing and Bad Pen Testing

It’s clear talking to vendors that there are good pen tests and bad pen tests. In most cases, good pen tests are what they offer, and bad pen tests are what you would get from a competitor. Comparing one vendor to another isn't easy, since an application pen test means different things to different people, and the vendors all had different stories and offerings. Some stressed the quality of their tools, others stressed the experience and ingenuity of their testers. Some offered automated scans with some interpretation, which could work fine for plain-vanilla web apps. Some do combined source code review and scanning. Some don’t want help understanding the code or scoping an engagement – they have a fixed-price menu ranging from automated scans to manual pen tests using scanners and fuzzers and attack proxies. Others expect to spend a lot of time scoping and understanding what to test, to make sure that they focus on what's important.

SSL is broken

Moxie Marlinspike’s keynote on the problems with SSL and the CA model was entertaining but scary. He showed that the circle of trust model that underlies SSL – I trust the host because my browser trusts the CA and the CA trusts the host – is fundamentally broken now that CAs like Comodo and DigiNotar (and maybe others) have been compromised.

His solution is Convergence, an alternative way to establish trust between parties, where the client is responsible for choosing trust providers (notaries) rather than being locked into a single trust provider selected by the host.

Metrics and numbers for fun

I like numbers and statistics, so here are a few that I found especially interesting. Some of them were thrown out quickly, and I hope that I got them right.

At a panel on secure development, one expert estimated that the tax for introducing a secure SDLC, including training people, getting tools and getting people to use them, was about 15% of the total development costs. Although he qualified this number by saying that it was from a pilot, another panelist agreed that it was reasonable, so it seems a good rule of thumb to use for now.

John Steven at Cigital says they have found that an inexperienced team can expect to have 38-40 security vulnerabilities per KLOC. A mature team, following good security hygiene, can get this down to 7-10 vulnerabilities per KLOC.

Chris Wysopal at Veracode presented some interesting numbers in his presentation on Application Security Debt – you can read more on his blog about applying the technical debt metaphor to application security and building economic models for the costs of this debt. He referenced findings from the 2010 Verizon Breach study and tried to correlate them with analysis work that Veracode has done using data from testing customer applications. The Verizon study showed that the most common application-related breach was through Backdoors or hidden Command Channels, which surprised both of us, since Veracode’s own analysis shows that application Backdoors are not that common. I assumed from this that application Backdoors were so dangerous because they were so easily found and exploited by attackers. However, when I read the Verizon report again, what Verizon calls a Backdoor is a command channel opened up by malware, which is something quite different from a Backdoor left in an application, so the comparison doesn’t stand up. That leaves SQL Injection as the number one application vulnerability, which shouldn't surprise anyone.
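Since SQL Injection keeps topping these lists, it's worth a two-line reminder of the fix: bind user input as a parameter instead of pasting it into the query text. A self-contained sketch using Python's sqlite3 module (table and values invented for illustration):

```python
import sqlite3

# Toy database for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: attacker-controlled input becomes part of the SQL text,
# so the OR clause rewrites the query and dumps every row.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{attacker_input}'"
).fetchall()
print(unsafe)  # → [('alice',), ('bob',)]

# Safe: the driver binds the input as a value, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe)  # → [] -- no user is literally named "nobody' OR '1'='1"
```

The parameterized version is also what the OWASP guidance has recommended for years; string concatenation into SQL is almost never defensible.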

The “A Word”

Most of the speakers and panelists at the conference were from enterprise companies or from vendors trying to sell to the enterprise. Not surprisingly, little was said about the needs and constraints of small companies, and at the sessions I attended, the speakers had not come to terms with the reality that most companies are adopting, or have already adopted, more incremental agile development practices. At one panel, Agile development was referred to as the “A Word”.

We urgently need tools and practices that small, fast-moving teams can use, because these teams are building and maintaining a lot of the world's software. It's time for OWASP and ISC2 and the rest of the application security community to accept this and understand what it means and offer help where it's really needed.

Dumbest thing said at the conference

This award goes to one vendor on a panel on security testing, who said that it is irresponsible for developers to work with new technologies whose security risks they don't understand – that developers who work on the bleeding edge are putting their customers and businesses at unnecessary risk. Besides being blindingly obvious, it shows a complete lack of connection to the realities of software development and the businesses that rely on software (which is pretty much every business today). Dinis Cruz pointed out the absurdity of this statement by admitting that we don’t fully understand the security risks of even the common platforms that we use today, like Java and .NET. By that logic nobody should be writing software at all – or maybe it would be OK if we all went back to writing batch programs in COBOL and FORTRAN.

Impressed and Humbled

Overall I learned some good stuff, and I enjoyed the conference and the opportunity to meet some of the people behind OWASP. I was impressed and humbled by the commitment and accomplishments of the many OWASP contributors. Seeing how much they have done, talking to them, understanding how much they care, gives me some hope that we can all find a way to build better software together.
