Friday, March 15, 2013

Yes Small Companies Can – and Should – Build Secure Software

"For large software companies or major corporations such as banks or health care firms with large custom software bases, investing in software security can prove to be valuable and provide a measurable return on investment, but that's probably not the case for smaller enterprises, said John Viega, executive vice president of products, strategy and services at SilverSky and an authority on software security."
Schneier on Security: Is Software Security a Waste of Time?

Bullshit.

It’s foolish and short-sighted to pretend that software security is only a problem for enterprises or enterprise software vendors. Small companies write software that big companies use, which means that those big companies are putting their customers at risk. This is happening all the time.

And it’s wrong to believe that small shops can’t do anything practical about building secure software. I'm not talking about swallowing something like Microsoft’s SDL whole – for some people, the argument seems to be that

“If you aren't following Microsoft’s SDL then you can’t build secure software, and nobody except Microsoft can follow the SDL, so you might as well give up.”

But you don't need to adopt the SDL, or any other large-scale, expensive, enterprise-quality software security program. Any small shop can take some reasonable steps that will go a long way to building secure software:

  1. First, take some time upfront to understand the business requirements for security and compliance and for handling confidential and private data – what information do you need to protect, who can see and change what data, what data do you have to encrypt, what data should you not store at all, what do you need to log? All of this is just part of understanding what kind of system you need to build.

  2. Think about your application architecture, and choose a good application framework. For all the noise about “emergent design”, almost everybody who builds business apps – even small teams following Agile/Lean methods – uses some kind of framework. It’s stupid not to. A good framework takes care of all kinds of problems for you – including security problems – which means that you can get down to delivering features faster, which is after all the point.

    If you’re a Ruby developer, Rails will take care of a lot of security problems for you – as long as you make sure to use Rails properly and you make sure to keep Rails up to date (the Rails community has made some mistakes when it comes to security, but they seem committed to fixing their mistakes).

    Play, a popular application framework for Java and Scala, includes built-in security features and controls, as do many other frameworks for Java, and frameworks for PHP and other languages, and of course there’s .NET for Microsoft platforms, which is loaded with security capabilities.

    None of these frameworks will take care of every security problem for you – even if you use them properly and make sure to keep them patched as security vulnerabilities are found. But using a good framework will reduce risk significantly without adding real costs or time to development. And when you do need to do something about security that may not be included in the framework (like properly handling encryption), there are good security libraries available like Apache Shiro that will make sure that you do things right while still saving time and costs.

  3. Write solid, defensive code: code that works and won’t boink when it is used in the real world. Check input parameters and API return values, do a good job of error handling, use safe libraries. Program responsibly.

  4. Take advantage of static analysis tools to catch bugs, including security bugs. At least understand and use any static analysis checkers that are in your IDE, and free, easy-to-use tools like FindBugs and PMD for Java, or Microsoft’s tools for .NET. They’re free, they find bugs so you don’t have to – why wouldn’t you use them?

    Most commercial tools are too expensive for small teams, although if Cigital comes through with small-bundle pricing for Secure Assist, this would finally provide small development teams with high-quality feedback on security bugs.
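
When you do step outside the framework – for example, for password storage, as mentioned in step 2 – lean on a vetted algorithm from a standard library rather than inventing your own. Here’s a minimal sketch in Python using PBKDF2 from the standard library (illustrative only – the function names are mine, and on the JVM a library like Apache Shiro plays the equivalent role):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000) -> tuple:
    """Hash a password with a fresh random salt using PBKDF2-HMAC-SHA256."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes,
                    iterations: int = 100_000) -> bool:
    """Recompute the hash and compare in constant time to avoid timing leaks."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return hmac.compare_digest(digest, expected)
```

The point of the sketch: the salt is random per password, the iteration count makes brute force expensive, and the comparison is constant-time – all decisions that a good library makes for you and that home-grown code routinely gets wrong.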

Sure, there is a lot more that you could or should do if you need to. But even modest and reasonable steps will go a long way to making software safer for customers. And there’s no reason that small teams can’t – or shouldn’t – do this.

Friday, March 8, 2013

Book Review: The Phoenix Project

Everyone who attended the “Rugged Devops” panel at RSA this year received a free copy of The Phoenix Project (by Gene Kim, Kevin Behr and George Spafford – the authors of Visible Ops), the fictional story of the education and transformation of an IT manager, an IT organization, and eventually of an entire company.

I'm not sure why they wrote The Phoenix Project as a novel. But they did. So I’ll review it first as a piece of storytelling, and then look at the messaging inside.

The reason that I don’t like didactic fiction is that so much of it is so poorly written – generally forced and artificial. I was pleasantly surprised by The Phoenix Project. The first half of the novel tells a story about an IT manager forced into taking on responsibility for saving his company. Our well-meaning hero is an ex-marine sergeant (for some reason unclear to me, the hero, the CEO, and even the mysterious guru all have a military background) with an MBA but without any ambition except to quietly provide for his family. He has been successfully running his own little part of the IT group – so successfully that he is dramatically promoted to take over all of IT operations (and so successfully that his own group is never mentioned again in the story; it seems to run on auto-pilot without him). For a successful manager, our hero knows alarmingly little about how the rest of the IT organization works, or about the business, and so is unprepared for his new responsibility. He reluctantly accepts the big job, and then regrets it immediately as he realizes what a shit storm he has walked into.

It’s a compelling narrative that draws you in and seems realistic even with the stock characters: the sociopathic SOB CEO, the unpopular everything-by-the-book CISO, the Software Development Manager who only cares about hitting deadlines (but not about delivering software that works), the Machiavellian Marketing executive, and the autistic genius that the entire IT organization of several hundred people all depend on to get all the important stuff done or fixed.

As a pure piece of storytelling, things start to unravel in the second half with the emergence of the IT / Lean Manufacturing guru – when the story ends and the devops fable begins. From this point on, the plot depends on several miraculous conversions: everyone except the marketing exec sees the light and literally transforms overnight. They even start to groom themselves nicely and dress better. Lifetime friendships are made over a few drinks, and everyone learns to trust and share: there’s a particularly cringe-inducing management meeting in which people bare their souls and weep. Conflicts and problems disappear magically, usually because of the guru’s intervention – including an unnecessary scene where the guru shows up at a crucial audit meeting and helps convince the partner of the audit firm (an old buddy of his) that there aren't any real problems with the company’s messed up IT controls (“these aren't the droids you’re looking for”).

But the real heroes are the people running the manufacturing group: the one part of this spectacularly mismanaged company that somehow functions perfectly is a manufacturing plant where everyone can go to learn about Kanban and Lean process management and so on. Without the help of the smug and smart-alecky guru – who apparently helped create this manufacturing success – and his tiresome Zen master teaching approach (and sometimes even with his help), our hero is unable to understand what’s wrong or what to do about it. He doesn't understand anything about demand management, how to schedule and prioritize project work, that firefighting is actual work, how to get out of firefighting mode, how to recognize and manage bottlenecks in workflow, or even how important it is to align IT with business priorities (where did he get that MBA anyway?). Luckily, the factory is right there for him to learn from, if he only opens his eyes and mind.

What will you learn from The Phoenix Project?

The other problem with this storytelling approach is that it takes so damn long to get to the point – it’s a 338-page book with about 50 pages of meat. Like Goldratt's The Goal (which is referenced a couple of times in this book), The Phoenix Project leads the reader to understanding through a kind of detective story. You have to be patient and watch as the hero stumbles and goes down blind alleys and ignores obvious clues and only with help eventually figures out the answers. Unfortunately, I'm not a patient reader.

This is a gentle introduction to Lean IT and devops. If you've read anything on Kanban and devops you won’t find anything surprising, although the authors do a good job of explaining how Lean manufacturing concepts can be applied to managing IT work. The ideas covered in the book are standard Lean and Theory of Constraints stuff, with a little of David Anderson’s Kanban and some devops – especially Continuous Deployment as originally described by John Allspaw and Continuous Delivery.

The guru’s lessons are mostly about visualizing and understanding and limiting demand – that you should stop taking on more work until you can get things actually working, so that you’re not spending all of your time task-switching and firefighting; identifying workflow bottlenecks and protecting or optimizing around them; how reducing batch size in development improves control and speeds up feedback; that in order to do this you have to work on simplifying and standardizing deployment; and how valuable it is to get developers and operations to work together.

My complaints aren't with the ideas – I buy into a lot of Devops and agree that Kanban and Lean have a lot to offer to IT ops and support teams (although I'm not sold on Kanban by itself for development, certainly not at scale). But I was disappointed with the unrealistic turnaround in the second half of the book. It’s all rainbows and unicorns at the end. IT, the business, development and security all start working together seamlessly. Management is completely aligned. Performance problems? No problem – just go to the Cloud. And they even bring in the famous Chaos Monkey in the last couple of pages just because.

Spoiler Alert: Everything goes so well in a few months that the company is back on track, plans to outsource IT and to break up the company are cancelled, the selfish head of marketing is canned, and our hero is promoted to CIO and put on the fast track to corporate second in command. Sorry: nothing this bad gets that good that easily. It is a fable after all, and too hard to swallow.

The Phoenix Project is a unique book – when was the last time that you read an actual novel about IT management?! It was worth reading, and if it introduces devops and Lean ideas to more people in IT, The Phoenix Project will have accomplished something useful. But it’s not a book that you can use to get things done. There are lessons and recipes and patterns, but they take work to pull out of the story. There’s no index, no good way to go back to find things that you thought were useful or interesting. So I am looking forward to the Devops Cookbook: something practical that hopefully explains how these ideas can work in businesses that don’t all look like Facebook and Twitter.

Thursday, March 7, 2013

Peer reviews for security are a waste of time?

At this year’s RSA conference, one of the panels questioned whether software security is a waste of time. A panellist, John Viega, said a few things that I agreed with, and a lot that I didn't. Especially that

“peer reviews for security are a waste of time.”

This statement is wrong on every level.

Everyone should know by now that code reviews find real bugs – even informal, lightweight code reviews.

“Reviews catch more than half of a product’s defects regardless of the domain, level of maturity of the organization, or lifecycle phase during which they were applied”. What We Have Learned About Fighting Defects

Software security vulnerabilities are like other software bugs – you find them through testing or through reviews. If peer code reviews are a good way to find bugs, why would they be a waste of time for finding security bugs?

There are only a few developers anywhere who write security plumbing: authentication and session management, access control, password management, crypto and secrets handling. Or other kinds of plumbing like the data access layer and data encoding and validators that also have security consequences. All of this is the kind of stuff that should be handled in frameworks anyway – if you’re writing this kind of code, you better have a good reason for doing it and you better know what you are doing. It’s obviously tricky and high-risk high-wire work, so unless you’re a team of one, your code and decisions need to be carefully reviewed by somebody else who is at least as smart as you are. If you don’t have anyone on the team who can review your work, then what the hell are you doing trying to write it in the first place?

Everybody else has to be responsible for writing good, defensive application code. Their responsibilities are:

  • Make sure their code works – that the logic is correct
  • Use the framework properly
  • Check input data and return values
  • Handle errors and exceptions correctly
  • Use safe routines/APIs/libraries
  • Be careful with threading and locking and synchronization

A good developer can review this code for security and privacy requirements: making sure that you are masking or encrypting or – even better – not storing PII and secrets, that the right events are audited, and that access control rules are properly followed. And they can review the logic and workflow, look for race conditions, check data validation, and make sure that error handling and exception handling are done right and that frameworks and libraries are being used carefully.
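
These checks are concrete, not abstract. For example, a reviewer looking at privacy handling might check that card numbers are masked before they ever reach a log file. A hypothetical sketch in Python – `mask_pan` is an illustrative helper I made up, not from any particular library:

```python
def mask_pan(pan: str) -> str:
    """Mask a payment card number (PAN) for logging: keep only the last 4 digits."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    if len(digits) < 4:
        # Too short to be a real PAN; mask everything.
        return "*" * len(digits)
    return "*" * (len(digits) - 4) + digits[-4:]

# A reviewer would look for this helper being used on every logging path:
print(mask_pan("4111 1111 1111 1111"))  # ************1111
```

It’s a five-line function, but whether it is consistently applied at every place a PAN can leak into a log is exactly the kind of thing a peer review catches and a pen test usually doesn’t.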

This is what code reviews are for. To look for and find coding problems. If you find these problems – and code reviews are one of the most effective ways of doing this – your code will be safer and more secure. So I don’t get it. Why are peer reviews for security a waste of time?

Tuesday, March 5, 2013

Appsec at RSA 2013

This was my second time at the RSA conference on IT security. Like last year, I focused on the appsec track, starting with a half-day mini-course on how to write secure applications for developers, presented by Jim Manico and Eoin Keary representing OWASP. It was a well-attended session. Solid, clear guidance from people who really do understand what it takes to write secure code. They explained why relying on pen testing is never going to be enough (your white hat pen tester gets 2 weeks a year to hack your app, the black hats get 52 weeks a year), and covered all of the main problems, including password management (secure storage and forgot-password features), how to protect the app from clickjacking, proper session management, and access control design. They showed code samples (good and bad) and pointed developers to OWASP libraries and Cheat Sheets, as well as other free tools.

We have to solve XSS and SQL Injection

They spent a lot of time on XSS (the most common vulnerability in web apps) and SQL injection (the most dangerous). Keary recommended that a good first step for securing an app is to find and fix all of the SQL injection problems: SQL injection is easy to see and easy to fix (change the code to use prepared statements with bind variables), and getting this done will not only make your app more secure, it also proves your organization’s ability to find security problems and fix them successfully.
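
Here’s what that fix looks like in miniature – a sketch using Python’s built-in sqlite3 module, since it runs anywhere (the same idea applies to prepared statements with bind variables in JDBC or any other database driver):

```python
import sqlite3

# An in-memory database with one privileged user, for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text,
    # so the input can change the meaning of the query.
    return conn.execute(
        "SELECT role FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name: str):
    # Safe: the driver binds the value; the input is never parsed as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # [('admin',)] -- the injection worked
print(find_user_safe(payload))    # [] -- the payload is just a weird username
```

The mechanical nature of the fix is exactly Keary’s point: every occurrence of the first pattern gets rewritten into the second, and the class of bug is gone.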

SQL injection and XSS kept coming up throughout the conference. In a later session, Nick Galbreath looked deeper into SQL injection attacks and what developers can do to detect and block them. By researching thousands of SQL injection attacks, he found that attackers use constructs in SQL that web application developers rarely use: unions, comments, string and character functions, hex number literals and so on. By looking for these constructs in SQL statements you can easily identify if the system is being attacked, and possibly block the attacks. This is the core idea behind database firewalls like Green SQL and DB Networks, both companies that exhibited their solutions at RSA.
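
The detection idea can be sketched in a few lines. This is a toy illustration of the approach Galbreath described – flagging SQL that contains constructs your application code never legitimately generates – not his actual tooling or a production database firewall:

```python
import re

# Constructs common in injection attacks but rare in typical application SQL.
SUSPICIOUS = [
    re.compile(r"\bunion\b", re.IGNORECASE),    # UNION-based data extraction
    re.compile(r"--|/\*"),                      # comments used to truncate queries
    re.compile(r"0x[0-9a-fA-F]+"),              # hex number literals
    re.compile(r"\b(char|concat|substring)\s*\(", re.IGNORECASE),  # string functions
]

def looks_like_injection(sql: str) -> bool:
    """Flag a SQL statement that contains attack-typical constructs."""
    return any(pattern.search(sql) for pattern in SUSPICIOUS)
```

A real implementation tokenizes the SQL rather than pattern-matching it, and would be tuned against the queries your application actually issues – but even this sketch shows why the technique has a low false-positive rate: ordinary CRUD statements simply don’t contain these constructs.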

On the last day of the conference, Romain Gaucher from Coverity Research asked “Why haven’t we stamped out SQL Injection and XSS yet?”. He found through a static analysis review of several code bases that while many developers are trying to stop SQL injection by using parameterized queries, it’s not possible to do this in all cases. About 15% of SQL code could not be parameterized properly – or at least it wasn't convenient for the developers to come up with a different approach. Gaucher also reinforced how much of a pain in the butt it is trying to protect an app from XSS:

“XSS is not a single vulnerability. XSS is a group of vulnerabilities that mostly involve injection of tainted data into various HTML contexts.”
It’s the same problem that Jim Manico explained in the secure development class: in order to prevent XSS you have to understand the context and do context-sensitive encoding, and hope that you don’t make a mistake. To help make this problem manageable, in addition to libraries available from OWASP, Coverity has open sourced a library to protect Java apps from XSS and SQL injection.
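
A quick illustration of why context matters: the same tainted string needs a different encoding depending on where it lands in the page. A sketch with Python’s standard library (for Java apps, the OWASP encoders and the Coverity library mentioned above fill this role):

```python
import html
from urllib.parse import quote

tainted = "<script>alert(1)</script>"

# HTML body or attribute context: entity-encode the markup characters.
# html.escape encodes &, <, > and (by default) single and double quotes.
body_safe = html.escape(tainted)

# URL / query-string context needs a completely different encoding:
# percent-encoding, not HTML entities.
url_safe = quote(tainted, safe="")

print(body_safe)  # &lt;script&gt;alert(1)&lt;/script&gt;
```

And this only covers two contexts – JavaScript strings, CSS values and attribute names each need their own rules, which is why hand-rolling the encoding (or applying the wrong one) keeps XSS alive.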

The Good

While most of the keynotes offered a chance to catch up on email, the Crypto Panel was interesting. Chinese research into crypto is skyrocketing. Which could be a good thing. Or not. I was interested to hear Dan Boneh at Stanford talk more about the research that he has done into digital certificate handling and SSL outside of browsers. His team found that in almost all cases, people who try to do SSL certificate validation in their own apps do it wrong.

Katie Moussouris at Microsoft presented an update on ISO standards work for vulnerability handling. ISO 30111 lays out a structured process for investigating, triaging and resolving software security vulnerabilities. There were no surprises in the model - the only surprise is that the industry actually needs an ISO standard for the blindingly obvious, but it should set a good bar for people who don't know where to start.

Jeremiah Grossman explained that there are two sides to the web application security problem. One half is weaknesses in web sites, like SQL injection and lousy password handling and mistakes in access control. The other half is attacks that exploit fundamental problems in browsers. Attacks that try to break out of the browser – which browser vendors put a lot of effort into containing through sandboxing and anti-phishing and anti-malware protection – and attacks that stay inside the browser but compromise data inside the browser, like XSS and CSRF, which get no attention from browser vendors, so it’s up to application developers to deal with them.

Grossman also presented some statistics on the state of web application security, using data that White Hat Security is collecting from its customer base. Recognizing that their customers are representative of more mature organizations that already do regular security testing of their apps, the results are still encouraging. The average number of vulnerabilities per app is continuing to decline year on year. SQL injection is now the 14th most common vulnerability, found in only 7% of tested apps – although more than 50% of web apps are vulnerable to XSS, for the reasons discussed above.

Gary McGraw from Cigital agreed that as an industry, software is getting better. Defect density is going down (not as fast as it should be, but real progress is being made), but the software security problem isn't going away because we are writing a lot more code, and more code inevitably means more bugs. He reiterated that we need to stay focused on the fundamentals – we already know what to do, we just have to do it.

“The time has come to stop looking for new bugs to add to the list. Just fix the bugs”.

Another highlight was the panel on Rugged Devops, which continued a discussion that started at OWASP Appsec late last year, and covered pretty much the same ground: how important it is to get developers and operations working together to make software run in production safely, that we need more automation (testing, deployment, monitoring), and how devops provides an opportunity to improve system security in many ways and should be embraced, not resisted by the IT security community.

The ideas are based heavily on what Etsy and Netflix and Twitter have done to build security into their rapid development/deployment practices. I agreed with half of the panel (Nick Galbreath and David Mortman, who have real experience in software security in Devops shops) almost all of the time, and disagreed with the other half of the panel most of the rest of the time. There’s still too much hype over continuously deploying changes 10 or 100 or 1000 times a day, and over the Chaos Monkey. Etsy moved to Continuous Deployment multiple times per day because they couldn’t properly manage their release cycles – that doesn't mean that everyone has to do the same thing or should even try. And you probably do need something like Chaos Monkey if you’re going to trust your business to infrastructure as unreliable as AWS, but again, that’s not a choice that you have to make. There’s a lot more to devops; it’s unfortunate that these ideas get so much of the attention.

The Bad and the Ugly

There was only one low point for me – a panel with John Viega formerly of McAfee and Brad Arkin from Adobe called “Software Security: a Waste of Time”.

Viega started off playing the devil’s advocate, asserting that most people should do nothing for appsec, it’s better and cheaper to spend their time and money on writing software that works and deal with security issues later. Arkin disagreed, but unfortunately it wasn't clear from the panel what he felt an organization should do instead. Both panellists questioned the value of most of the tools and methods that appsec relies on. Neither believed that static analysis tools scale, or that manual security code audits are worth doing. Viega also felt that “peer reviews for security are a waste of time”. Arkin went on to say:

“I haven’t seen a Web Application Firewall that’s worth buying, and I've stopped looking”
“The best way to make somebody quit is to put them in a threat modelling exercise”
“You can never fuzz and fix every bug”
Arkin also argued against regulation, citing the failure of PCI to shore up security for the retail space – ignoring that the primary reason that many companies even attempt to secure their software is because PCI requires them to take some responsible steps. But Arkin at least does believe that secure development training is important and that every developer should receive some security training. Viega disagreed, and felt that training only matters for a small number of developers who really care.

This panel was like a Saturday Night Live skit that went off the rails. I couldn't tell when the panellists were being honest or when they were ironically playing for effect. This session lived up to its name, and really was a waste of time.

The Toys

This year’s trade show was even bigger than last year, with overflow space across the hall. There were no race cars or sumo wrestlers at the booths this year, and fewer strippers (ahem models) moonlighting (can you call it "moonlighting" if you're doing it during the day?) as booth bunnies although there was a guy dressed like Iron Man and way too many carnival games.

This year’s theme was something to do with Big Data in security, so there were lots of expensive analytics tools for sale. For appsec, the most interesting thing that I saw was Cigital Secure Assist, a plug-in for different IDEs that provides fast feedback on security problems in code (Java, .NET or PHP) every time you open or close a file. The Cigital staff were careful not to call this a static analysis tool (they’re not trying to compete with Fortify or Coverity or Klocwork), but what excited me was the quality of the feedback, the small client-side footprint, and that they intend to make it available for direct purchase over the web for developers at a very reasonable price point, which means that this could finally be a viable option for smaller development shops that want to take care of security issues in code.

All in all, a good conference and a rare opportunity to meet so many smart people focused on IT security. I still think for pure appsec that OWASP’s annual conference is better, but there's nothing quite like RSA.