Thursday, December 15, 2011

2011: The State of Software Security and Quality

It’s the end of the year. Time to look back on what you’ve done, your successes and mistakes, and what you’ve learned from them. I also like to look at the big picture: not just my team and the projects that I manage, or even the company that I work for, but software development in general. How are we doing as an industry, are we getting better, where are we falling behind, and what are the main drivers for developers and development managers?

Besides the usual analysis from InformationWeek, and from Forrester and Gartner (if you can afford them), there’s some interesting data available from software quality and software security vendors.

CAST Software, a vendor of code structural analysis tools, publishes an annual Report on Application Software Health based on an analysis of 745 applications from 160 companies (365 million lines of code) using CAST's static analysis platform. A 25-page executive summary of the report is available free with registration – registration also provides access to some interesting research from Gartner and Forrester Research, including a pretty good Forrester paper on application development metrics.

The code that CAST analyzed is in different languages, approximately half of it in Java. Their findings are kinda interesting:
  • They found no strong correlation between the size of the system and structural quality except for COBOL apps where bigger apps have more structural quality problems.
  • COBOL apps also tended to have more high-complexity code modules.
  • I thought that the COBOL findings were interesting, although I haven’t worked with COBOL code in a long long time. Maybe too interesting – CAST decided that the findings for COBOL were so far outside of the norm that “consequently we do not believe that COBOL applications should be directly benchmarked against other technologies”.
  • For 204 applications, information was included on what kind of application development method was followed. Applications developed with Agile methods and Waterfall methods had similar profiles for business risk factors (robustness, performance, security) but Agile methods were not as effective when it comes to cost factors (transferability, changeability), which would seem counter-intuitive, given that Agile methods are intended to reduce the cost of change.
  • The more releases per year, the less robust, less secure and less changeable the code was. Even the people at CAST don’t believe this finding.
CAST also attempts to calculate an average technical debt cost for applications, using the following formula:
(10% of low severity findings + 25% of medium severity findings + 50% of high severity findings) * # of hours to fix a problem * cost per hour for development time
The idea is that not all findings from the static analysis tool need to be fixed (maybe 10% of low severity findings and 25% of medium severity findings), but at least half of the high severity issues found need to be fixed. They assume that on average each of these fixes can be made in 1 hour, and estimate that the average development cost for doing this work is $75 per hour. This results in an average technical debt cost of $3.61 per LOC. For Java apps, the cost is much higher, at $5.42 per LOC. So your average 100 KLOC Java system carries over $500,000 worth of technical debt....
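
As a rough illustration, here is that calculation in code: a sketch only, with hypothetical finding counts (the 1 hour per fix and $75 per hour figures are the report's assumptions).

    # Sketch of CAST's technical debt formula. The finding counts used below are
    # hypothetical; the 1 hour per fix and $75/hour figures come from the report.
    def technical_debt(low, medium, high, hours_per_fix=1.0, cost_per_hour=75.0):
        findings_to_fix = 0.10 * low + 0.25 * medium + 0.50 * high
        return findings_to_fix * hours_per_fix * cost_per_hour

    # Example: an app with 2,000 low, 800 medium and 300 high severity findings
    print(technical_debt(2000, 800, 300))   # 41250.0 dollars of technical debt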

It’s difficult to say how real or meaningful these findings are. The analysis depends a lot on the extensiveness of CAST’s static analysis checkers (with different checking done for different languages, making it difficult to compare findings across languages) and is limited by the size of their customer base. As their tools improve and their customer base grows, this analysis will become more interesting, more accurate and more useful.

The same goes for the State of Software Security Report from Veracode, a company that supplies static and dynamic software security testing services. Like the CAST study, this report attempts to draw wide-ranging conclusions from a limited data set – in this case, the analysis of almost 10,000 “application builds” over the last 18 months (which is a lot less than 10,000 applications, as the same application may be analyzed more than once in this time window). The analysis focused on web apps (75% of the applications reviewed were web apps). Approximately half of the code was in Java, one quarter in .NET, the rest in C/C++, PHP, etc.

Their key findings:
  • 8 out of 10 apps fail Veracode’s security tests on the first pass: the app contains at least one high-risk vulnerability.
  • For web apps, the top vulnerability is still XSS. More than half of the vulnerabilities found are XSS, and 68% of web apps are vulnerable to XSS attacks.
  • 32% of web apps were vulnerable to SQL Injection, even though only 5% of all vulnerabilities found were SQL Injection issues (see the sketch after this list).
  • For other apps, the most common problems were in error handling (19% of all issues found) and cryptographic mistakes (more than 46% of these apps had issues with crypto).
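To make the SQL Injection finding concrete, here is a minimal sketch, not taken from the report, of the kind of flaw these scanners flag, next to the parameterized query that avoids it (the table and function names are invented):

    # Minimal sketch (not from the Veracode report) of a SQL Injection flaw
    # and the parameterized query that avoids it.
    import sqlite3

    def find_user_unsafe(conn, name):
        # Vulnerable: input like "x' OR '1'='1" becomes part of the SQL statement
        query = "SELECT * FROM users WHERE name = '" + name + "'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn, name):
        # Safe: the input is bound as data, never parsed as SQL
        return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
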
Another interesting analysis is available from SmartBear Software, a vendor of software quality tools, which recently sponsored a webinar by Capers Jones on The State of Software Quality in 2011. Capers Jones draws from a much bigger database of analysis over a much longer period of time. He provides a whirlwind tour of facts and findings in this webinar, summarizing and updating some of the material that you can find in his books.

SmartBear provides some other useful free resources, including a good white paper on best practices for code review. This is further explored in the chapter on Modern Code Review by SmartBear’s Jason Cohen in the book Making Software: What Really Works, and Why We Believe It, a good collection of essays on software development practices and the state of research in software engineering.

All of these reports are free – you need to sign up, but you will get minimal hassle – at least I haven’t been hassled much so far. I’m not a customer of any of these companies (we use competing or alternative solutions that we are happy with for now) but I am impressed with the work that these companies do to make this information available to the community.

There were no real surprises from this analysis. We already know what it takes to build good software, we just have to make sure that we actually do it.

Tuesday, December 6, 2011

Devops has made Release and Deployment Cool

Back 10 years or so when Extreme Programming came out, it began to change the way that programmers thought about testing. XP made software developers accountable for testing their own code. XPers gave programmers practices like Test-First Development and simple, free community-based automated testing tools like xUnit and FIT and FitNesse.

XP made testing cool. Programmers started to care about writing good automated tests, achieving high levels of test code coverage, and optimizing the feedback loops from testing and continuous integration. Instead of throwing code over the wall to a test team, programmers began to take responsibility for reviewing and testing their own code and making sure that it really worked. It’s taken some time, but most of these ideas have gone mainstream, and the impact has been positive for software development and software quality.
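
For anyone who hasn't seen one, a minimal xUnit-style programmer test looks something like this (Python's unittest stands in here for JUnit and the rest of the xUnit family; the discount function is just an invented example):

    # A minimal xUnit-style programmer test, written alongside (or before) the code.
    # Python's unittest stands in for JUnit and the rest of the xUnit family;
    # apply_discount is an invented example function.
    import unittest

    def apply_discount(price, percent):
        return round(price * (1 - percent / 100.0), 2)

    class DiscountTest(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertEqual(apply_discount(200.00, 10), 180.00)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

    if __name__ == "__main__":
        unittest.main()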

Now Devops is doing the same thing with release and deployment. People are finding new ways to make it simpler and easier to release and deploy software, using better tools and getting developers and ops staff together to do this.

And this is a good thing. Because release and deployment, maybe even more than testing, has been neglected by developers. It’s left to the end because it’s the last thing that you have to do – on some big, serial life cycle projects, you can spend months designing and developing something before you get around to releasing the code. Release and deployment is hard – it involves all sorts of finicky technical details and checks. To do it you have to understand how all the pieces of the system are laid out, and you need to understand the technology platform and operational environment for the system, how Ops needs the system to be set up and monitored, how they need it wired in, what tools they use and how these tools work, and you have to work through operational dependencies and compliance and governance requirements. You have to talk to different people in a different language, and learn and care about their wants and needs and pain points. It’s hard to get all of this right, it’s hard to test, and you’re under pressure to get the system out. Why not just give Ops the JARs and EARs and WARs and ZIPs (and your phone number in case anything goes wrong) and let them figure it out? We’re back to throwing the code over a wall again.

Devops, by getting Developers and Operations staff working together, sharing technology and solving problems together, is changing this. It’s making developers, and development managers like me, pay more attention to release and deployment (and post-deployment) requirements. Not just getting it done. Getting developers and QA and Operations staff to think together about how to make release and deployment and configuration simpler and faster, about what could go wrong, and then making sure that it doesn’t go wrong, for every release – not just when there is a problem or if Ops complains. Replacing checklists and instructions with automated steps. Adding post-release health checks. Extending Continuous Integration into Continuous Delivery, making it easier and safer and less expensive to release to test as well as to production. This is all practical, concrete work, and a natural next step for teams that are trying to design and build software faster.
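
As one small, concrete example of replacing a checklist step with an automated one, a post-release health check can be a short script that the deployment pipeline runs after every release. Here is a sketch, where the endpoints and expected responses are assumptions, not taken from any particular tool:

    # Sketch of a post-release health check run by the deployment pipeline.
    # The endpoints and expected status codes are assumptions for illustration.
    import sys
    import urllib.request

    CHECKS = [
        ("application ping", "https://example.com/health", 200),
        ("login page",       "https://example.com/login",  200),
    ]

    def run_checks():
        failures = []
        for name, url, expected in CHECKS:
            try:
                status = urllib.request.urlopen(url, timeout=5).status
            except Exception as error:
                failures.append("%s: %s" % (name, error))
                continue
            if status != expected:
                failures.append("%s: got HTTP %d, expected %d" % (name, status, expected))
        return failures

    if __name__ == "__main__":
        problems = run_checks()
        for problem in problems:
            print("FAILED - " + problem)
        sys.exit(1 if problems else 0)   # non-zero exit fails the release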

One difference between the XP and Devops stories is that there’s a lot more vendor support in Devops than there was in the early days of Agile development. Commercial vendors behind products like Chef and Puppet and UrbanCode (which has rebranded its Anthill Pro build and release toolset as the DevOps Platform), ThoughtWorks Studios with Go, and even IBM and HP are involved in Devops and pushing Devops ideas forward.

This is good and bad. Good – because this means that there are tools that people can use and people who can help them understand how to use them. And there’s somebody to help sponsor the conferences and other events that bring people together to explore Devops problems. Bad, because in order to understand and appreciate what’s going on and what’s really useful in Devops you have to wade through a growing amount of marketing noise. It’s too soon to say yet whether the real thought leaders and evangelists will be drowned out by vendor product managers and consultants – much like the problem that the Agile development community faces today.