Managers don’t want to think harder than they have to. They like simple rules of thumb, quick and straightforward ways of looking at problems and getting pointed in the right direction. The simpler, the better.
One of the most useful rules of thumb is the 80:20 rule:
80% of effects come from 20% of causes, and 80% of results come from 20% of effort.
It’s the flip side of diminishing returns: instead of getting less out of doing more, you can get more from doing less, by working smarter, not harder.
You can see obvious cases where the 80:20 rule applies in software without looking too hard. For example, 80% of performance improvements are found by optimizing 20% of the code – although the actual ratio is probably much closer to 90:10 or even 99:1 when it comes to performance optimization. But whether it's 80:20 or 90:10 or 70:30, the rule works essentially the same way.
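As a rough illustration of how that hot 20% gets found in practice, here is a minimal profiling sketch in Python, using the standard library's cProfile and pstats modules; the workload and function names are made-up placeholders, not anyone's real code.

```python
# Profile a toy workload and list the functions that dominate the run time.
# The workload below is a hypothetical example; point the profiler at your
# own entry point instead.
import cProfile
import pstats


def slow_lookup(items, key):
    # Deliberately naive linear scan - the kind of code a profile tends to surface.
    return [i for i in items if i == key]


def run_workload():
    data = list(range(10_000))
    for key in range(500):
        slow_lookup(data, key)


if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.runcall(run_workload)
    # Sort by cumulative time and print the top few entries; a small number of
    # functions usually accounts for most of the total.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```

Sorting the profile by cumulative time almost always turns up a short list of functions that account for the bulk of the run time – that short list is where optimization effort pays off.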
80:20 Who Uses What, and What You Really Have to Deliver
Another well-known 80:20 rule in software is that 80% of users only use 20% of features. This came out of research from the Standish Group back in 2002, where they found that:
- 45% of features were never used;
- 19% were rarely used;
- 16% were sometimes used;
- only 20% were used frequently or always.
Like the cost of change curve, this is another example of a widely-held “truth” in software development which is based on limited evidence – it would be good to see more research that backs this claim up.
This finding has heavily influenced Agile and Lean development, encouraging people to focus on delivering minimum marketable features or defining a minimum viable product, even in large-scale enterprise projects. Instead of trying to design and plan out all of the features that a system may need, come up with the smallest, tightest possible definition of what people think is important and useful in itself, prioritize the features and deliver in steps as quickly as possible.
Standish Group’s latest research shows that thinking smaller and delivering less is a key to improving the success of software projects: while more than 70% of small projects are delivered successfully, large projects have “virtually no chance of success: … more than twice the chance of being late, over budget, and missing critical features”.
“In summary, there is no doubt that focusing on the 20% of the features that give you 80% of the value will maximize the investment in software development and improve overall user satisfaction. After all, there is never enough time or money to do everything. The natural expectation is for executives and stakeholders to want it all and want it all now. Therefore, reducing scope and not doing 100% of the features and functions is not only a valid strategy, but a prudent one.”
But thinking small and delivering less faster can also come with a downside: a “reduction in value and innovation”, when people play it too safe and set the bar too low. Delivering 20% or 50% of a system won’t always be enough to succeed, even supposing that you can figure out what the right 20-50% is – some of those “extra features” are still important and necessary to somebody, even if they aren't used much. You’ll need more than a minimum viable product to redefine a market or how people work or play, to set the world on fire.
80:20 Bugs and Testing
Code quality, bugs and testing is another area where the 80:20 rule is especially useful:
80% of bugs are found in 20% of the code
90% of downtime comes from 10% (or less) of defects
Bugs cluster in certain parts of code, especially serious bugs. Most of your most serious problems will come from a small number of bugs.
“80% of the errors and crashes in Windows and Office were caused by 20% of the entire pool of bugs detected.”
Microsoft’s CEO: 80-20 Rule Applies to Bugs, Not Just Features, Oct 2002
Understanding where most of your most serious bugs are, why they got there, and what you need to do to prevent more of them is where you should be spending a lot of your time.
Some studies have found that half of your code might not have any bugs at all, while most bugs will be found in only 10-20% of the code – often the 10-20% of the code that is changed most often (see “80:20 Which code gets changed and how often” below).
Each time that you find a bug in this code, chances are that there are still more bugs left to find and fix. The more bugs you find, the more likely it is that there are still more to be found – a downward spiral.
Capers Jones says that having to work on – or work around – high-risk error-prone code is the single largest drag on developer productivity over the life of a system, and that not figuring out what code is causing you the most trouble and doing something about it is one of the most expensive mistakes that a development team can make.
Each time that you touch this code, even when you’re trying to fix it, there is a good chance that you are making it worse, not better: there is more than a 20% chance that a developer trying to fix a bug in error-prone code will accidentally introduce a new bug as a side-effect. Most of the effort put into trying to understand and fix this code, over and over again, is wasted:
“Most error-prone modules are so complex and so difficult to understand that they cannot be repaired once they are created.”
When code gets this bad, it needs extensive and “brutal refactoring” to make it understandable and safer to work with, or it needs to be “surgically removed and replaced” with new code written from scratch by somebody who knows what they are doing.
It’s not hard to identify what parts of the code are bad if you have the same people working on the same code for a while – ask anyone on the team and they’ll know where that nasty stink is coming from. In big systems and big organizations with lots of turnover, you’ll probably need to track bugs over time and mine defect data for bug clusters, rather than just fixing bugs and moving on.
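One way to do that mining is sketched below in Python, under a couple of assumptions: the history lives in git, and bug-fix commits can be recognized by a keyword or a hypothetical BUG-1234 style ticket reference in the commit message (adjust the pattern to whatever your team and bug tracker actually use). It counts how many fix commits touch each file and prints the worst offenders – a crude first cut at finding bug clusters.

```python
# Rough sketch: mine git history for bug clusters by counting how many
# bug-fix commits touch each file. FIX_PATTERN is an assumption about
# commit message conventions; change it to match your own.
import re
import subprocess
from collections import Counter

FIX_PATTERN = re.compile(r"\b(fix(es|ed)?|bug-\d+)\b", re.IGNORECASE)


def bug_fix_counts(repo_path="."):
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--name-only", "--pretty=format:--%s"],
        capture_output=True, text=True, check=True,
    ).stdout

    counts = Counter()
    in_fix_commit = False
    for line in log.splitlines():
        if line.startswith("--"):             # commit subject line (our marker)
            in_fix_commit = bool(FIX_PATTERN.search(line))
        elif line.strip() and in_fix_commit:  # file path changed in a fix commit
            counts[line.strip()] += 1
    return counts


if __name__ == "__main__":
    for path, n in bug_fix_counts().most_common(20):
        print(f"{n:4d}  {path}")
```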
80% of time spent fixing bugs is on 20% of the bugs
Some bugs are much harder to fix than others. Sometimes because the code is so bad (see the rule above). Sometimes because the problems are so hard to reproduce and debug. Sometimes because they are much deeper than they appear to be – fundamental bugs in design, bugs that you can’t code your way out of. Be prepared for those times when even your best developers won’t be able to tell you when – or even if – some bugs will be fixed.
80:20 Which code gets changed and how often
Michael Feathers has found more 80:20 power law distributions by looking at changes to code bases over time (“Discovering Startling Things from your Version Control System”):
80% of changes are made in 20% of the code
A lot of code is written once, and never changed: static and standardized interfaces, basic wiring and config, back office functions. Then there’s other code that changes all of the time: the 20% of features which are used 80% of the time and need to be tweaked and tuned and occasionally overhauled as needs change; core code that needs to be optimized; and other code that needs to be fixed a lot because it contains too many bugs (back again to the 80:20 bug cluster rule above).
Feathers has found that code that gets changed a lot also tends to get bigger as time goes on, because of a simple, built-in bias:
it is easier to add code to an existing method than to add a new method, and easier to add another method to an existing class than to add a new class.
As a result, many systems end up with a few very large classes which contain a few very large methods, and which keep getting bigger and bigger as code gets changed.
Hot spots in code are easy to find by reviewing check-in history for areas with high churn and through simple static analysis of the code base. This is where you get the most value out of refactoring, where you can do the most to keep the code from losing structure and becoming dangerously unmaintainable – and it is also the code that naturally should get refactored most often as part of making changes (changed more often = refactored more often if you’re refactoring properly).
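A simple version of that churn analysis can be scripted against version control. The sketch below (Python, assuming a git repository) counts commits per file and pairs the count with lines of code as a crude stand-in for real static analysis, then ranks files by the product; purpose-built tools do this better, but even this rough cut tends to surface the same handful of hot spots.

```python
# Rough hot-spot ranking: churn (commits per file) x size (lines of code).
# Assumes the current directory is a git working copy; lines of code is only
# a crude stand-in for a real complexity measure.
import os
import subprocess
from collections import Counter


def churn_per_file(repo_path="."):
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line.strip() for line in log.splitlines() if line.strip())


def line_count(path):
    try:
        with open(path, errors="ignore") as f:
            return sum(1 for _ in f)
    except OSError:
        return 0


if __name__ == "__main__":
    churn = churn_per_file()
    # Rank files that still exist by churn x size: big files that change a lot
    # are the likeliest refactoring candidates.
    scored = [(count * line_count(path), count, path)
              for path, count in churn.items() if os.path.isfile(path)]
    for score, count, path in sorted(scored, reverse=True)[:20]:
        print(f"score={score:8d}  changes={count:4d}  {path}")
```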
80:20 and Programming Time
The first 80% of the code is done in 20% of the time… the remaining 20% of the code takes the other 80% of the time
It usually doesn't take long to get something almost working, or something that looks like it works, especially if you’re working iteratively and incrementally, delivering frequently and fast.
But there’s a lot of work that still needs to be done “behind the scenes” to finish things up: catch the edge cases and handle errors, make sure that the system performs and scales, find and fix all of the little bugs, and get the code into shape before it can be deployed. Product Owners/Customers (and managers) often don’t understand why it takes so long to get the “last 20%” of the work done. And programmers often forget how long this takes too, and don’t include this work in their estimates. This is why a developer’s estimates are so often wrong. And why prototyping can be so dangerous in setting unrealistic expectations.
80:20 and Managing Software Development
Keeping the rough 80:20 rule in mind can save you money and time, and improve your chances of success, by keeping you focused on what’s important: the features that really matter; the parts of the code where most of your most serious bugs are (and the bugs that take the most time to fix); the parts of the code that are changing the most; and how and where your team really spends their time.