Posts Tagged ‘Capers Jones’
Using 3σ Control Limits in Software Engineering
Source: Wikipedia; Control Chart
The July/August 2010 issue of IEEE Software features an article entitled “Monitoring Software Quality Evolution for Defects” by Hongyu Zhang and Sunghun Kim. The article is of interest to the software developer/tester/manager in quite a few ways. In particular, the authors report on their successful use of 3σ control limits in c-charts used to plot defects in software projects.
To put things in perspective, consider my recent assessment of the results accomplished by Quick Solutions (QSI) in two of their projects:
One to one-and-a-half standard deviations better than the mean might not seem like much to six-sigma black belts. However, in the context of the results we typically see in the software industry, the QSI results are outstanding. I have not done the exact math to determine whether those results are superior to 95%, 97%, or 98% of the software projects in Michael Mah's QSMA database, as the exact figure hardly matters once you achieve this level of excellence.
A complementary perspective is provided by Capers Jones in Estimating Software Costs: Bringing Realism to Estimating:
Another way of looking at six-sigma in a software context would be to achieve a defect-removal efficiency level of about 99.9999 percent. Since the average defect-removal efficiency level in the United States is only about 85 percent, and less than one project in 1000 has ever topped 98 percent, it can be seen that actual six-sigma results are beyond the current state of the art.
The setting of control limits is, of course, quite a different thing from the actual defect-removal efficiency numbers reported by Jones for the US or the very low defect counts reported by Mah for QSI. Having said that, driving a continuous improvement process with 3σ control limits is the best recipe for eventually reaching six-sigma results. For example, one could drive the development process using cyclomatic complexity per Java class as the quality characteristic, as in the figure at the top of this post. In that figure, a cyclomatic complexity reading higher than 10.860 (the Upper Control Limit) indicates a need to “stop the line” and attend to reducing complexity before resuming work on functions and features.
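For readers who would like to experiment with the technique, here is a minimal sketch of the c-chart arithmetic, assuming nothing beyond the standard c-chart definitions: the center line is the mean defect count c̄, and because a c-chart models Poisson-distributed counts (variance equal to the mean), the 3σ limits are c̄ ± 3√c̄. The defect counts in the sketch are invented for illustration.

```java
// Minimal c-chart sketch: compute the center line and 3-sigma control
// limits from a series of defect counts (one count per sampling period).
// The counts below are invented for illustration.
public class CChart {
    public static void main(String[] args) {
        int[] defectCounts = {7, 4, 9, 6, 8, 5, 11, 6};

        // Center line: the mean defect count per period.
        double sum = 0;
        for (int count : defectCounts) sum += count;
        double cBar = sum / defectCounts.length;

        // A c-chart assumes Poisson counts, so variance == mean and the
        // 3-sigma limits are cBar +/- 3 * sqrt(cBar), floored at zero.
        double ucl = cBar + 3 * Math.sqrt(cBar);
        double lcl = Math.max(0, cBar - 3 * Math.sqrt(cBar));

        System.out.printf("center line = %.2f, UCL = %.2f, LCL = %.2f%n", cBar, ucl, lcl);

        // "Stop the line" whenever a period's count breaches the upper limit.
        for (int i = 0; i < defectCounts.length; i++) {
            if (defectCounts[i] > ucl) {
                System.out.println("Period " + (i + 1) + " is out of control");
            }
        }
    }
}
```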
Coming on the heels of the impressive results reported by David Joyce on the use of statistical process control (SPC) techniques by the BBC, the article by Zhang and Kim is another encouraging report on the successful application of manufacturing techniques to software (and to knowledge work in general). I am not at liberty to quote from this just-published IEEE article, but here is the abstract:
Quality control charts, especially c-charts, can help monitor software quality evolution for defects over time. c-charts of the Eclipse and Gnome systems showed that for systems experiencing active maintenance and updates, quality evolution is complicated and dynamic. The authors identify six quality evolution patterns and describe their implications. Quality assurance teams can use c-charts and patterns to monitor quality evolution and prioritize their efforts.
Should You Ship This Code Before Reducing Technical Debt?!
Source: JulesH, Wikipedia, A control flow graph of a simple function
Technical debt is usually perceived as a matter of expediency: you borrow a little (time) with the intent of paying it back as soon as possible. To quote Ward Cunningham:
Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… I thought that rushing software out the door to get some experience with it was a good idea, but that of course, you would eventually go back and as you learned things about that software you would repay that loan by refactoring the program to reflect your experience as you acquired it.
As is often the case with financial debt, technical debt accrues with compound interest. Once it reaches a certain level (e.g. $1 per line of code) you stare at a difficult question:
Should I ship this code before reducing the accrued technical debt?!
The figure below, taken from An Objective Measure of Code Quality by Mark Dixon, answers the question with respect to one important component of technical debt: cyclomatic complexity. Once the cyclomatic complexity of a source code file exceeds 74, the file is for most practical purposes guaranteed to contain errors. Some of the errors in such a file might be trivial. However, a 2007 study by Capers Jones indicates that about a third of the errors found in released code are likely to be serious enough to stop an application from running or to create erroneous outputs.
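As a back-of-the-envelope illustration of how such a threshold might be applied, here is a small sketch; the file names and complexity readings are hypothetical, and in practice the figures would come from a static-analysis tool.

```java
import java.util.Map;

// Sketch: flag source files whose cyclomatic complexity exceeds the
// error-prone threshold discussed above. File names and readings are
// hypothetical stand-ins for static-analysis output.
public class ComplexityGate {
    static final int ERROR_PRONE_THRESHOLD = 74;

    public static void main(String[] args) {
        Map<String, Integer> complexityPerFile = Map.of(
                "OrderService.java", 23,
                "BillingEngine.java", 81,   // above the threshold
                "ReportWriter.java", 67);

        complexityPerFile.forEach((file, complexity) -> {
            if (complexity > ERROR_PRONE_THRESHOLD) {
                System.out.println(file + " (complexity " + complexity
                        + ") is practically guaranteed to contain errors");
            }
        });
    }
}
```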
To answer the question cited above – Should You Ship This Code Before Reducing Technical Debt?! – examine both the cost and the risk associated with the error-prone files you are about to unleash:
- The economics of defect removal clearly favor early removal over late removal: the cost of removing a defect grows exponentially as a function of time (see the sketch after this list).
- Brand risk should be first and foremost on your mind. If complexity figures higher than 74 per file are the norm rather than the exception, you are quite likely to tarnish your image through poor quality.
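Here is the sketch promised in the first bullet: a toy model in which the cost of removing a defect multiplies at every phase it survives. The $100 base cost and the 10x per-phase multiplier are assumptions chosen purely for illustration, not figures from the studies cited in this post.

```java
// Toy model of exponential defect-removal cost: the cost multiplies at
// each life cycle phase a defect survives. Base cost and multiplier are
// illustrative assumptions, not measured figures.
public class DefectRemovalCost {
    public static void main(String[] args) {
        String[] phases = {"design", "coding", "testing", "production"};
        double cost = 100.0;       // assumed cost of removal at design time
        double multiplier = 10.0;  // assumed per-phase growth factor

        for (String phase : phases) {
            System.out.printf("Removed during %s: $%,.0f%n", phase, cost);
            cost *= multiplier;    // exponential growth as a function of time
        }
    }
}
```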
If you decide to postpone the release date until the technical debt has been reduced, you can apply yourself to technical debt reduction in a biggest-bang-for-the-buck manner. The complexity analysis can identify the hot spots in your code, giving you a de facto roadmap you would be wise to follow.
Conversely, if you opt to ship the code without reducing technical debt, you might lose this degree of freedom to prioritize your “fix it” work. Customer situations and pressures might force you to attend to fixing modules that do not necessarily provide as much bang for the buck.
Postscript: Please note that the discussion in this post is strictly limited to intrinsic quality; it does not address extrinsic quality at all. In other words, reducing/eliminating technical debt does not guarantee that the customer will find the code valuable. I would suggest reading Beyond Scope, Schedule and Cost: Measuring Agile Performance on the Cutter Blog for a more detailed analysis of the distinction between the two.
Erratum: The figure above is actually taken from a blog post about the Mark Dixon paper cited in my post. See McCabe Cyclomatic Complexity: the proof is in the pudding. My apologies for the error.
Using Credit Limits to Constrain “Development on Margin”
Buying stocks on margin is broadly recognized as a risky investment strategy. Funding long-term investments with short-term debt exposes the investor to margin calls, as he or she might not be able to secure more financing when needed, and a margin call is never pleasant.
The accrual of technical debt in the course of aggressively developing functions and features is quite a similar phenomenon. The CTO is betting the functionality he/she is developing will pay off before the need to “pay back” the technical debt becomes imperative. The temptation to do so is particularly strong due to the lack of credit limits on technical debt. For all practical purposes the CTO is “developing on margin.”
In his comprehensive studies of the economics of software, Capers Jones has actually put a three-to-five-year ceiling on the economic viability of developing on margin:
Indeed, the economic value of lagging applications is questionable after about three to five years. The degradation of initial structure and the increasing difficulty of making updates without “bad fixes” tends towards negative returns on investment (ROI) within a few years.
As the CEO leading a company, or the venture capitalist funding it, you can restrain development on margin by establishing credit limits. Use a combination of static code analysis and dynamic program analysis to calculate the amount of accrued technical debt in $$ terms. (An illustration of such a calculation, as well as a breakdown of the technical debt, is given in the Sonar chart above.) Set a limit (say, $0.25 per line of code) on the amount of permitted technical debt. Once the limit is reached, developers are not allowed to continue developing new functionality: they must first reduce (and hopefully eliminate) their technical debt.
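A minimal sketch of such a credit-limit check follows. The accrued-debt and lines-of-code figures are invented for illustration; in practice they would come from your analysis tooling (e.g., a Sonar technical debt report).

```java
// Sketch of a technical-debt "credit limit" check. The debt and size
// figures below are invented; real values would come from static and
// dynamic analysis tooling such as a Sonar report.
public class DebtCreditLimit {
    public static void main(String[] args) {
        double accruedDebtDollars = 120_000;  // hypothetical tool output
        int linesOfCode = 400_000;            // hypothetical code base size
        double limitPerLine = 0.25;           // the credit limit from this post

        double debtPerLine = accruedDebtDollars / linesOfCode;
        if (debtPerLine > limitPerLine) {
            System.out.printf("Credit limit exceeded ($%.2f/LOC > $%.2f/LOC): "
                    + "stop new feature work and pay down debt%n", debtPerLine, limitPerLine);
        } else {
            System.out.printf("Within the credit limit ($%.2f/LOC): "
                    + "feature work may proceed%n", debtPerLine);
        }
    }
}
```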
A very simple litmus test is available to the CEO/VC until the code is instrumented and the analytics illustrated above are generated: ask your CTO about unit test coverage. If the coverage is low (say, <30%), chances are the technical debt is high. Whether the CTO realizes it or not, low unit test coverage is a good indicator of technical debt of all kinds. Moreover, the investment required to develop a full-fledged suite of unit tests is often the largest component of the technical debt to be paid back.
Is There Something Inherently un-Agile About ERP Software?
A reader of the post Make the Hairs on the Back of Your Neck Stand Up posed the following question:
I wonder if there’s something inherently un-Agile (and thus, unable to change fast enough to meet new business demands) about older enterprise software, or just ERP software?
The answer IMHO is size. To quote Capers Jones:
Since defect potentials tend to rise with the overall size of the application, and since defect removal efficiency levels tend to decline with the overall size of the application, the overall volume of latent defects delivered with the application rises with size. This explains why super-large applications in the range of 100,000 function points, such as Microsoft Windows and many enterprise resource planning (ERP) applications may require years to reach a point of relative stability. These large systems are delivered with thousands of latent bugs or defects.
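To make the dynamic Jones describes concrete, here is an illustrative sketch. It uses a rule of thumb Jones has published elsewhere, defect potential ≈ (function points)^1.25, combined with removal-efficiency percentages that are my own assumptions; the point is the pattern (latent defects ballooning with size), not the specific numbers.

```java
// Illustrative model: defect potential rises faster than application size
// while defect-removal efficiency (DRE) declines with size, so latent
// defects at delivery balloon. The ^1.25 exponent is a Jones rule of
// thumb; the DRE percentages are assumptions for illustration.
public class LatentDefects {
    public static void main(String[] args) {
        int[] sizesInFunctionPoints = {1_000, 10_000, 100_000};
        double[] assumedDre = {0.95, 0.90, 0.85};  // declines with size

        for (int i = 0; i < sizesInFunctionPoints.length; i++) {
            double potential = Math.pow(sizesInFunctionPoints[i], 1.25);
            double latent = potential * (1 - assumedDre[i]);
            System.out.printf("%,9d FP: ~%,.0f potential defects, ~%,.0f latent at delivery%n",
                    sizesInFunctionPoints[i], potential, latent);
        }
    }
}
```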
The phenomenon described by Jones is often exacerbated by the “ship more infrequently” syndrome. IMVU's Timothy Fritz describes it as follows:
While this may decrease downtime (things break and you roll back), the cost on development time from work and rework will be large, and mistakes will continue to slip through. The natural tendency will be to ship even more infrequently, until you aren’t shipping at all. Then you’ve gone and forced yourself into a total rewrite. Which will also be doomed.
You might choose to reduce your technical debt instead of attempting a total rewrite. Chances are you will struggle to find Elbow Room for Handling Technical Debt. My hunch is that once more than 50% of development resources are assigned to maintaining the software on an ongoing basis, it is time to get into refactoring big time. If you don't, sooner or later you are likely to find you can't afford the software you developed.
Make the Hairs on the Back of Your Neck Stand Up
Cote sent me the recent CIO Magazine article entitled ERP’s Paralysis Problem and the Repercussions for Business Everywhere. The article discusses the findings from a December 2009 study conducted by IDC and sponsored by ERP vendor Agresso, as follows:
A couple of verbatim responses from respondents should make the hairs on the back of your neck stand up: “Capital expenditure priorities are shifted into IT from other high-payback projects” just to perform necessary ERP changes, noted one respondent. Said another: “Change to ERP paralyzes the entire organization in moving forward in other areas that can bring more value.”
To make doubly certain the message gets across, the article finishes with the following “nocturnal” paragraph:
As the sun finally sets on the first decade in the new millennium, it’s high time we say good night to ERP. A new day will be starting soon, and the blemished legacy and failings of ERP’s nearly four-decade-long reign will be a distant memory.
Maybe. While ERP systems no doubt have their own particular twists, the sorry state of affairs described above holds across industries that have developed complex software systems over prolonged periods of time. Just in the past few months I have witnessed such situations in banking and health care. In a previous life I was exposed to more of the same in other industries. The software decayed and decayed, but the technical debt was never reduced. Consequently, the cost of change, any change, today is horrendous. As Jim Highsmith's chart below indicates, “once on the right of the curve, all choices are hard.”
In Estimating Software Costs, author Capers Jones quantifies the five-year cost of software application ownership (for the vendor). He examines three similar applications, each with a nominal size of 1,000 function points, as a function of the sophistication of the corresponding projects. The respective life cycle costs are as follows:
- Lagging projects: $2,316,000
- Average projects: $1,860,000
- Leading projects: $1,312,000
Jones goes on to issue the following stern warning:
All known compound objects decay and become more complex with the passage of time unless effort is exerted to keep them repaired and updated. Software is no exception… Indeed, the economic value of lagging applications is questionable after about three to five years. The degradation of initial structure and the increasing difficulty of making updates without “bad fixes” tends towards negative returns on investment (ROI) within a few years.
Enough, indeed, to make the hairs on the back of your neck stand up…
Batten Down the Hatches
“As you might expect, we are in a batten down the hatches mode.” These words, taken from an email an executive just sent, prefaced his decision not to go ahead with planned Agile projects due to the need to cut costs. His operating assumption is simple: His company must cut costs now and will somehow manage without investing in Agile software and consulting.
Real $$ are hidden in your software
In Estimating Software Costs, author Capers Jones quantifies the five-year cost of software application ownership (for the vendor). He examines three similar applications, each with a nominal size of 1,000 function points, as a function of the sophistication of the corresponding projects. The respective life cycle costs are as follows:
- Lagging projects: $2,316,000
- Average projects: $1,860,000
- Leading projects: $1,312,000
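To put a number on the gap: moving from the lagging profile to the leading profile is worth $2,316,000 − $1,312,000 = $1,004,000 over five years, roughly a 43% reduction in the cost of ownership, or about $1,000 per function point for this nominal 1,000-function-point application.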
Jones goes on to issue the following stern warning:
All known compound objects decay and become more complex with the passage of time unless effort is exerted to keep them repaired and updated. Software is no exception… Indeed, the economic value of lagging applications is questionable after about three to five years. The degradation of initial structure and the increasing difficulty of making updates without “bad fixes” tends towards negative returns on investment (ROI) within a few years.
Dwindling maintenance revenue streams complicate the situation
Revenues from maintenance services are subject to severe pressures these days as many customers are renegotiating service contracts. Igor Stenmark foresaw and foretold the phenomenon in November 2008. To quote Igor:
…sacred cows like software maintenance can become hamburger meat if users feel enough of a budget pressure.
Net net, as they say, the exec mentioned above is likely to face rising maintenance costs due to software decay over time. At the same time, revenues from maintenance contracts are bound to fall short of projections due to customers renegotiating their contracts.
The shiny new toy is not a cure
Recognizing the software maintenance conundrum they are caught in, many executives are pushing hard to develop new software to increase sales in order to compensate for the decline in revenues from software maintenance services.
What is missing is a serious commitment to improving the underlying software process. Software developed today might indeed be sold as a shiny new toy tomorrow. But unless the software process is improved, a little way down the road the new software will develop similar maintenance problems. The corrosive effect of software decay on the shiny new software will become more and more severe over time. The current business environment will, hopefully, change in a few years; the underlying laws of software development and maintenance will not. No getting around it.
No better time than now
The numbers from Capers Jones cited above illustrate the cost savings you could expect to attain through modest investments in improving software process and practices. You might have more attractive cost-saving alternatives available to you. However, if you don't, there is no better time to invest in software process and practices than now.