The Agile Executive

Making Agile Work

Posts Tagged ‘Statistical Process Control’

Technical Debt: Assessment and Reduction


Below is the detailed outline for my August 8, 1:30-5:00 PM Technical Debt workshop at Agile 2011. I look forward to meeting and interacting with you at the conference before, during, and after this workshop!



Technical Debt: Assessment and Reduction

Part I: Technical Debt in the Overall Context of the Software Process

  • A Holistic Model of the Software Process
  • Two Aspects of Output
  • Three Aspects of Technical Debt
  • Six Aspects of Software

Part II: What Really is Technical Debt?

  • What’s in a Metaphor?
  • Code Analysis
  • Time is Money
  • Monetizing Technical Debt
  • Typical Stakeholder Dialog Around Technical Debt
  • Analysis of the Cassandra Code
  • Project Dashboard

Part III: Case Study – NotMyCompany, Inc.

  • NotMyCompany Highlights
  • Modernizing Legacy Code
  • Error Proneness

Part IV: The Tricky Nature of Technical Debt

  • The Explicit Form of Technical Debt
  • The Implicit Form of Technical Debt
  • The Strategic Impact of Technical Debt
  • No Good Strategy Following Prolonged Neglect

Part V: Unified Governance

  • How We View Success
  • Three Core Metrics
  • Productivity, Affordability, Risk
  • What is the Real ROI?

Part VI: Process Control Models

  • A Typical Technical Debt Pattern
  • Process Control View of Scrum
  • Integration of Technical Debt in the Agile Process
  • Using Statistical Process Control Methods

Part VII: Reducing Technical Debt

  • A Framework for Thinking about and Acting on Technical Debt Issues
  • Portfolio Governance

Part VIII: Takeaways

  • Nine Simple Takeaways
  • Connecting the Dots

The Real Cost of One Trillion Dollars in IT Debt: Part II – The Performance Paradox


Some of the business ramifications of the $1 trillion in IT debt have been explored in the first post of this two-part analysis. This second post focuses on the “an ounce of prevention is worth a pound of cure” aspects of IT debt. In particular, it proposes an explanation of why prevention was so often neglected in the US over the past decade, and very possibly longer. This explanation is not meant to dwell on the past. Rather, it studies the patterns of the past in order to provide guidance on what you could and should do in the future to rein in technical debt.

The prevention vis-à-vis cure trade-off in software was illustrated by my colleague and friend Jim Highsmith in the following figure:

Figure 1: The Technical Debt Curve

As Jim astutely points out, “once on far right of curve all choices are hard.” My experience, as well as that of various Cutter colleagues, has shown that they are indeed very hard. The reason is simple: on the far right, the software controls you more than you control it. The manifestations of technical debt [1] in the form of pressing customer problems in the production environment force you into a largely reactive mode of operation. This reactive mode of operation is prone to a high error-injection rate: you introduce new bugs while you fix old ones. Consequently, progress is agonizingly slow and painful. It is often characterized by “never-ending” testing periods.

In Measure and Manage Your IT Debt, Gartner’s Andrew Kyte put his finger on the mechanics that lead to the accumulation of technical debt: “when budgets are tight, maintenance gets cut.” While I do not doubt Andrew’s observation, it does not answer a deeper question: why would maintenance get cut in the face of the consequences depicted in Figure 1? Most CFOs and CEOs I know would be quite alarmed by Figure 1. They do not need to be experts in object-oriented programming in order to take steps to mitigate the risks associated with slipping to the far right of the curve.

I believe the deeper answer to the question “why would maintenance get cut in the face of the consequences depicted in Figure 1?” was given by John Seely Brown in his 2009 presentation The Big Shift: The Mutual Decoupling of Two Sets of Disruptions – One in Business and One in IT. Brown points out five alarming facts in his presentation:

  1. The return on assets (ROA) for U.S. firms has steadily fallen to almost one-quarter of 1965 levels.
  2. Similarly, the ROA performance gap between corporate winners and losers has increased over time, with the “winners” barely maintaining previous performance levels while the losers experience rapid performance deterioration.
  3. U.S. competitive intensity has more than doubled during that same time [i.e. the US has become twice as competitive – IG].
  4. The average lifetime of S&P 500 companies [has declined steadily over this period].
  5. However, in those same 40 years, labor productivity has doubled – largely due to advances in technology and business innovation.

Discussion of the full-fledged analysis that Brown derives from these five facts is beyond the scope of this blog post [2]. However, one of the phenomena he highlights – “The performance paradox: ROA has dropped in the face of increasing labor productivity” – is, IMHO, at the root of the staggering IT debt we are staring at.

Put yourself in the shoes of your CFO or your CEO, weighing the five facts highlighted by Brown in the context of Highsmith’s technical debt curve. Unless you are one of the precious few winner companies, the only viable financial strategy you can follow is a margin strategy. You are very competitive (#3 above). You have already ridden the productivity curve (#5 above). However, growth is not demonstrable, or not economically feasible given the investment it takes (#1 and #2 above). Needless to say, just thinking about being dropped from the S&P 500 index sends a cold sweat down your spine. The only way left to satisfy the quarterly expectations of Wall Street is to cut, cut, and cut again anything that does not immediately contribute to your cashflow. You cut ongoing refactoring of code even if your CTO and CIO have explained the technical debt curve to you in no uncertain terms. You are not happy to do so, but you are willing to pay the price down the road. You are basically following a “survive to fight another day” strategy.

If you accept this explanation for the level of debt we are staring at, the core issue with respect to IT debt at the individual company level [3] is how “patient” (or “impatient”) investment capital is. Studies by Carlota Perez seem to indicate that we are entering a phase of the techno-economic cycle in which investment capital will shift from financial speculation toward (the more “patient”) production capital. While this shift is starting to happen, you have the opportunity to apply an “ounce of prevention is worth a pound of cure” strategy to the new code you will be developing.

My recommendation would be to combine technical debt measurements with software process change. The ability to measure technical debt through code analysis is a necessary but not sufficient condition for changing deep-rooted patterns. Once you institute a process policy like “stop the line whenever the level of technical debt rises,” you combine the “necessary” with the “sufficient” by tying the measurement to human behavior. A possible way to do so through a modified Agile/Scrum process is illustrated in Figure 2:

Figure 2: Process Control Model for Controlling Technical Debt

As you can see in Figure 2, you stop the line and convene an event-driven Agile meeting whenever the technical debt of a certain build exceeds that of the previous build. If ‘stopping the line’ with every such build is “too much of a good thing” for your environment, you can adopt statistical process control methods to gauge when the line should be stopped. (See Using 3σ Control Limits in Software Engineering for a discussion of the settings appropriate for your environment.)
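For readers who like to see the mechanics spelled out, here is a minimal sketch of such a gate as it might run in a build pipeline. It assumes you already obtain a technical debt reading per build from your code-analysis tool; the function name, the dollar-denominated readings, and the simple 3σ limit are my own illustrative assumptions rather than a prescription:

```python
import statistics

def technical_debt_gate(current_debt, debt_history, use_spc=False):
    """Decide whether to 'stop the line' for a new build.

    current_debt -- technical debt reading (e.g., in dollars) for the build under test
    debt_history -- readings for previous builds, oldest first
    use_spc      -- if True, stop only when the reading exceeds the 3-sigma
                    upper control limit of the history, rather than on any increase
    """
    if not debt_history:
        return False  # nothing to compare against yet

    if not use_spc:
        # The strict policy of Figure 2: stop whenever debt exceeds the previous build.
        return current_debt > debt_history[-1]

    # The softer, SPC-based policy: stop only on a statistically unusual jump.
    mean = statistics.mean(debt_history)
    sigma = statistics.pstdev(debt_history)
    return current_debt > mean + 3 * sigma
```

In a CI job, such a gate would simply fail the build, and thereby “stop the line,” whenever it returns True.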

An absolutely critical question this analysis does not cover is “But how do we pay back our $1 trillion debt?!” I will address this most important question in a forthcoming post, which draws upon the threads of this post as well as those of the preceding Part I.


[1] Kyte/Gartner define IT Debt as “the costs for bringing all the elements [i.e. business applications] in the [IT] portfolio up to a reasonable standard of engineering integrity, or replace them.” In essence, IT Debt differs from the definition of Technical Debt used in The Agile Executive in that it accounts for the possible costs associated with replacing an application. For example, the technical debt calculated through code analysis of a certain application might amount to $500K. In contrast, the cost of replacement might be $250K, $1M, or some other figure that is not necessarily related to intrinsic quality defects in the current code base.

[2] See Hagel, Brown and Davison: The Power of Pull: How Small Moves, Smartly Made, Can Set Big Things in Motion.

[3] As distinct from the core issue at the national level.

What 108M Lines of Code Tell Us


Results of the first annual report on application quality have just been released by CAST. The company analyzed 108M lines of code in 288 applications from 75 companies in various industries. In addition to the ‘usual suspects’ – COBOL, C/C++, Java, .NET – CAST included Oracle 4GL and ABAP in the report.

The CAST report is quite important in shedding light on the code itself. As explained in various posts in this blog, this transition from the process to its output is of paramount importance. Proficiency in the software process is a bit elusive. The ‘proof of the pudding’ is in the output of the software process. The ability to measure code quality enables effective governance of the software process. Moreover, Statistical Process Control methods can be applied to samples of technical debt readings. Such application is most helpful in striking a good balance in ‘stopping the line’ – neither too frequently nor too rarely.

According to CAST’s report, the average technical debt per line of code across all applications is $2.82. This figure, depressing as it might be, is reasonably consistent with a quick eyeballing of Nemo. The figure is somewhat lower than the average technical debt figure reported recently by Cutter for a sample of the Cassandra code. (The difference is probably attributable to the difference in sample sizes between the two studies.) What the data means is that the average business application in the CAST study is saddled with over $1M in technical debt!
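A back-of-the-envelope check makes the arithmetic behind that claim concrete: 108M lines of code spread over 288 applications comes to roughly 375,000 lines per application; at $2.82 per line, that works out to about $1.06M of technical debt for the average application in the study.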

An intriguing finding in the CAST report is the impact of size on the quality of COBOL applications. This finding is demonstrated in Figure 1. It has been quite a while since I last saw such a dramatic demonstration of the correlation between size and quality (again, for COBOL applications in the CAST study).

Source: First Annual CAST Worldwide Application Software Quality Study – 2010

One other intriguing finding in the CAST study is that “applications in the government sector show poor changeability.” CAST hypothesizes that the poor changeability might be due to a higher level of outsourcing in the government sector compared to the private sector. As pointed out by Amy Thorne in a recent comment posted on The Agile Executive, it might also be attributable to the incentive system:

… since external developers often don’t maintain the code they write, they don’t have incentives to write code that is low in technical debt…

Congratulations to Vincent Delaroche, Dr. Bill Curtis, Lev Lesokhin and the rest of the CAST team. We as an industry need more studies like this!

Schedule Constraints in the Devops Triangle


Last week’s post “The Devops Triangle” demonstrated the extension of Jim Highsmith‘s Agile Triangle to devops. The extension relied on adding compliance to the three traditional constraints of software development: scope, schedule, cost. A graphical representation of this extension is given in Figure 1.

Figure 1: Compliance as the Fourth Constraint in Devops Projects

This blog post examines how time/schedule should be governed in the devops context. It does so by building on the concluding observation in the previous post:

The Devops Triangle and the corresponding Tradeoff Matrix demonstrate how governance a la Agile can be extended to devops projects as far as compliance goes. The proposed governance framework however is incomplete in the following sense: schedule in devops projects can be a much more granular and stringent constraint than schedule in “dev only” projects.

For the schedule constraint in devops, I propose a schedule set. It consists of four components:

  • Lead Time or Engineering Time
  • Time to change
  • Time to deploy
  • Time to roll back

Lead Time/Engineering Time: These are customary metrics used in Kanban software development, as demonstrated in Figure 3.

Figure 3: The Engineering Time Metric Used by the BBC (David Joyce in the LSSC10 Conference)

Time to change: The amount of time it takes for the various stakeholders (e.g., dev, test, ops, customer support) to review the code to be deployed, approve its deployment and assign a time window for the deployment.

Time to deploy: The amount of time from (metaphorically speaking) pushing the Deploy “button” to completion of deployment.

Time to roll back: The amount of time it takes to undo a deployment. (Rigorous as the engineering practices and IT processes might be, the time to roll back a deployment can’t be ignored – it is a critical risk parameter.)

A graphical representation of these four schedule metrics together with the Devops Triangle is given in the figure below:

Figure 4: The Devops Triangle with a Schedule Set

Using hours as the common unit of measure, a typical schedule set could be {100, 48, 3, 2}. In this hypothetical example, it takes a little over 4 days to carry out the development of the code increment; 2 days to get approval for the change; 3 hours to deploy the code; and 2 hours to roll back.
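If you want to track the schedule set as data rather than as a slide, a minimal sketch of the corresponding structure might look as follows; the class and field names are my own illustrative assumptions, not part of the proposal itself:

```python
from dataclasses import dataclass

@dataclass
class ScheduleSet:
    """The four devops schedule components, all expressed in hours."""
    lead_time: float         # engineering time for the code increment
    time_to_change: float    # review, approval, and scheduling of the deployment
    time_to_deploy: float    # from pushing the Deploy "button" to completion
    time_to_rollback: float  # undoing the deployment if needed

# The hypothetical example from the text: {100, 48, 3, 2}
example = ScheduleSet(lead_time=100, time_to_change=48,
                      time_to_deploy=3, time_to_rollback=2)
```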

Whatever your specific schedule numbers might be, it is highly recommended that you apply value stream mapping (see Figure 5 below) to your schedule set. Based on the findings of the value stream mapping, apply statistical process control methods like those illustrated in Figure 3 to continuously improve both the mean and the variance of the four schedule components.

Figure 5: An Example of Value Stream Mapping (Source: Wikipedia entry on the subject)

Using 3σ Control Limits in Software Engineering


Source: Wikipedia; Control Chart

The July/August 2010 issue of IEEE Software features an article entitled “Monitoring Software Quality Evolution for Defects” by Hongyu Zhang and Sunghun Kim. The article is of interest to the software developer/tester/manager in quite a few ways. In particular, the authors report on their successful use of 3σ control limits in c-charts used to plot defects in software projects.

To put things in perspective, consider my recent assessment of the results accomplished by Quick Solutions (QSI) in two of their projects:

One to one-and-a-half standard deviations better than the mean might not seem like much to six-sigma black belts. However, in the context of typical results we see in the software industry, the QSI results are outstanding. I have not done the exact math on whether those results are superior to 95%, 97%, or 98% of the software projects in Michael Mah’s QSMA database, as the exact figure hardly matters when you achieve this level of excellence.

A complementary perspective is provided by Capers Jones in Estimating Software Costs: Bringing Realism to Estimating:

Another way of looking at six-sigma in a software context would be to achieve a defect-removal efficiency level of about 99.9999 percent. Since the average defect-removal efficiency level in the United States is only about 85 percent, and less than one project in 1000 has ever topped 98 percent, it can be seen that actual six-sigma results are beyond the current state of the art.

The setting of control limits is, of course, quite a different thing from the actual defect-removal efficiency numbers reported by Jones for the US and the very low number of defects reported by Mah for QSI. Having said that, driving a continuous improvement process through the use of 3σ control limits is the best recipe for eventually reaching six-sigma results. For example, one could drive the development process by using cyclomatic complexity per Java class as the quality characteristic in the figure at the top of this post. In this figure, a cyclomatic complexity reading higher than 10.860 (the Upper Control Limit) would indicate a need to “stop the line” and attend to reducing complexity before resuming work on functions and features.
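For readers who want to experiment with the mechanics, here is a minimal sketch of the c-chart arithmetic, assuming you already collect a defect count per period (build, sprint, or week). The function names are illustrative, and the Poisson-based limits are the standard c-chart formulas rather than anything taken from the Zhang and Kim article:

```python
import math

def c_chart_limits(defect_counts):
    """3-sigma control limits for a c-chart of per-period defect counts.

    Defect counts are modeled as Poisson, so the standard deviation is the
    square root of the mean: the limits are c_bar +/- 3 * sqrt(c_bar).
    """
    c_bar = sum(defect_counts) / len(defect_counts)
    spread = 3 * math.sqrt(c_bar)
    return max(0.0, c_bar - spread), c_bar, c_bar + spread

def out_of_control_periods(defect_counts):
    """Indices of the periods whose defect count falls outside the limits."""
    lcl, _, ucl = c_chart_limits(defect_counts)
    return [i for i, count in enumerate(defect_counts) if count < lcl or count > ucl]
```

The same arithmetic works for any countable quality characteristic you choose to plot, which is what makes the “stop the line” trigger tunable to your environment.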

Coming on the heels of the impressive results reported by David Joyce on the use of statistical process control (SPC) techniques by the BBC, the article by Zhang and Kim is another encouraging report on the successful application of manufacturing techniques to software (and to knowledge work in general). I am not at liberty to quote from this just-published IEEE article, but here is the abstract:

Quality control charts, especially c-charts, can help monitor software quality evolution for defects over time. c-charts of the Eclipse and Gnome systems showed that for systems experiencing active maintenance and updates, quality evolution is complicated and dynamic. The authors identify six quality evolution patterns and describe their implications. Quality assurance teams can use c-charts and patterns to monitor quality evolution and prioritize their efforts.