Posts Tagged ‘Gartner Group’
Y2K vis-a-vis IT Debt
Andrew Dailey of MGI Research and Andy Kyte of Gartner Group kindly did some digging for me on the total amount of money that was spent on Y2K. Here is the bottom line from Andy concluding our email thread on the subject of Y2K expenditures:
I have remained comfortable with our estimate of $300B to $600B.
In other words, it will take an effort comparable to the Y2K effort at the turn of the century to ‘pay back’ the current IT Debt.
____________________________________________________________________________________________________
Considering modernization of your legacy code? Let me know if you would like assistance in monetizing your technical debt, devising plans to reduce it and governing the debt reduction process. Click Services for details.
____________________________________________________________________________________________________
The Real Cost of One Trillion Dollars in IT Debt: Part II – The Performance Paradox
Some of the business ramifications of the $1 trillion in IT debt were explored in the first post of this two-part analysis. This second post focuses on the "an ounce of prevention is worth a pound of cure" aspects of IT debt. In particular, it proposes an explanation of why prevention was so often neglected in the US over the past decade, and very possibly longer. This explanation is not meant to dwell on the past. Rather, it studies the patterns of the past in order to provide guidance on what you could, and should, do in the future to rein in technical debt.
The prevention vis-a-vis cure trade-off in software was illustrated by my colleague and friend Jim Highsmith in the following figure:
Figure 1: The Technical Debt Curve
As Jim astutely points out, "once on far right of curve all choices are hard." My experience, as well as that of various Cutter colleagues, has been that it is indeed very hard. The reason is simple: on the far right, the software controls you more than you control it. The manifestations of technical debt [1], in the form of pressing customer problems in the production environment, force you into a largely reactive mode of operation. This reactive mode of operation is prone to a high error-injection rate: you introduce new bugs while you fix old ones. Consequently, progress is agonizingly slow and painful, often characterized by "never-ending" testing periods.
In Measure and Manage Your IT Debt, Gartner's Andy Kyte put his finger on the mechanics that lead to the accumulation of technical debt: "when budgets are tight, maintenance gets cut." While I do not doubt Andy's observation, it does not answer a deeper question: why would maintenance get cut in the face of the consequences depicted in Figure 1? Most CFOs and CEOs I know would be quite alarmed by Figure 1. They do not need to be experts in object-oriented programming to take steps to mitigate the risks associated with slipping to the far right of the curve.
I believe the deeper answer to the question “why would maintenance get cut in the face of the consequences depicted in Figure 1?” was given by John Seely Brown in his 2009 presentation The Big Shift: The Mutual Decoupling of Two Sets of Disruptions – One in Business and One in IT. Brown points out five alarming facts in his presentation:
- The return on assets (ROA) for U.S. firms has steadily fallen to almost one-quarter of 1965 levels.
- Similarly, the ROA performance gap between corporate winners and losers has increased over time, with the “winners” barely maintaining previous performance levels while the losers experience rapid performance deterioration.
- U.S. competitive intensity has more than doubled during that same time [i.e. the US has become twice as competitive – IG].
- The average lifetime of S&P 500 companies [has declined steadily over this period].
- However, in those same 40 years, labor productivity has doubled – largely due to advances in technology and business innovation.
Discussion of the full-fledged analysis that Brown derives from these five facts is beyond the scope of this blog post [2]. However, one of the phenomena he highlights – "The performance paradox: ROA has dropped in the face of increasing labor productivity" – is, IMHO, at the root of the staggering IT debt we are staring at.
Put yourself in the shoes of your CFO or CEO, weighing the five facts highlighted by Brown in the context of Highsmith's technical debt curve. Unless you are one of the precious few winner companies, the only viable financial strategy you can follow is a margin strategy. You are very competitive (#3 above). You have already ridden the productivity curve (#5 above). However, growth is not demonstrable, or not economically feasible given the investment it takes (#1 & #2 above). Needless to say, just the thought of being dropped from the S&P 500 index sends a cold shiver down your spine. The only way left to satisfy the quarterly expectations of Wall Street is to cut, cut and cut again anything that does not immediately contribute to your cash flow. You cut ongoing refactoring of code even if your CTO and CIO have explained the technical debt curve to you in no uncertain terms. You are not happy to do so, but you are willing to pay the price down the road. You are basically following a "survive to fight another day" strategy.
If you accept this explanation for the level of debt we are staring at, the core issue with respect to IT debt at the individual company level [3] is how "patient" (or "impatient") investment capital is. Studies by Carlota Perez seem to indicate we are entering a phase of the techno-economic cycle in which investment capital will shift from financial speculation toward (the more "patient") production capital. While this shift is starting to happen, you have the opportunity to apply an "ounce of prevention is worth a pound of cure" strategy with respect to the new code you will be developing.
My recommendation is to combine technical debt measurements with software process change. The ability to measure technical debt through code analysis is a necessary but not sufficient condition for changing deep-rooted patterns. Once you institute a process policy like "stop the line whenever the level of technical debt rises," you combine the "necessary" with the "sufficient" by tying the measurement to human behavior. A possible way to do so, through a modified Agile/Scrum process, is illustrated in Figure 2:
Figure 2: Process Control Model for Controlling Technical Debt
As you can see in Figure 2, you stop the line and convene an event-driven Agile meeting whenever the technical debt of a certain build exceeds that of the previous build. If ‘stopping the line’ with every such build is “too much of a good thing” for your environment, you can adopt statistical process control methods to gauge when the line should be stopped. (See Using 3σ Control Limits in Software Engineering for a discussion of the settings appropriate for your environment.)
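To make the "stop the line" policy concrete, here is a minimal sketch of such a build gate, assuming your code-analysis tool emits a single technical-debt figure per build. The history file name, the strict previous-build comparison and the 3σ variant are illustrative choices of mine, not features of any particular tool.

```python
# Minimal sketch of a 'stop the line' gate on technical debt in a CI pipeline.
# Assumes the code-analysis step emits one debt figure per build; the history
# file, the strict comparison and the 3-sigma variant are illustrative only.

import json
import statistics
import sys

HISTORY_FILE = "debt_history.json"  # hypothetical store of past per-build debt figures


def load_history(path=HISTORY_FILE):
    try:
        with open(path) as f:
            return json.load(f)  # list of debt figures, oldest first
    except FileNotFoundError:
        return []


def should_stop_the_line(current_debt, history, use_control_limits=False):
    """Return True if the build warrants an event-driven Agile meeting."""
    if not history:
        return False
    if not use_control_limits:
        # Strict policy: stop whenever debt exceeds the previous build.
        return current_debt > history[-1]
    # Statistical process control variant: stop only when the current figure
    # exceeds the recent mean by more than three standard deviations.
    mean = statistics.mean(history)
    sigma = statistics.pstdev(history)
    return current_debt > mean + 3 * sigma


if __name__ == "__main__":
    current = float(sys.argv[1])  # debt figure produced by the analysis step
    history = load_history()
    if should_stop_the_line(current, history, use_control_limits=False):
        print("Technical debt increased - stopping the line.")
        sys.exit(1)  # failing the build is what triggers the meeting
    history.append(current)
    with open(HISTORY_FILE, "w") as f:
        json.dump(history, f)
```

The strict comparison implements the policy exactly as stated above; flipping use_control_limits to True gives the statistical process control variant for environments where stopping on every uptick would indeed be "too much of a good thing".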
An absolutely critical question this analysis does not cover is “But how do we pay back our $1 trillion debt?!” I will address this most important question in a forthcoming post which draws upon the threads of this post plus those in the preceding Part I.
Footnotes:
[1] Kyte/Gartner define IT Debt as "the costs for bringing all the elements [i.e. business applications] in the [IT] portfolio up to a reasonable standard of engineering integrity, or replace them." In essence, IT Debt differs from the definition of Technical Debt used in The Agile Executive in that it accounts for the possible costs associated with replacing an application. For example, the technical debt calculated through code analysis of a certain application might amount to $500K. In contrast, the cost of replacement might be $250K, $1M or some other figure that is not necessarily related to intrinsic quality defects in the current code base.
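As a toy numeric illustration of the distinction drawn in this footnote, the sketch below treats IT Debt per application as the cheaper of remediation and replacement; the min() rule and the figures are my own simplification for illustration, not Gartner's published formula.

```python
# Toy illustration of footnote [1]: Technical Debt vs. IT Debt per application.
# The figures and the min() rule are my own simplification, not Gartner's formula.

applications = {
    # app name: (technical debt from code analysis, estimated replacement cost)
    "app_a": (500_000, 1_000_000),
    "app_b": (500_000, 250_000),
}

for name, (technical_debt, replacement_cost) in applications.items():
    # Technical Debt (as used in The Agile Executive) reflects only the cost of
    # remediating the current code base; IT Debt, in this simplified reading,
    # allows the cheaper of remediation or replacement.
    it_debt = min(technical_debt, replacement_cost)
    print(f"{name}: technical debt ${technical_debt:,}, IT debt ${it_debt:,}")
```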
[2] See Hagel, Brown and Davison: The Power of Pull: How Small Moves, Smartly Made, Can Set Big Things in Motion.
[3] As distinct from the core issue at the national level.
Three Criteria for Qualifying as Agile
Agile methods have been gaining popularity to the extent that one sees the term Agile used beyond the domain of software methods. Agile Infrastructure and Agile Business Service Management have been used in this blog and elsewhere. Recently I have seen the term used in the domain of Business Process Management (BPM). For example, a presentation entitled Best Practices for Agile BPM will be delivered at the forthcoming Gartner Group Business Process Management Summit 2010.
I have no doubt the term Agile will be adopted in various fields. Using BPM as an example, I propose the following three criteria to differentiate between agile (small A) and Agile (capital A):
- Beyond software: A software team carrying out a BPM initiative might use Agile methods. This fact by itself does not suffice to make the initiative Agile BPM.
- Methodical specificity: Roles, forums/ceremonies and artifacts for the BPM initiative must be specified. Folks might already be applying Lean, TOC or other approaches to BPM, but a definitive Agile BPM method has not crystallized yet.
- Values: Adherence in spirit to the four values of the Agile Manifesto. Replace the word "software" with "product" in the manifesto (just two occurrences!) and you get a universal value statement that is not restricted to "just" software. It applies to BPM as well as to any other field in which products are produced and used.
You might be impressively agile in what you do, but that does not necessarily make you Agile. The pace at which you do things must be anchored in a broader perspective that incorporates customers and employees. A forthcoming post entitled Indivisibility of the Principles of Operation will explore the connection between the Agile values (plural) you hold and the business value (singular) you generate.
Software Moulding Methods
Christian Sarkar and I started an e-dialog on Agile Business Service Management in BSMReview. Both of us are keenly interested in exploring the broad application of Agile BSM in the context of Gartner’s Top Ten Technologies for 2010. To quote Christian:
Israel, where do agile practices fit into this? Just about everywhere as well?
The short answer to Christian’s good question is as follows:
I consider the principles articulated in the Manifesto for Agile Software Development (http://agilemanifesto.org) universal and timeless. They certainly apply just about everywhere. As a matter of fact, we are seeing the Manifesto principles applied more and more to the development of hardware and content.
The fascinating thing about what we are witnessing (see, for example: Scale in London – Part II, An Omen in Chicago, Depth in Seattle, and Richness and Vibrancy in Boston) is the evolution of the classical problem of managing multiple Software Development Life Cycles. Instead of dealing with one 'material' (software), we handle multiple 'materials' (software, hardware, content, business initiative, etc.) of dissimilar characteristics. The net effect is as follows:
The challenge then becomes the simultaneous and synchronized management of two or more ‘substances’ (e.g. software and content; software, content and business initiative; or, software, hardware, content and business initiative) of different characteristics under a unified process. It is conceptually fairly similar to the techniques used in engineering composite materials.
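As a toy illustration of what synchronized management of dissimilar 'materials' might look like in pure scheduling terms, the sketch below assumes hypothetical iteration lengths for each material and computes where their cadences line up; all numbers are invented for illustration.

```python
# Toy sketch of synchronizing dissimilar 'materials' under a unified process.
# All iteration lengths (in weeks) are hypothetical, chosen only for illustration.

from math import lcm

cadences = {
    "software": 2,   # two-week sprints
    "content": 1,    # weekly editorial cycle
    "hardware": 12,  # quarterly board spin
}

# A full-system integration point falls where all cadences line up.
sync_point = lcm(*cadences.values())
print(f"Full-system integration every {sync_point} weeks")

for material, weeks in cadences.items():
    print(f"{material}: {sync_point // weeks} iterations between integration points")
```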
Ten years have passed since Evans and Wurster demonstrated the effects of separating the virtual from the physical. As software becomes pervasive, we are now starting to explore putting the virtual back together with the physical through a new generation of software moulding methods.