The Agile Executive

Making Agile Work


Cutter’s Technical Debt Assessment and Valuation Service

Source: Cutter Technical Debt and Valuation Service

The Cutter Consortium has announced the availability of the Technical Debt Assessment and Valuation Service. The service combines static code analytics with dynamic program analytics to give the client “x-rays” of the software being examined at any desired granularity – from the whole project portfolio to a single instruction. It breaks down technical debt into the areas of coverage, complexity, duplication, violations and comments. Clients get an aggregate dollar figure for “paying back” debt that they can then plug into their financial models to objectively analyze their critical software assets. Based on these metrics, they can make the best decisions about their ongoing strategy for the software development effort under scrutiny.
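By way of illustration, the following minimal Python sketch shows how per-category findings might be rolled up into a single “paying back” dollar figure. The five category names come from the paragraph above; the per-finding remediation costs and the finding counts are hypothetical placeholders, not Cutter’s actual model.

    # Roll per-category technical-debt findings up into one dollar figure.
    # The categories come from the service description above; the costs and
    # counts are hypothetical illustrations, not Cutter's actual model.

    COST_PER_FINDING = {
        "coverage": 150,     # e.g. writing a missing unit test
        "complexity": 400,   # e.g. refactoring an overly complex method
        "duplication": 250,  # e.g. consolidating a duplicated code block
        "violations": 100,   # e.g. fixing a static-analysis rule violation
        "comments": 50,      # e.g. documenting an undocumented public API
    }

    def technical_debt_dollars(findings):
        """Aggregate per-category finding counts into one dollar figure."""
        return sum(COST_PER_FINDING[category] * count
                   for category, count in findings.items())

    # Example: finding counts as they might come out of code analysis.
    project = {"coverage": 320, "complexity": 45, "duplication": 80,
               "violations": 510, "comments": 200}
    print(f"Estimated technical debt: ${technical_debt_dollars(project):,}")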

This new service is an important addition to the enlightened software governance framework that Jim Highsmith, Michael Mah and I have been thinking about and contributing to for some time now (see Beyond Scope, Schedule and Cost: Measuring Agile Performance and Quantifying the Start Afresh Option). The heart of both the technical debt service and the enlightened governance framework is captured by the following words from the press release:

Executives in charge of software governance have long dealt with two kinds of dollar figures: One, the cost of producing and maintaining the software; and two, the value of the software, which is usually expressed in terms of the net present value associated with the expected value stream the product will generate. Now we can deal with technical debt in the same quantitative manner, regardless of the software methods a company uses.

When expressed in terms of dollars, technical debt ties neatly into value vis-à-vis cost considerations. For a “well-behaved” software project, the three factors of value, cost, and technical debt have to satisfy the relation Value >> Cost > Technical Debt. Monitoring the balance between value, cost, and technical debt on an ongoing basis is an effective way for organizations to stay on top of their real progress, and for stakeholders and investors to ensure their investment is sound.
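To make the relation concrete, here is a minimal Python sketch of such a health check. Reading Value >> Cost as “value is at least three times cost” is my own illustrative assumption, not part of the published framework.

    # Minimal health check for the Value >> Cost > Technical Debt relation.
    # The 3x multiple is a hypothetical reading of ">>", for illustration only.

    VALUE_TO_COST_MULTIPLE = 3.0

    def is_well_behaved(value, cost, technical_debt):
        """True when Value >> Cost > Technical Debt holds for a project."""
        return value >= VALUE_TO_COST_MULTIPLE * cost and cost > technical_debt

    # Example: $12M expected value, $3M cost, $750K of technical debt.
    print(is_well_behaved(12_000_000, 3_000_000, 750_000))  # True
    print(is_well_behaved(4_000_000, 3_000_000, 750_000))   # False: value not >> cost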

By boiling down technical debt to dollars and tying it to cost and value, the service enables a metrics-driven governance framework for the use of five major constituencies, as follows:

Technical debt assessments and valuation can specifically:

  • Help CIOs ensure alignment of software development with IT operations
  • Give CTOs early warning signs of impending project trouble
  • Assure those involved in due diligence for M&A activity that the code being acquired will adapt to meet future needs
  • Enable CEOs to effectively govern the software development process
  • Provide venture capitalists, who need to make informed investment decisions, with critical information as to whether the software under consideration constitutes an asset or a liability

It should finally be pointed out that the technical debt assessment service and the governance framework it enables are applicable to any software method. They can be used to:

  • Govern a heterogeneous environment in which multiple software methods are used
  • Make apples-to-apples comparisons between disparate software projects
  • Assess project performance vis-à-vis industry norms

Forthcoming Cutter Executive Reports, Executive Updates and Email Advisors on the technical debt service are restricted to Cutter clients. As appropriate, I will publish the latest and greatest news on the subject in the Cutter Blog (which is an open forum I highly recommend).

Acknowledgements: I would like to wholeheartedly thank the following colleagues for inspiring, enlightening and supporting me during the preparation of the service:

  • Karen Coburn
  • Jennifer Flaxman
  • Jonathon Golden
  • John Heintz
  • Jim Highsmith
  • Ken Collier
  • Kim Leonard
  • Kara Letourneau
  • Michael Mah
  • Anne Mullaney
  • Chris Sterling
  • Cindy Swain
  • Sarah Wiesbrock

Standish Group Chaos Reports Revisited

The Standish Group “Chaos” reports have been mentioned in various posts in this blog and elsewhere. The following figure from the 2002 study is quite representative of the data provided in the Standish annual surveys of the state of software projects:

Source: Standish Group

The January/February 2010 issue of IEEE Software features an article entitled “The Rise and Fall of the Chaos Report Figures.” The authors, J. Laurenz Eveleens and Chris Verhoef of VU University Amsterdam, give the following summary of their findings:

In 1994, Standish published the Chaos report that showed a shocking 16 percent project success. This and renewed figures by Standish are often used to indicate that project management of application software development is in trouble. However, Standish’s definitions have four major problems. First, they’re misleading because they’re based solely on estimation accuracy of cost, time, and functionality. Second, their estimation accuracy measure is one-sided, leading to unrealistic success rates. Third, steering on their definitions perverts good estimation practice. Fourth, the resulting figures are meaningless because they average numbers with an unknown bias, numbers that are introduced by different underlying estimation processes. The authors of this article applied Standish’s definitions to their own extensive data consisting of 5,457 forecasts of 1,211 real-world projects, totaling hundreds of millions of Euros. The Standish figures didn’t reflect the reality of the case studies at all.
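The one-sidedness critique can be seen in miniature with the following toy Python sketch, which is my own illustration rather than the authors’ analysis: a Standish-style rule counts any project that comes in at or under its estimate as a success, so a heavily padded estimate scores perfectly, whereas a symmetric accuracy measure flags it. The 25 percent tolerance is an arbitrary choice.

    # One-sided rule: only overruns count against you, so gross
    # overestimation still registers as "success".
    def standish_style_success(estimate, actual):
        return actual <= estimate

    # Symmetric rule: the estimate must land within 25% of the actual
    # outcome, penalizing padding and overruns alike.
    def symmetric_accuracy_ok(estimate, actual, tolerance=0.25):
        return abs(estimate - actual) / actual <= tolerance

    # A project padded to double its real cost:
    estimate, actual = 2_000_000, 1_000_000
    print(standish_style_success(estimate, actual))  # True: counted as a success
    print(symmetric_accuracy_ok(estimate, actual))   # False: the estimate was poor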

I will leave it to the reader to draw his or her own conclusions with respect to the differences between the Standish Group and the authors. I would, however, quote Jim Highsmith’s deep insight on the value system within whose context we measure performance. The following excerpt is from Agile Project Management: Creating Innovative Products:

If we are ultimately to gain the full range of benefits of agile methods, if we are ultimately to grow truly agile, innovative organizations, then, as these stories show, we will have to alter our performance management systems…. We have to be as innovative with our measurement systems as we are with our development methodology.

See pp. 335-358 of Jim’s book for details on transforming performance management systems. His bottom line is deceptively simple:

The Standish data are NOT a good indicator of poor software development performance. However, they ARE an indicator of systemic failure of our planning and measurement processes.

Jim is referring to the standard definition of project “success”: on time, on budget, all specified features.

Later this month, I will be working with a client to carry out the performance management ideas articulated by Jim. Jim indicated he has a customer engagement in February where he expects to learn about interesting ways in which the client is using the Agile Triangle (which is conceptually quite related to the fundamental question of what to measure). Client confidentiality permitting, I am confident we will soon be able to brief readers of The Agile Executive on our progress.