The Agile Executive

Making Agile Work


What 108M Lines of Code do not Tell Us


Source: Nemo

Coming on the heels of Gartner’s research note projecting $1 trillion in IT Debt by 2015, CAST’s study provided a more granular view of the debt, estimating an average of over $1 million in technical debt per application across a sample of 288 applications comprising some 108 million lines of code. Taken together, the two studies suggest that the situation examined at the micro-level is quite consistent with the state of affairs estimated and projected at the macro-level.

My hunch is that the gravity of the situation from a software quality and maintenance perspective is actually masked by the efforts of IT staffs to compensate for programming problems through operational excellence. For example, carefully staged deployment and quick rollback often enable coping with defects that could/should have been handled through higher test coverage, lower complexity or a more acceptable level of code duplication.

Part of the reason that the masking effects of IT staffs are not always fully appreciated is that they are embedded in the business design of IT outsourcing companies. The company to which you outsourced your IT is ‘making a bet’ it can run your IT better than you can. It often succeeds in so doing. The unresolved defects in your old code, plus those that accumulated over time through software decay, have not necessarily been fixed. Rather, the manifestations of these defects are handled operationally in a more efficient manner.

Think again if your visceral reaction to the technical debt situation described in the Gartner research note and the CAST study is of the “This can’t possibly be true” variety. It is what it is – just take a quick look at Nemo to see representative technical debt data with your own eyes. And, as indicated in this post, it might even be worse than it looks. As Gartner puts it:

The results of such an [IT Debt] assessment will be, at best, unsettling and, at worst, truly shocking.

Cutter’s Technical Debt Assessment and Valuation Service


Source: Cutter Technical Debt and Valuation Service

The Cutter Consortium has announced the availability of the Technical Debt Assessment and Valuation Service. The service combines static code analytics with dynamic program analytics to give the client “x-rays” of the software being examined at any desired granularity – from the whole project portfolio to a single instruction. It breaks down technical debt into the areas of coverage, complexity, duplication, violations and comments. Clients get an aggregate dollar figure for “paying back” debt that they can then plug into their financial models to objectively analyze their critical software assets. Based on these metrics, they can make the best decisions about their ongoing strategy for the software development effort under scrutiny.
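To make the idea of an aggregate dollar figure concrete, here is a minimal sketch of how findings in the five areas might roll up into a single “pay back” number. The finding counts and per-finding remediation costs below are illustrative assumptions, not figures or formulas from the Cutter service:

```python
# Illustrative sketch only: the finding counts and per-issue remediation costs
# below are assumed for demonstration; they are not the Cutter service's model.

# Findings broken down by the five areas named above.
findings = {
    "coverage":    1200,   # untested methods
    "complexity":   340,   # overly complex methods
    "duplication":  510,   # duplicated blocks
    "violations":  2800,   # coding-rule violations
    "comments":     900,   # undocumented methods
}

# Assumed average cost (in dollars) to remediate one finding in each area.
cost_per_finding = {
    "coverage":    60,
    "complexity": 150,
    "duplication": 90,
    "violations":  25,
    "comments":    10,
}

technical_debt = sum(findings[area] * cost_per_finding[area] for area in findings)
print(f"Aggregate technical debt to 'pay back': ${technical_debt:,}")
```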

This new service is an important addition to the enlightened software governance framework that Jim Highsmith, Michael Mah and I have been thinking about and contributing to for some time now (see Beyond Scope, Schedule and Cost: Measuring Agile Performance and Quantifying the Start Afresh Option). The heart of both the technical debt service and the enlightened governance framework is captured by the following words from the press release:

Executives in charge of software governance have long dealt with two kinds of dollar figures: One, the cost of producing and maintaining the software; and two, the value of the software, which is usually expressed in terms of the net present value associated with the expected value stream the product will generate. Now we can deal with technical debt in the same quantitative manner, regardless of the software methods a company uses.

When expressed in terms of dollars, technical debt ties neatly into value vis-à-vis cost considerations. For a “well behaved” software project, three factors — value, cost, and technical debt — have to satisfy the equation Value >> Cost > Technical Debt. Monitoring the balance between value, cost, and technical debt on an ongoing basis is an effective way for organizations to stay on top of their real progress, and for stakeholders and investors to ensure their investment is sound.
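As a rough illustration, the governance check implied by the inequality can be expressed in a few lines of code. The 5x value-to-cost ratio used here to stand in for “>>” is an assumption for illustration only; pick whatever threshold fits your financial model:

```python
# Minimal sketch of the Value >> Cost > Technical Debt check described above.
# The 5x value-to-cost ratio is an assumed stand-in for ">>", not a figure
# from the post.

def well_behaved(value: float, cost: float, debt: float,
                 value_to_cost_ratio: float = 5.0) -> bool:
    """Return True if Value >> Cost > Technical Debt holds for a project."""
    return value >= value_to_cost_ratio * cost and cost > debt

# Example: $12M expected value stream, $2M cost, $350K technical debt.
print(well_behaved(12_000_000, 2_000_000, 350_000))   # True
print(well_behaved(3_000_000, 2_000_000, 2_500_000))  # False: debt exceeds cost
```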

By boiling down technical debt to dollars and tying it to cost and value, the service enables a metrics-driven governance framework for the use of five major constituencies, as follows:

Technical debt assessments and valuation can specifically help CIOs ensure alignment of software development with IT operations; give CTOs early warning signs of impending project trouble; assure those involved in due diligence for M&A activity that the code being acquired will adapt to meet future needs; enable CEOs to govern the software development process effectively; and provide venture capitalists who need to make informed investment decisions with critical information as to whether the software under consideration constitutes an asset or a liability.

It should finally be pointed out that the technical debt assessment service and the governance framework it enables are applicable to any software method. They can be used to:

  • Govern a heterogeneous environment in which multiple software methods are used
  • Make apples-to-apples comparisons between disparate software projects
  • Assess project performance vis-à-vis industry norms

Forthcoming Cutter Executive Reports, Executive Updates and Email Advisors on the technical debt service are restricted to Cutter clients. As appropriate, I will publish the latest and greatest news on the subject in the Cutter Blog (which is an open forum I highly recommend).

Acknowledgements: I would like to wholeheartedly thank the following colleagues for inspiring, enlightening and supporting me during the preparation of the service:

  • Karen Coburn
  • Jennifer Flaxman
  • Jonathon Golden
  • John Heintz
  • Jim Highsmith
  • Ken Collier
  • Kim Leonard
  • Kara Letourneau
  • Michael Mah
  • Anne Mullaney
  • Chris Sterling
  • Cindy Swain
  • Sarah Wiesbrock

Measuring Agile Success Rate the Right Way


Much has been said recently about the success/failure rate of Agile projects. In particular, a debate has arisen around the success rate of Scrum vis-à-vis Kanban. For example, in a post entitled Some Day Kanban will fail 75% of the Time, colleague Jurgen Appelo states the following:

Unfortunately, some people arguing against Scrum include these ScrumBut teams in their evaluations of the “high failure rate” of Scrum. They love quoting that “at least 75 percent of Scrum implementations fail.” And I think “Yes of course, 75% fails when that includes the teams that don’t understand what they’re doing.”

I would like to add one other “dimension” to the discussion: boundary conditions.

Any Agile initiative – Crystal, Scrum, Kanban, etc. – typically starts from an existing body of code that was developed using a Waterfall method or no method at all. Even brand new projects produce code that invariably interacts with other software components that are already deployed, warts and all. Pristine environments with no technical debt for the Agile initiative to deal with are rare.

Like it or not, the Agile initiative is saddled from the outset with a certain amount of technical debt. Code has been duplicated, rules violated, complexity has run amok, etc. A typical enterprise software team starts with hundreds of thousands of dollars in technical debt, if not millions. This debt needs to be “paid back” – probably not overnight, but certainly over a period of time. As illustrated by the following figure from Jim Highsmith, things get ugly if the debt is not paid back over an extended period of time.

Figure: “Can You Afford the Software You Are Developing?” (Jim Highsmith)

The evaluation of success or failure of the Agile initiative needs to take technical debt into account. A team of 50 with an accrued technical debt of $100,000 (a mere $2,000 per person) has a much easier job on its hands transitioning to Agile than a similar-size team starting with $1M ($20,000 per person) in technical debt.

Whatever criteria you use to determine whether an Agile initiative has been successful, I would suggest the following boundary condition needs to be satisfied:

Technical debt at the end of the project/initiative must be significantly lower than technical debt at the start of the project.

Use the techniques outlined in Using Credit Limits to Constrain Development on Margin to calculate technical debt before and after. In addition to qualifying your Agile success, quantifying technical debt will do a lot towards improving the quality of your software.
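Once you have the before/after figures in hand, checking the boundary condition is straightforward. The sketch below is illustrative only; the 25% reduction standing in for “significantly lower” is an assumed threshold, not a number from this post:

```python
# Hedged sketch of the boundary condition above. "Significantly lower" is
# modeled as an assumed 25% reduction; choose a threshold that fits your context.

def agile_transition_passes(debt_at_start: float, debt_at_end: float,
                            required_reduction: float = 0.25) -> bool:
    """True if technical debt dropped by at least the required fraction."""
    return debt_at_end <= debt_at_start * (1.0 - required_reduction)

print(agile_transition_passes(1_000_000, 600_000))  # True: 40% reduction
print(agile_transition_passes(1_000_000, 950_000))  # False: only 5% reduction
```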

A Question of Correctness


Colleague John Heintz brought up a question about the post Uncertainty, Complexity, Correctness. To quote John:

How are “uncertainty” and “correctness” related?

Doesn’t uncertainty mean my definition of correct may change? Could I have totally correct direction, but still have uncertainty?

What gives?!?

John is referring to the way I try to pinpoint the exact “pain” an executive considering an Agile implementation expects Agile to address. Specifically:

Agile is all about effectively addressing uncertainty, I say. I stress that Agile does not address complexity per se. It might indirectly help with complexity if it leads you towards deeper thinking about Complex Adaptive Systems. For example, you might consider evolving the product architecture in the course of your Agile project instead of pre-defining it. However, Agile is not a “medicine” for complexity pains.

Nor is Agile about correctness. A hyper-productive Agile team could actually go fast nowhere implementing a poorly conceived product. The “real time” feedback loops of the project team might help uncover that a product is mis-conceived. However, independent of the team feedback, you still need to determine what correctness means to you and how you would assess it as the product evolves.

The answer to John’s good question is that correctness is a matter of the level of abstraction, as defined in Hardware Engineering: A DEC View of Hardware Systems Design. Suppose you are coding a service enabling passengers to check in for flights. At the functional level, the correctness of the coded service is fairly unambiguous and (hopefully) will be established through testing. The check-in service might be correct functionally, yet still subject to change. For example, an airline could aspire to enable its passengers to check in through any mobile device whose volume of sales exceeds one million units. Such an aspiration will necessitate fairly frequent changes to support new mobile devices as they cross that threshold of popularity.
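To make the functional-level point concrete, here is a hypothetical sketch of establishing correctness through testing. The CheckInService class and its API are invented purely for illustration; they are not taken from any real system discussed in the post:

```python
# Hypothetical example of establishing functional correctness through testing.
# CheckInService and its API are invented for illustration only.
import unittest

class CheckInService:
    def __init__(self, bookings):
        self.bookings = bookings          # booking reference -> passenger name

    def check_in(self, booking_ref, name):
        """Return True if the passenger name matches the booking reference."""
        return self.bookings.get(booking_ref) == name

class CheckInFunctionalTest(unittest.TestCase):
    def test_valid_booking_checks_in(self):
        service = CheckInService({"ABC123": "Pat Jones"})
        self.assertTrue(service.check_in("ABC123", "Pat Jones"))

    def test_unknown_booking_is_rejected(self):
        service = CheckInService({"ABC123": "Pat Jones"})
        self.assertFalse(service.check_in("ZZZ999", "Pat Jones"))

if __name__ == "__main__":
    unittest.main()
```

Tests of this kind pin down correctness at the functional level; they say nothing about whether the service will remain correct as the airline’s aspirations (e.g., the set of supported mobile devices) change over time.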

So, yes: one could have a totally correct direction, but still have uncertainty.

Written by israelgat

May 1, 2009 at 5:11 pm