The Agile Executive

Making Agile Work

Archive for the ‘Technical Debt’ Category

What 108M Lines of Code Tell Us

with 16 comments

Results of the first annual report on application quality have just been released by CAST. The company analyzed 108M lines of code in 288 applications from 75 companies in various industries. In addition to the ‘usual suspects’ –  COBOL, C/C++, Java, .NET – CAST included Oracle 4GL and ABAP in the report.

The CAST report is quite important in shedding light on the code itself. As explained in various posts in this blog, this transition from the process to its output is of paramount importance. Proficiency in the software process is a bit elusive; the ‘proof of the pudding’ is in the output of the software process. The ability to measure code quality enables effective governance of the software process. Moreover, Statistical Process Control methods can be applied to samples of technical debt readings. Such application is most helpful in striking a good balance in ‘stopping the line’ – neither too frequently nor too rarely.
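To make the ‘stopping the line’ idea concrete, here is a minimal, illustrative sketch of applying control limits to periodic technical-debt-per-line readings; the sample values and the three-sigma rule are assumptions, not data from the CAST study:

// Illustrative sketch only: simple control limits (mean + 3 sigma) over
// periodic technical-debt-per-LOC readings; the sample values are made up.
public class DebtControlChart {

    public static void main(String[] args) {
        // Hypothetical baseline readings, in dollars of technical debt per line of code
        double[] baseline = {2.70, 2.80, 2.90, 2.80, 3.00, 2.90};
        double latest = 3.60; // the most recent reading

        double sum = 0.0;
        for (double reading : baseline) {
            sum += reading;
        }
        double mean = sum / baseline.length;

        double squaredDeviation = 0.0;
        for (double reading : baseline) {
            squaredDeviation += (reading - mean) * (reading - mean);
        }
        double sigma = Math.sqrt(squaredDeviation / baseline.length);

        double upperControlLimit = mean + 3 * sigma;

        // "Stop the line" only when a reading drifts out of control,
        // neither on every minor fluctuation nor never at all.
        if (latest > upperControlLimit) {
            System.out.printf("Stop the line: $%.2f/LOC exceeds the $%.2f/LOC control limit%n",
                    latest, upperControlLimit);
        } else {
            System.out.printf("In control: $%.2f/LOC (limit $%.2f/LOC)%n",
                    latest, upperControlLimit);
        }
    }
}

A reading above the upper control limit signals that it is time to stop the line and pay back debt; readings within the limits do not warrant interrupting feature work.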

According to CAST’s report, the average technical debt per line of code across all applications is $2.82. This figure, depressing as it might be, is reasonably consistent with quick eyeballing of Nemo. It is somewhat lower than the average technical debt figure reported recently by Cutter for a sample of the Cassandra code. (The difference is probably attributable to the difference in sample sizes between the two studies.) What the data means is that the average business application in the CAST study – roughly 375,000 lines of code, given 108M lines across 288 applications – is saddled with over $1M in technical debt at $2.82 per line!

An intriguing finding in the CAST report is the impact of size on the quality of COBOL applications, as demonstrated in Figure 1. It has been quite a while since I last saw such a dramatic illustration of the correlation between size and quality (again, for the COBOL applications in the CAST study).

Source: First Annual CAST Worldwide Application Software Quality Study – 2010

Another intriguing finding in the CAST study is that “application in government sector show poor changeability.” CAST hypothesizes that the poor changeability might be due to a higher level of outsourcing in the government sector compared to the private sector. As pointed out by Amy Thorne in a recent comment posted in The Agile Executive, it might also be attributable to the incentive system:

… since external developers often don’t maintain the code they write, they don’t have incentives to write code that is low in technical debt…

Congratulations to Vincent Delaroche, Dr. Bill Curtis, Lev Lesokhin and the rest of the CAST team. We as an industry need more studies like this!

Technical Debt Assessment, Sterling Barton LLC and the Moussaka

with one comment

A few months ago Chris Sterling and I carried out a Cutter Technical Debt Assessment and Valuation engagement for a venture capitalist who was considering a certain company. We discovered various things in the code of this company. More noteworthy, my deep domain expertise led Chris to discover the great Greek dish Moussaka.

I have eaten a lot of good Moussakas over the years. Even against this solid gastronomic background I can’t forget how Chris’s eyes lit up when he took the first bite. It took him no time at all to get on his iPhone and tweet about the culinary aspects of our engagement. I knew then it was going to be a very successful engagement…

The relationship with Chris has deepened since that episode. For example, in collaboration with Brent Barton, Chris contributed a great article to the forthcoming issue of the Cutter IT Journal on Technical Debt. In this article Chris and Brent demonstrate how technical debt techniques can be applied at the portfolio level. They make the reader step into the shoes of the project portfolio planner and walk him or her through their approach to enhancing the decision-making process by using the software debt dashboard.

Chris has just published an excellent post entitled “Using Sonar Metrics to Assess Promotion of Builds to Downstream Environments” in Getting Agile and was kind enough to suggest I cross-post it in The Agile Executive. Here it is (please note that the examples given below by Chris have nothing to do with the engagement described above):

“For those of you who don’t already know about Sonar, you are missing an important tool in your quality assessment arsenal. Sonar is an open source tool that serves as a foundational platform for managing your software’s quality. The image below shows one of the main dashboard views that teams can use to get insights into their software’s health.

The dashboard provides rollup metrics out of the box for:

  • Duplication (probably the biggest Design Debt in many software projects)
  • Code coverage (amount of code touched by automated unit tests)
  • Rules compliance (identifies potential issues in the code such as security concerns)
  • Code complexity (an indicator of how easy the software will adapt to meet new needs)
  • Size of codebase (lines of code [LOC])

Before going into how to use these metrics to assess whether to promote builds to downstream environments, I want to preface the conversation with the following note:

Code analysis metrics should NOT be used to assess teams and are most useful when considering how they trend over time

Now that we have this important note out of the way and, of course, nobody will ever use these metrics for “evil”, let’s discuss pulling data from Sonar to automate assessments of builds for promotion to downstream environments. For those that are unfamiliar with automated promotion, here is a simple, happy example:

A development team makes some changes to the automated tests and implementation code on an application and checks their changes into source control. A continuous integration server finds out that source control artifacts have changed since the last time it ran a build cycle and updates its local artifacts to incorporate the most recent changes. The continuous integration server then runs the build by compiling, executing automated tests, running Sonar code analysis, and deploying the successful deployment artifact to a waiting environment usually called something like “DEV”. Once deployed, a set of automated acceptance tests are executed against the DEV environment to validate that basic aspects of the application are still working from a user perspective. Sometime after all of the acceptance tests pass successfully (this could be twice a day or some other timeline that works for those using downstream environments), the continuous integration server promotes the build from the DEV environment to a TEST environment. Once deployed, the application might be running alongside other dependent or sibling applications and integration tests are run to ensure successful deployment. There could be more downstream environments such as PERF (performance), STAGING, and finally PROD (production).

The tendency for many development teams and organizations is that if the tests pass then it is good enough to move into downstream environments. This is definitely an enormous improvement over extensive manual testing and stabilization periods on traditional projects. An issue that I have still seen is the slow introduction of software debt as an application is developed. Highly disciplined technical practices such as Test-Driven Design (TDD) and Pair Programming can help stave off extreme software debt, but these practices are still not commonplace amongst software development organizations. This is usually not due to a lack of clarity about these practices, but rather to excessive schedule pressure, legacy code, and the initial hurdle of learning how to apply these practices effectively. In the meantime, we need a way to assess the health of our software applications beyond tests passing, in the internals of the code and tests themselves. Sonar can be easily added to your infrastructure to provide insights into the health of your code, but we can go even beyond that.

The Sonar Web Services API is quite simple to work with. The easiest way to pull information from Sonar is to call a URL:

http://nemo.sonarsource.org/api/resources?resource=248390&metrics=technical_debt_ratio

This will return an XML response like the following:

<resources>
  <resource>
    <id>248390</id>
    <key>com.adobe:as3corelib</key>
    <name>AS3 Core Lib</name>
    <lname>AS3 Core Lib</lname>
    <scope>PRJ</scope>
    <qualifier>TRK</qualifier>
    <lang>flex</lang>
    <version>1.0</version>
    <date>2010-09-19T01:55:06+0000</date>
    <msr>
      <key>technical_debt_ratio</key>
      <val>12.4</val>
      <frmt_val>12.4%</frmt_val>
    </msr>
  </resource>
</resources>
Within this XML, there is a <msr> section that includes the value of the metric we requested in the URL, “technical_debt_ratio”. The ratio of technical debt in this Flex codebase is 12.4%. Now, with this information, we can look for increases over time to identify technical debt earlier in the software development cycle. So, if the ratio were to increase beyond 13% after being at 12.4% one month earlier, this could tell us that technical issues are creeping into the application.

Another way that the Sonar API can be used is from a programming language such as Java. The following Java code will pull the same information through the Java API client:

import org.sonar.wsclient.Sonar;
import org.sonar.wsclient.services.Resource;
import org.sonar.wsclient.services.ResourceQuery;

// Pull the same project metric through the sonar-ws-client library
Sonar sonar = Sonar.create("http://nemo.sonarsource.org");
Resource commons = sonar.find(ResourceQuery.createForMetrics("248390",
        "technical_debt_ratio"));
System.out.println("Technical Debt Ratio: " +
        commons.getMeasure("technical_debt_ratio").getFormattedValue());

This will print “Technical Debt Ratio: 12.4%” to the console from a Java application. Once we are able to capture these metrics, we can save them as data to trend in the automated promotion scripts that deploy builds to downstream environments. Some guidelines we have used in the past for these types of metrics are:

  • Small changes in a metric’s trend do not warrant immediate action
  • No more than 3 metrics should be trended (the typical 3 I watch for Java projects are duplication, class complexity, and technical debt)
  • The development team should decide what reasonable guidelines are for indicating problems in the trends (such as technical debt +/- 0.5%)

In the automated deployment scripts, these trends can be used to stop deployment of the next build that passed all of its tests and emails can be sent to the development team regarding the metric culprit. From there, teams are able to enter the Sonar dashboard and drill down into the metric to see where the software debt is creeping in. Also, a source control diff can be produced to go into the email showing what files were changed between the successful builds that made the trend go haywire. This might be a listing per build and the metric variations for each.
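Here is a minimal sketch of such a gate, using the same sonar-ws-client calls shown above; the 0.5% guideline comes from the guidelines above, while the previous-build lookup and the notification step are hypothetical stubs:

import org.sonar.wsclient.Sonar;
import org.sonar.wsclient.services.Resource;
import org.sonar.wsclient.services.ResourceQuery;

// Minimal promotion-gate sketch: block promotion when the technical debt
// ratio rises more than the agreed guideline since the last promoted build.
public class PromotionGate {

    private static final double GUIDELINE = 0.5; // percentage points, per the guidelines above

    public static void main(String[] args) {
        Sonar sonar = Sonar.create("http://nemo.sonarsource.org");
        Resource project = sonar.find(ResourceQuery.createForMetrics("248390",
                "technical_debt_ratio"));
        double current = project.getMeasure("technical_debt_ratio").getValue();
        double previous = loadPreviousRatio(); // hypothetical: read from the last promoted build

        if (current - previous > GUIDELINE) {
            System.out.println("Blocking promotion: technical_debt_ratio rose from "
                    + previous + "% to " + current + "%");
            // hypothetical stub: email the team the metric culprit and a source control diff
        } else {
            System.out.println("Promoting build: technical_debt_ratio at " + current + "%");
        }
    }

    private static double loadPreviousRatio() {
        return 12.4; // placeholder; a real script would persist the ratio per promoted build
    }
}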

This is a deep topic that this post just barely introduces. If your organization has a separate configuration management or operations group that manages environment promotions beyond the development environment, Sonar and the web services API can help further automate early identification of software debt in your applications before they pollute downstream environments.”

Thank you, Chris!

Why Spend the Afternoon as well on Technical Debt?

with 2 comments

Source: http://www.flickr.com/photos/pinksherbet/233228813/

Yesterday’s post Why Spend a Whole Morning on Technical Debt? listed eight characteristics of the technical debt metric that will be discussed during the morning of October 27, when Jim Highsmith and I deliver our joint Cutter Summit seminar. This post adds to the previous one by suggesting a related topic for the afternoon.

No, I am not trying to “hijack” the Summit agenda by messing with the afternoon sessions of my colleagues Claude Baudoin and Mitchell Ummel. I am simply pointing out a corollary to the morning seminar that might be on your mind in the afternoon. Needless to say, thinking about it in the afternoon of the 28th instead of the afternoon of the 27th is quite appropriate…

Yesterday’s post concluded with a “what it all means” statement, as follows:

Technical debt is a meaningful metric at any level of your organization and for any department in it. Moreover, it is applicable to any business process that is not yet taking software quality into account.

If you accept this premise, you can use the technical debt metric to construct boundary objects between various departments in your company/organization. The metric could serve as the heart of boundary objects between dev and IT ops, between dev and customer support, between dev and a company to which some development tasks are outsourced, etc. The point is the enablement of working agreements between multiple stakeholders through the technical debt metric. For example, dev and IT ops might mutually agree that the technical debt in the code to be deployed to the production environment will be less than $3 per line of code. Or, dev and customer support might agree that enhanced refactoring will commence if the code decays over time to more than $4 per line of code.
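Such a working agreement can be captured in something as simple as the following sketch; the $3 and $4 thresholds echo the examples above, while the class and method names are purely hypothetical:

// Illustrative sketch of a technical-debt "boundary object" between departments.
// The $3 and $4 thresholds echo the examples in the text; names are hypothetical.
public class DebtWorkingAgreement {

    private static final double DEPLOYMENT_LIMIT = 3.00;    // $/LOC agreed between dev and IT ops
    private static final double REFACTORING_TRIGGER = 4.00; // $/LOC agreed between dev and customer support

    public boolean approveDeployment(double debtPerLoc) {
        // Dev and IT ops: do not deploy code carrying $3 or more of debt per line
        return debtPerLoc < DEPLOYMENT_LIMIT;
    }

    public boolean requireEnhancedRefactoring(double debtPerLoc) {
        // Dev and customer support: commence enhanced refactoring once decay exceeds $4 per line
        return debtPerLoc > REFACTORING_TRIGGER;
    }
}

The point is not the particular numbers but that each pair of stakeholders can verify its agreement against the same code-derived metric.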

You can align various departments by using the technical debt metric. This alignment is particularly important when the operational balance between departments has been disrupted. For example, your developers might be coding faster than your ITIL change managers can process the change requests.

A lot more on the use of the technical debt metric to mitigate cross-organizational dysfunctions, including some Outmodel aspects, will be covered in our seminar in Cambridge, MA on the morning of the 27th. We look forward to discussing this intriguing subject with you there!

Israel

Why Spend a Whole Morning on Technical Debt?

with one comment

In a little over a month Jim Highsmith and I will deliver our joint seminar on technical debt at the Cutter Summit. Here are eight characteristics of the technical debt metric that make it clear why you should spend 3.5 precious hours on the topic:

  1. The technical debt metric shifts the emphasis in software development from proficiency in the software process to the output of the process.
  2. It changes the playing fields from qualitative assessment to quantitative measurement of the quality of the software.
  3. It is an effective antidote to the relentless function/feature pressure.
  4. It can be used with any software method, not “just” Agile.
  5. It is applicable to any amount of code.
  6. It can be applied at any point in time in the software life-cycle.
  7. These six characteristics of the technical debt metric enable effective governance of the software process.
  8. The above characteristics of the technical debt metric enable effective governance of the software product portfolio.

Taken together, these eight characteristics make the technical debt metric a ‘universal source of truth.’ It is a meaningful metric at any level of your organization and for any department in it. Moreover, it is applicable to any business process that is not yet taking software quality into account.

Jim and I look forward to meeting you at the summit and interacting with you in the technical debt seminar!

Written by israelgat

September 22, 2010 at 7:32 am

How to Break the Vicious Cycle of Technical Debt

with 10 comments

The dire consequences of the pressure to quickly deliver more functions and features to the market have been described in detail in various posts in this blog (see, for example, Toxic Code). Relentless pressure forces the development team to take on technical debt. The very same pressure stands in the way of paying back the debt in a timely manner. The accrued technical debt reduces the velocity of the development team. Reduced development velocity leads to increased pressure to deliver, which leads to taking on additional technical debt, which… It is a vicious cycle that is extremely difficult to break.

Figure 1: The Vicious Cycle of Technical Debt

The post Using Credit Limits to Constrain “Development on Margin” proposed a way of coping with the vicious cycle of technical debt – placing a limit on the amount of technical debt a development team is allowed to accrue. Such a limit addresses the demand side of the software development process. Once a team reaches the pre-determined technical debt limit (such as $3 per line of code) it cannot continue piling on new functions and features. It must attend to reducing the technical debt.

A complementary measure can be applied to the supply side of the software development process. For example, one can dynamically augment the team by drawing upon on-demand testing. uTest‘s recent announcement about securing Series C financing explains the rationale for the on-demand paradigm:

“The whole ‘appification’ of software platforms, whether it’s for social platforms like Facebook or mobile platforms like the iPhone or Android or Palm, or even just Web apps, creates a dramatically more complex user-testing matrix for software publishers, which could mean media companies, retailers, enterprise software companies,” says Wienbar. “Anybody who has to interact with consumers needs a service to help with that testing. You can’t cover that whole matrix with your in-house test team.”

Likewise, on-demand development can augment the development team whenever the capacity of the in-house team is insufficient to satisfy demand. IMHO it is only a matter of time until marketplaces for on-demand development evolve. All the necessary ‘ingredients’ – Agile, Cloud, Mobile and Social – are readily available. It is merely a matter of putting them together to offer on-demand development as a commercial service.

Whether you do on-demand testing, on-demand development or both, you will soon be able to address the supply side of software development in a flexible and cost-effective manner. Between curtailing demand through technical debt limits and expanding supply through on-demand testing/development, you will be better able to cope with the relentless pressure to deliver more, and more quickly, than the capacity of your team allows.

Forrester on Managing Technical Debt

leave a comment »

Forrester Research analysts Dave West and Tom Grant just published their report on Agile 2010. Here is the section in their report on managing technical debt:

Managing technical debt

Dave: The Agile community has faced a lot of hard questions about how a methodology that breaks development into short iterations can maintain a long-term view on issues like maintainability. Does Agile unintentionally increase the risk of technical debt? Israel Gat is leading some breakthrough thinking in the financial measures and ramifications of technical debt. This topic deserves the attention it’s beginning to receive, in part because of its ramifications for backlog management and architecture planning. Application development professionals should:

  • Start capturing debt. Even if it is just by encouraging developers to note issues in code comments as they write, or by putting in place more formal peer review processes where debt is captured, it is important to document debt as it accumulates.
  • Start measuring debt. Once debt is captured, assigning a value/cost to it enables objective discussions. It also enables reporting that provides the organization with transparency into its growing debt. I believe that this approach would enable application and product end-of-life discussions to be held earlier and with more accuracy.
  • Adopt standard architectures and open source models. The more people who look at a piece of code, the more likely it is that debt will be reduced. The simple truth is that many people using the same software makes it simpler and less prone to debt.

Tom: Since the role I serve, the product manager in technology companies, sits on the fault line between business and technology, I’m really interested in where Israel Gat and others take this discussion. The era of piling up functionality in the hopes that customers will be impressed with the size of the pile is clearly ending. What will replace it is still undetermined.

I will be responding to Tom’s good question in various posts along the way. For now I would just like to mention the tremendous importance of automated technical debt assessment. Typical velocity of formal code inspection is 100-200 lines of code per hour; at that rate, inspecting a 100M-line portfolio like the one CAST analyzed would take on the order of a million person-hours. Useful and important as formal code inspection is, there is only so much that can be inspected through our eyes, expertise and brains. The tools we use nowadays to do code analysis apply to code bases of any size. Consequently, the assessment of quality (or lack thereof) shifts from the local to the global. It is no longer a matter of an arcane code metric in an esoteric Java class that precious few folks ever hear of. Rather, it is a matter of overall quality in the portfolios of projects/products a company possesses. As mentioned in an earlier post, companies who capitalize software will sooner or later need to report technical debt as a line item on their balance sheet. It will simply be listed as a liability.

From a governance perspective, technical debt techniques give us the opportunity to carry out consistent governance of the software process based on a single source of truth. The single source of truth is, of course, the code itself. The very same truth is reflected at every level in the organization. For the developer in the trenches the truth manifests itself as a blocking violation in a specific line of code. For the CFO it is the need to “pay back” $500K in the very same project. Different as the two views are, they are absolutely consistent. They merely differ in the level of aggregation.
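To illustrate the aggregation point, here is a small, purely illustrative sketch (the violations and remediation costs are made up) showing how the same underlying data rolls up from a single blocking violation to a project-level dollar figure:

// Purely illustrative: the same technical-debt data viewed at two levels of aggregation.
// The violations and remediation costs below are made up.
public class DebtAggregation {

    static class Violation {
        final String file;
        final int line;
        final String rule;
        final double remediationCost; // dollars to "pay back" this violation

        Violation(String file, int line, String rule, double remediationCost) {
            this.file = file;
            this.line = line;
            this.rule = rule;
            this.remediationCost = remediationCost;
        }
    }

    public static void main(String[] args) {
        Violation[] violations = {
            new Violation("Billing.java", 112, "blocker: SQL injection", 1200.0),
            new Violation("Billing.java", 310, "critical: duplicated block", 400.0),
            new Violation("Invoice.java", 57, "major: excessive cyclomatic complexity", 250.0)
        };

        // Developer view: one blocking violation on one specific line
        Violation first = violations[0];
        System.out.println(first.file + ":" + first.line + " -> " + first.rule);

        // CFO view: the aggregate amount required to "pay back" the debt
        double total = 0.0;
        for (Violation v : violations) {
            total += v.remediationCost;
        }
        System.out.printf("Total technical debt for the project: $%.2f%n", total);
    }
}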

Outline of the Technical Debt Seminar at the Cutter Summit

leave a comment »



Pictured above are speakers of the forthcoming Cutter Summit. Between the seventeen of us we will cover a broad spectrum of IT topics such as Agile, Enterprise Architecture, Business Strategy, Cloud Computing, Collaboration, Governance and Security. Inter-disciplinary seminars, panels and case studies will weave all those threads together to give participants a clear view of the unfolding transformation in IT and of the new way(s) companies are starting to utilize IT. Click here for details.

As Jim Highsmith and I continue to develop our joint seminar on technical debt for the summit, I would like to give readers of this blog a sense of where we are and ask for feedback. Right now we are considering the following building blocks for the seminar:

  • The Nature of Technical Debt
  • Technical Debt Metrics
  • Monetizing Technical Debt
  • Constructing Roadmaps for Paying Back Technical Debt
  • Risk Assessment and Mitigation
  • A Simple Software Governance Framework
  • Schedule in the Simple Governance Framework
  • Enlightened Governance
  • Baking in Quality One Build at a Time
  • How Often Should the Project Team Regroup?
  • Multi-Level Governance
  • Extending  Technical Debt Techniques to Devops
  • Use of Technical Debt Techniques in Agile Portfolio Management
  • The Start Afresh Option
  • Technical Debt as an Integral Part of a Value Delivery Culture

In the course of going through a subset of these building blocks, we will cover the latest and greatest from the October issue of the Cutter IT Journal on technical debt, present two case studies, and conduct a few group exercises.

From Vivek Kundra to Devops and Compounding Interest: Cutter’s Forthcoming Special Issue on Technical Debt

leave a comment »

Source: http://www.flickr.com/photos/wallyg/152453473/

In a little over a month the Cutter Consortium will publish a special issue of the Cutter IT Journal (CITJ) on Technical Debt. As the guest editor for this issue I had the privilege of setting its direction and now have early exposure to the latest and greatest in research and field work from the various authors. This short post is intended to share with you some of the more exciting findings you can expect in this issue of the CITJ.

The debt clock pictured above is a common metaphor that runs through all the articles. The various authors are unanimously of the opinion that one must measure his/her technical debt, embed the measurements in the software governance process and relentlessly push hard to reduce technical debt. One can easily extrapolate this common thread to conjecture an initiative by Vivek Kundra to assess technical debt and its ramifications at the national level.

Naturally, the specific areas of interest with respect to technical debt vary from one author to another. From the broad spectrum of topics addressed in the journal, I would like to mention two that are quite representative:

  • One of the authors focuses on the difference between the manifestation of technical debt in dev versus its manifestation in devops, reaching the conclusion that the change in context (from dev to devops) makes quite a difference. The author actually doubts that the classical differentiation between “building the right system” and “building the system right” holds in devops.
  • Another author derives formulas for calculating Recurring Interest and Compounding Interest in technical debt. The author uses these formulas to demonstrate two scenarios: Scenario A, in which technical debt as a percentage of total product revenue is 12%, and Scenario B, in which it is 280%. The fascinating thing is that this dramatic difference (12% vs. 280%) is induced by much smaller variances in the Recurring Interest and the Compounding Interest (see the illustrative sketch right after this list).
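To see qualitatively how small parameter differences can balloon, here is a purely illustrative sketch; these are not the author’s formulas, and the principal, recurring cost, compounding rate and revenue figures are all made-up assumptions:

// Purely illustrative: NOT the formulas from the forthcoming CITJ article.
// It only shows how modest differences in recurring and compounding interest
// can produce dramatically different debt burdens over a number of releases.
public class DebtInterestIllustration {

    // principal:   initial technical debt, in dollars
    // recurring:   extra cost paid every release because the debt is still there
    // compounding: rate at which unpaid debt makes each subsequent release costlier
    static double totalInterest(double principal, double recurring,
                                double compounding, int releases) {
        double debt = principal;
        double interest = 0.0;
        for (int i = 0; i < releases; i++) {
            interest += recurring + debt * compounding;
            debt *= (1 + compounding); // unpaid debt compounds release over release
        }
        return interest;
    }

    public static void main(String[] args) {
        int releases = 12;
        double revenue = 1000000.0; // hypothetical product revenue

        // Two hypothetical scenarios that differ only modestly in their parameters
        double scenarioA = totalInterest(100000.0, 5000.0, 0.05, releases);
        double scenarioB = totalInterest(100000.0, 8000.0, 0.15, releases);

        System.out.printf("Scenario A: interest $%.0f (%.0f%% of revenue)%n",
                scenarioA, 100 * scenarioA / revenue);
        System.out.printf("Scenario B: interest $%.0f (%.0f%% of revenue)%n",
                scenarioB, 100 * scenarioB / revenue);
    }
}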

I will blog much more on the subject when the CITJ issue is published in October. In addition, Jim Highsmith and I will discuss the findings of the various authors as part of our joint seminar on the subject in the forthcoming Cutter Summit.

Stay tuned!

Forthcoming Technical Debt Events

leave a comment »

In just about a week I will be sharing the latest and greatest in technical debt techniques through a Cutter webinar in which my colleague John Heintz and I will be speaking. In a little over a month a special issue of the Cutter IT Journal [CITJ] on technical debt will be published. And, in a couple of months Jim Highsmith and I will deliver a workshop on the subject at the Cutter Summit.

Shifting from the process to its output (i.e. the code) is the common thread that runs through the three events. Rigorous as your implementation of the software process might be, the proof of the pudding is the quality of the code your teams produce. The technical debt accrued in the code is the ultimate acid test for your success with the Agile roll-out and/or with any other software method you might be using.

Another important thread in all three events is a single source of truth. The technical debt data seen by the developer in the trenches, his/her project leader, the mid-level manager on the project, the vice president of engineering and the CFO/CEO represents different views of the realities of the code. Each level sees a different aggregation of data – all the way from a blocking violation at a specific line of code to the aggregate $$ amount required to “pay back” the debt. But, there is no distortion between the five levels of the technical debt data – all draw upon the code itself as the single source of truth.

Here is the announcement of the first event – the Reining in Technical Debt webinar scheduled for August 19, 12:00PM EDT:

Do you really govern the software development process in your IT organization or do its uncertainty and unpredictability leave you aghast? Do you manage to bake in quality in every build? Can you assess the quality of your software in a way that quantifies the risk?

Recent developments in software engineering and in software governance enable you to tie quality, cost, and value together to form a simple and effective governance framework for software. This webinar will provide you with a preliminary understanding of how to assess quality through technical debt techniques, will familiarize you with state-of-the-art tools for measuring technical debt, and will demonstrate the effect on value delivery when technical debt is not “paid back” promptly.

Israel and John will also introduce a governance framework that ensures you can rigorously manage your software development process from a business perspective. This framework reduces a large number of complex technical considerations to a common denominator that is easily understood by both technical and non-technical people — dollars.

Get Your Questions Answered

Don’t miss your chance to get specific advice from Cutter’s experts on technical debt and toxic code. Join us on Thursday, August 19 at 12:00 EDT (see your local time here) to learn how both your software development process(es) and the corresponding governance process can be transformed in a manner that will make a big difference to your software developers and testers, to key stakeholders in your company, and to your firm’s customers.

Register Now!

Register to attend so you’ll have the opportunity to have your specific questions answered. We’ll send you the login instructions a day prior to the webinar.

As always, this Cutter Webinar is not vendor sponsored, and is available to Cutter clients and our guests at no charge. Register here.

Pass this invitation along!

Be sure to extend our invitation to your CIO, CFO and the other senior business-IT leaders and trainers in your organization who you think could benefit from this discussion.

If you have any questions prior to the program, please contact Kim Leonard at kleonard@cutter.com or call her directly at +1 781 648 8700.

Can’t Make the Live Event?

You won’t miss out — the recording will be added to the webinars online resource center for client access, along with the rest of these past events.

Written by israelgat

August 12, 2010 at 8:17 am

Round Two: Can Technical Debt Constitute a Breach of Implied Warranties?

with 2 comments

In a previous post I discussed whether technical debt could under some conditions constitute a breach of implied warranties. Examining the subject with respect to intent, I made the following observation:

It is a little tricky (though not impossible – see Using Credit Limits to Constrain “Development on Margin”) to define the precise point where technical debt becomes “unmanaged.” One needs to walk a fine line between technical/methodical incompetence and resource availability to determine technical fraud. For example, if your code has 35% coverage, is it or is it not unmanaged? Does the answer to this question change if your cyclomatic complexity per class exceeds 30? I would think the courts might be divided for a very long time on the question of when hidden technical debt represents a fraudulent misrepresentation.

One component of technical debt deserves special attention in the context of this post. I am referring to the conscious decision not to do unit testing at all… Such a conscious decision IMHO indicates no intention to pay back this category of technical debt – unit test coverage. It is therefore quite incompatible with the nature of an implied warranty.

Responses to my post were mixed. Various readers who are much more knowledgeable in the law than I am pointed out various legal defenses a software vendor could use. Click here for a deeper understanding of these subtle legal points.

Imagine my delight reading yesterday’s uTest interview with Cem Kaner. Cem makes the following statement in his interview:

ALI [American Law Institute] … started writing the Principles of the Law of Software Contracts. One of its most important rules is one that I advocated: a seller of software who knows about a defect of the software but does not disclose the defect to the customer will be held liable for damages caused to that customer by that defect. Note that this does not apply to free software (not sold). And if the seller discloses the defect, it becomes part of the product’s specification (it’s a feature). And if the seller doesn’t know about the defect there is no liability (once customers tell you about a defect, you have knowledge, so you cannot avoid knowledge for long by not testing). ALI adopted it unanimously last year. This is not law, but until the legislatures pass statutes, the Principles will be an important guide for judges. Even though I am a minor contributor to this work, I think the defect-disclosure requirement might be my career’s most important contribution to software quality. [Highlights by IG].

While not (yet?) a law, the Principles could indeed lead us to where IMHO we as an industry should be. Technical debt manifests itself as bugs. By making the specific bugs part of the spec, the software evolves from a quality standpoint. Moreover, the defect-disclosure requirement practically forces software vendors to address their bugs within a “reasonable” amount of time. Customer bugs become an antidote to the relentless pressure to add functions and features without paying due attention to software quality.

I guess I missed the opportunity to include technical debt in the 2010 revision of the Principles. As we always do in software, I will target the next rev…