The Agile Executive

Making Agile Work

Posts Tagged ‘Metrics’

Surfing Technical Debt


The Second Workshop on Managing Technical Debt will be held on May 23, 2011 in Honolulu, Hawaii. It is part of and co-located with the 33rd International Conference on Software Engineering (ICSE2011).  Between the workshop and the conference you can rest assured any aspect of software engineering known to mankind will be amply covered.

The workshop is unique in its strong emphasis on putting the foundations of technical debt on a rigorous footing and on unifying the ways in which the generic concept is applied. The reason for doing so is straightforward: the term ‘technical debt’ has, no doubt, proven intuitively compelling; the various intuitive interpretations, however, differ in subtle nuances. The Overview of the workshop points out:

Yet, it leaves many questions open, such as

  • How do you identify technical debt? What are the different kinds of debt? What are its parameters that help projects elicit, communicate, and manage it?
  • What is the lifetime of technical debt?
  • How is technical debt related to evolution and maintenance activities?
  • How can information about technical debt empirically be collected for developing conceptual models?
  • How do you measure and pay off technical debt? What metrics need to be collected so that key analyses can be conducted?
  • How can technical debt be visualized and analyzed?

As readers of this blog know, I love the combination of intellectual challenge and pragmatic utility that characterizes technical debt. Doing technical debt in Hawaii adds a dimension of pleasure to the mix. The mental image I have for the workshop is ‘Surfing Technical Debt.’

On a more prosaic note, the due date for submitting a paper to the workshop is January 21, 2011. Please do not hesitate to contact me or other members of the program committee for any questions you might have on your paper.

Written by israelgat

December 26, 2010 at 9:52 am

SPaMCAST 112 – Israel Gat, Technical Debt


Source: http://www.flickr.com/photos/pumpkinjuice/229764922/

Click here for my just-published interview on Technical Debt. Major themes discussed in the interview are as follows:

  • The nature of technical debt
  • Tactical and strategic effects of technical debt
  • How the technical debt metric enables you to communicate across levels and functions
  • What Toxic Code is and how it is related to Net Present Value
  • The atrocious nature of code with a high Error Feedback Ratio
  • Cyclomatic complexity as a predictor of error-proneness
  • Use of heat maps in reducing technical debt
  • Use of density of technical debt as a risk indicator
  • How and when to use technical debt to ‘stop-the-line’
  • Use of technical debt in governing software

To illuminate various subtle aspects of technical debt, I use the following metaphors in the interview:

  • The rusty automobiles metaphor
  • The universal source of truth metaphor
  • The Russian dolls metaphor
  • The mine field metaphor
  • The weight reduction metaphor
  • The teeth flossing metaphor

Between the themes and the metaphors, the interview combines theory with pragmatic advice for both the technical and the non-technical listener.

The Nine Transformative Aspects of the Technical Debt Metric


No, the technical debt metric will not improve your tennis game. However, using it could free up time for practicing the game, thanks to its nine transformative aspects:
  1. The technical debt metric enables Continuous Inspection of the code through ultra-rapid feedback to the software process (see Figure 1 below).
  2. It shifts the emphasis in software development from proficiency in the software process to the output of the process.
  3. It changes the playing fields from qualitative assessment to quantitative measurement of the quality of the software.
  4. It is an effective antidote to the relentless function/feature pressure.
  5. It can be used with any software method, not “just” Agile.
  6. It is applicable to any amount of code.
  7. It can be applied at any point in time in the software life-cycle.
  8. These seven characteristics of the technical debt metric enable effective governance of the software process.
  9. The above characteristics of the technical debt metric enable effective governance of the software product portfolio.

Figure 1: Continuous Inspection

Written by israelgat

October 28, 2010 at 8:40 am

How to Use Technical Debt Data in the M&A Process


Source: http://www.flickr.com/photos/brajeshwar/266749872/

As a starting point, please read Implication of Technical Debt Uncertainty for Software Licensing Negotiations. Everything stated there holds for negotiating M&A deals. In particular:

  • You (as the buyer) should insist on conducting a Technical Debt Assessment as part of the due diligence process.
  • You should be able to deduct the monetized technical debt figure from the price of the acquisition.
  • You should be able to quantify the execution risk (as far as software quality is concerned).

An important corollary holds with respect to acquiring a company that is in the business of maintaining an open source project, helping customers deploy it, and training them in its use. Because the code is open, you can eliminate uncertainty about the quality of the open source project without needing to negotiate permission to conduct a technical debt assessment. In fact, you would be well advised to conduct the assessment prior to approaching the target company. By doing so, you start negotiations from a position of strength, quite possibly having at your disposal (technical debt) data that the company you are considering acquiring does not possess.

Action item: Supplement the traditional due diligence process with a technical debt assessment. Use the monetized technical debt figure to assess execution risk and drive the acquisition price down.
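To make the action item concrete, here is a back-of-the-envelope sketch of the deduction logic. All names and figures below are hypothetical, chosen purely for illustration; in a real deal the monetized debt figure would come out of the technical debt assessment itself:

public class DebtAdjustedOffer {
    public static void main(String[] args) {
        double askingPrice = 50000000.0; // seller's asking price in USD (hypothetical)
        double debtPerLoc  = 2.50;       // monetized technical debt, USD per line of code
        long   linesOfCode = 1000000;    // size of the codebase under assessment

        // The assessment monetizes the cost of remediating the debt...
        double monetizedDebt = debtPerLoc * linesOfCode;

        // ...which the buyer deducts from the acquisition price.
        System.out.printf("Monetized technical debt: $%,.0f%n", monetizedDebt);
        System.out.printf("Debt-adjusted offer:      $%,.0f%n", askingPrice - monetizedDebt);
    }
}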

Source: http://www.flickr.com/photos/tantek/254940135/

____________________________________________________________________________________________________

Negotiating a major M&A deal? Let me know if you would like assistance in conducting a technical debt assessment and bringing up technical debt issues with the target company. I will help you negotiate the acquisition price down. Click Services for details and contact information.

____________________________________________________________________________________________________

Written by israelgat

October 20, 2010 at 5:37 am

The Gat/Highsmith Joint Seminar on Technical Debt and Software Governance


Jim and I have finalized the content and the format of our forthcoming Cutter Summit seminar. The seminar is structured around a case study which includes four exercises. We expect the case study/exercises to take close to two-thirds of the allotted time (the morning of October 27). In the remaining third we will provide the theory and practices to be used in the seminar exercises and (hopefully) in the many future technical debt engagements that workshop participants will oversee.

The seminar does not require deep technical knowledge. It targets participants who possess a conceptual grasp of software development, software governance and IT operations/ITIL. If you feel like reading a little about technical debt prior to the Summit, the various posts on technical debt in this blog will be more than sufficient.

We plan to go with the following agenda (still subject to some minor tweaking):

Agenda for the October 27, 9:30AM to 1:00PM Technical Debt Seminar

  • Setting the Stage: Why Technical Debt is a Strategic Issue
  • Part I: What is Technical Debt?
  • Part II: Case Study – NotMyCompany, Inc.
    • Exercise #1 – Modernizing NotMyCompany’s Legacy Code
  • Part III: The Nature of Technical Debt
  • Part IV: Unified Governance
    • Exercise #2 – The acquisition of SocialAreUs by NotMyCompany
  • Part V: Process Control Models
    • Exercise #3 – How Often Should NotMyCompany Stop the Line?
  • Part VI (time permitting): Using Technical Debt in Devops
    • Exercise #4 – The Agile Versus ITIL Debate at NotMyCompany

By the end of the seminar you will know how to effectively apply technical debt techniques as an integral part of software governance that is anchored in business realities and imperatives.

Written by israelgat

September 30, 2010 at 3:20 pm

Technical Debt Assessment, Sterling Barton LLC and the Moussaka


A few months ago Chris Sterling and I were carrying out a Cutter Technical Debt Assessment and Valuation engagement for a venture capitalist who was considering a certain company. We discovered various things in the company’s code. More noteworthy, my deep domain expertise led Chris to discover the great Greek dish Moussaka.

I have eaten a lot of good Moussakas over the years. Even against this solid gastronomic background, I can’t forget how Chris’s eyes lit up when he took the first bite. It took him no time at all to get on his iPhone and tweet about the culinary aspects of our engagement. I knew then it was going to be a very successful engagement…

The relationship with Chris has deepened since that episode. For example, in collaboration with Brent Barton, Chris contributed a great article to the forthcoming issue of the Cutter IT Journal on Technical Debt. In this article Chris and Brent demonstrate how technical debt techniques can be applied at the portfolio level. They put the reader in the shoes of the project portfolio planner and walk through their approach to enhancing the decision-making process by using the software debt dashboard.

Chris has just published an excellent post entitled “Using Sonar Metrics to Assess Promotion of Builds to Downstream Environments” in Getting Agile and was kind enough to suggest I cross-post it in The Agile Executive. Here it is (please note that the examples given below by Chris have nothing to do with the engagement described above):

“For those of you who don’t already know about Sonar, you are missing an important tool in your quality assessment arsenal. Sonar is an open source tool that serves as a foundational platform for managing your software’s quality. The image below shows one of the main dashboard views that teams can use to get insights into their software’s health.

The dashboard provides rollup metrics out of the box for:

  • Duplication (probably the biggest Design Debt in many software projects)
  • Code coverage (amount of code touched by automated unit tests)
  • Rules compliance (identifies potential issues in the code such as security concerns)
  • Code complexity (an indicator of how easily the software will adapt to meet new needs)
  • Size of codebase (lines of code [LOC])

Before going into how to use these metrics to assess whether to promote builds to downstream environments, I want to preface the conversation with the following note:

Code analysis metrics should NOT be used to assess teams and are most useful when considering how they trend over time

Now that we have this important note out of the way and, of course, nobody will ever use these metrics for “evil”, let’s discuss pulling data from Sonar to automate assessments of builds for promotion to downstream environments. For those who are unfamiliar with automated promotion, here is a simple, happy example:

A development team makes some changes to the automated tests and implementation code on an application and checks their changes into source control. A continuous integration server finds out that source control artifacts have changed since the last time it ran a build cycle and updates its local artifacts to incorporate the most recent changes. The continuous integration server then runs the build by compiling, executing automated tests, running Sonar code analysis, and deploying the successful deployment artifact to a waiting environment usually called something like “DEV”. Once deployed, a set of automated acceptance tests are executed against the DEV environment to validate that basic aspects of the application are still working from a user perspective. Sometime after all of the acceptance tests pass successfully (this could be twice a day or some other timeline that works for those using downstream environments), the continuous integration server promotes the build from the DEV environment to a TEST environment. Once deployed, the application might be running alongside other dependent or sibling applications and integration tests are run to ensure successful deployment. There could be more downstream environments such as PERF (performance), STAGING, and finally PROD (production).
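To make the happy path above easier to follow, here is a schematic, self-contained sketch of the promotion flow. Everything in it is hypothetical: real continuous integration servers express this flow as job configuration rather than Java code:

public class PromotionFlow {

    public static void main(String[] args) {
        new PromotionFlow().onSourceControlChange();
    }

    void onSourceControlChange() {
        if (!runBuildCycle()) return;              // compile, unit tests, Sonar analysis;
                                                   // a red build never leaves CI
        deploy("DEV");                             // first downstream environment
        if (!acceptanceTestsPass("DEV")) return;   // validate basic user-facing behavior

        deploy("TEST");                            // runs alongside dependent/sibling applications
        if (!integrationTestsPass("TEST")) return;

        // PERF, STAGING and PROD promotions follow the same pattern, on a
        // cadence that suits the consumers of each environment.
    }

    // Stubs standing in for the real build and deployment machinery.
    boolean runBuildCycle()                  { return true; }
    void deploy(String environment)          { System.out.println("Deployed to " + environment); }
    boolean acceptanceTestsPass(String env)  { return true; }
    boolean integrationTestsPass(String env) { return true; }
}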

The tendency for many development teams and organizations is to assume that if the tests pass then the build is good enough to move into downstream environments. This is definitely an enormous improvement over extensive manual testing and stabilization periods on traditional projects. An issue that I have still seen is the slow introduction of software debt as an application is developed. Highly disciplined technical practices such as Test-Driven Design (TDD) and Pair Programming can help stave off extreme software debt, but these practices are still not commonplace amongst software development organizations. This is usually due not to a lack of clarity about these practices, but to excessive schedule pressure, legacy code, and the initial hurdle of learning how to do these practices effectively. In the meantime, we need a way to assess the health of our software applications that goes beyond passing tests and looks into the internals of the code and the tests themselves. Sonar can be easily added into your infrastructure to provide insights into the health of your code, but we can go even beyond that.

The Sonar Web Services API is quite simple to work with. The easiest way to pull information from Sonar is to call a URL:

http://nemo.sonarsource.org/api/resources?resource=248390&metrics=technical_debt_ratio

This will return an XML response like the following:

<resources>
  <resource>
    <id>248390</id>
    <key>com.adobe:as3corelib</key>
    <name>AS3 Core Lib</name>
    <lname>AS3 Core Lib</lname>
    <scope>PRJ</scope>
    <qualifier>TRK</qualifier>
    <lang>flex</lang>
    <version>1.0</version>
    <date>2010-09-19T01:55:06+0000</date>
    <msr>
      <key>technical_debt_ratio</key>
      <val>12.4</val>
      <frmt_val>12.4%</frmt_val>
    </msr>
  </resource>
</resources>

Within this XML, there is a section called <msr> that includes the value of the metric we requested in the URL, “technical_debt_ratio”. The ratio of technical debt in this Flex codebase is 12.4%. Now, with this information, we can look for increases over time to identify technical debt earlier in the software development cycle. So, if the ratio were to increase beyond 13% after being at 12.4% one month earlier, this could tell us that technical issues are creeping into the application.

Another way that the Sonar API can be used is from a programming language such as Java. The following Java code will pull the same information through the Java API client:

import org.sonar.wsclient.Sonar;
import org.sonar.wsclient.services.Resource;
import org.sonar.wsclient.services.ResourceQuery;

// Fetch the technical_debt_ratio metric for resource 248390 (the AS3 Core Lib project above).
Sonar sonar = Sonar.create("http://nemo.sonarsource.org");
Resource commons = sonar.find(ResourceQuery.createForMetrics("248390",
        "technical_debt_ratio"));
System.out.println("Technical Debt Ratio: " +
        commons.getMeasure("technical_debt_ratio").getFormattedValue());

This will print “Technical Debt Ratio: 12.4%” to the console from a Java application. Once we are able to capture these metrics we could save them as data to trend in our automated promotion scripts that deploy builds in downstream environments. Some guidelines we have used in the past for these types of metrics are:

  • Small changes in a metric’s trend do not call for immediate action
  • No more than 3 metrics should be trended (the typical 3 I watch for Java projects are duplication, class complexity, and technical debt)
  • The development team should decide what reasonable guidelines are for indicating problems in the trends (such as technical debt +/- .5%)

In the automated deployment scripts, these trends can be used to stop deployment of the next build that passed all of its tests, and emails can be sent to the development team identifying the metric culprit. From there, teams are able to enter the Sonar dashboard and drill down into the metric to see where the software debt is creeping in. Also, a source control diff can be produced and included in the email, showing which files were changed between the successful builds that made the trend go haywire. This might be a listing per build, with the metric variations for each.
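For illustration, here is a minimal sketch of such a gate, building on the wsclient calls shown earlier. The stored previous value, the 0.5 percentage-point threshold (from the guidelines above), and the console output standing in for the email alert are all placeholders for whatever your promotion scripts actually use:

import org.sonar.wsclient.Sonar;
import org.sonar.wsclient.services.Resource;
import org.sonar.wsclient.services.ResourceQuery;

public class DebtTrendGate {
    // Per the guideline above: react to moves of more than 0.5 percentage points.
    private static final double MAX_DEBT_DELTA = 0.5;

    public static void main(String[] args) {
        double previousRatio = 12.4; // stand-in for the value stored at the last promotion

        Sonar sonar = Sonar.create("http://nemo.sonarsource.org");
        Resource project = sonar.find(ResourceQuery.createForMetrics("248390",
                "technical_debt_ratio"));
        double currentRatio = project.getMeasure("technical_debt_ratio").getValue();

        if (currentRatio - previousRatio > MAX_DEBT_DELTA) {
            // Stop the promotion; emailing the team is left as a stub.
            System.out.printf("Blocking promotion: debt ratio rose from %.1f%% to %.1f%%%n",
                    previousRatio, currentRatio);
        } else {
            System.out.println("Debt trend within guidelines; promoting the build.");
        }
    }
}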

This is a deep topic that this post just barely introduces. If your organization has a separate configuration management or operations group that manages environment promotions beyond the development environment, Sonar and the web services API can help further automate early identification of software debt in your applications before it pollutes downstream environments.”

Thank you, Chris!

Why Spend the Afternoon as well on Technical Debt?


Source: http://www.flickr.com/photos/pinksherbet/233228813/

Yesterday’s post Why Spend a Whole Morning on Technical Debt? listed eight characteristics of the technical debt metric that will be discussed during the morning of October 27, when Jim Highsmith and I deliver our joint Cutter Summit seminar. This post adds to the previous one by suggesting a related topic for the afternoon.

No, I am not trying to “hijack” the Summit agenda by messing with the afternoon sessions of my colleagues Claude Baudoin and Mitchell Ummel. I am simply pointing out a corollary to the morning seminar that might be on your mind in the afternoon. Needless to say, thinking about it in the afternoon of the 28th instead of the afternoon of the 27th is quite appropriate…

Yesterday’s post concluded with a “what it all means” statement, as follows:

Technical debt is a meaningful metric at any level of your organization and for any department in it. Moreover, it is applicable to any business process that is not yet taking software quality into account.

If you accept this premise, you can use the technical debt metric to construct boundary objects between various departments in your company/organization. The metric could serve as the heart of boundary objects between dev and IT ops, between dev and customer support, between dev and a company to which some development tasks are outsourced, etc. The point is to enable working agreements between multiple stakeholders through the technical debt metric. For example, dev and IT ops might mutually agree that the technical debt in the code to be deployed to the production environment will be less than $3 per line of code. Or, dev and customer support might agree that enhanced refactoring will commence if the code decays over time to more than $4 per line of code.
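Working agreements of this kind are concrete enough to automate. As a minimal sketch (the class and the figures are hypothetical; the $3 threshold follows the dev/IT ops example above):

public class WorkingAgreement {
    // Threshold agreed between dev and IT ops: deployable code must carry
    // less than $3 of technical debt per line of code.
    static final double MAX_DEBT_PER_LOC = 3.00;

    static boolean deployableToProduction(double monetizedDebt, long linesOfCode) {
        return monetizedDebt / linesOfCode < MAX_DEBT_PER_LOC;
    }

    public static void main(String[] args) {
        // Example: $1.1M of monetized technical debt across 500,000 lines of code.
        System.out.println(deployableToProduction(1100000.0, 500000)
                ? "Within the agreement: deploy."
                : "Boundary breached: hold deployment and renegotiate.");
    }
}

The same check, with a different threshold, could drive the enhanced-refactoring agreement between dev and customer support.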

You can align various departments by using the technical debt metric. This alignment is particularly important when the operational balance between departments has been disrupted. For example, your developers might be coding faster than your ITIL change managers can process the change requests.

A lot more on the use of the technical debt metric to mitigate cross-organizational dysfunctions, including some Outmodel aspects, will be covered in our seminar in Cambridge, MA on the morning of the 27th. We look forward to discussing this intriguing subject with you there!

Israel

Why Spend a Whole Morning on Technical Debt?


In a little over a month Jim Highsmith and I will deliver our joint seminar on technical debt at the Cutter Summit. Here are eight characteristics of the technical debt metric that make it clear why you should spend 3.5 precious hours on the topic:

  1. The technical debt metric shifts the emphasis in software development from proficiency in the software process to the output of the process.
  2. It changes the playing fields from qualitative assessment to quantitative measurement of the quality of the software.
  3. It is an effective antidote to the relentless function/feature pressure.
  4. It can be used with any software method, not “just” Agile.
  5. It is applicable to any amount of code.
  6. It can be applied at any point in time in the software life-cycle.
  7. These six characteristics of the technical debt metric enable effective governance of the software process.
  8. The above characteristics of the technical debt metric enable effective governance of the software product portfolio.

In the aggregate, these eight characteristics establish the technical debt metric as a ‘universal source of truth.’ It is a meaningful metric at any level of your organization and for any department in it. Moreover, it is applicable to any business process that is not yet taking software quality into account.

Jim and I look forward to meeting you at the summit and interacting with you in the technical debt seminar!

Written by israelgat

September 22, 2010 at 7:32 am

A Devops Case Study


An outline of my forthcoming Agile 2010 workshop was given in the post “A Recipe for Handling Cultural Conflicts in Devops and Beyond” earlier this week. Here is the case study around which the workshop is structured:

NotHere, Inc. Case Study

NotHere, Inc. is a $500M company based in Jerusalem, Israel. The company developed an eCommerce platform for small to medium retailers. Through a combination of this platform and its hosting data center, NotHere provides online storefronts, shopping carts, order processing, inventory, billing and marketing services to tens of thousands of retailers in a broad spectrum of verticals. For these retailers, NotHere is a one-stop “shop” for all their online needs. In particular, instead of partnering with multiple companies like Amazon, eBay, PayPal and Shopzilla, a retailer merely needs to partner with NotHere (which partners with these four companies and many others).

The small to medium retailers that use the good services of NotHere are critically dependent on the availability of its data center. For all practical purposes retailers are (temporarily) dead when the NotHere data center is not available. In recognition of the criticality of this aspect of its IT operations, NotHere invested a lot of effort in maturing its ITIL[i] processes. Its IT department successfully implements the ITIL service support and service delivery functions depicted in the figure below. From an operational perspective, an overall availability level of four nines is consistently attained. The company advertises this availability level as a major market differentiator.
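To put four nines in perspective: 99.99% availability allows at most 0.0001 × 365 × 24 × 60 ≈ 53 minutes of downtime per year, roughly one minute per week.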

In response to the accelerating pace in its marketplace, NotHere has been quite aggressive and successful in transitioning to Agile in product management, dev and test. Code quality, productivity and time-to-producing-code have been much improved over the past couple of years. The company measures those three metrics (quality, productivity, time-to-producing-code) regularly. The metrics feed into whole-hearted continuous improvement programs in product management, dev and test. They also serve as major components in evaluating the performance of the CTO and of the EVP of marketing.

NotHere has recently been struggling to reconcile velocity in development with availability in IT operations. Numerous attempts to turn speedy code development into fast service delivery have not been successful, on two counts:

  • Technical: Early attempts to turn Continuous Integration into Continuous Deployment created numerous “hiccups” in both availability and audit.
  • Cultural: Dev is a competence culture; ops is a control culture.

A lot of tension has arisen between dev and ops as a result of the cultural differences compounding the technical differences. The situation deteriorated big time when the “lagging behind” picture below leaked from dev circles to ops.

The CEO of the company is of the opinion NotHere must reach the stage of Delivery over Development. She is not too interested in departmental metrics like the time it takes to develop code or the time it takes to deploy it. From her perspective, overall time-to-delivery (of service to the retailers) is the only meaningful business metric.

To accomplish Delivery over Development, the CEO launched a “Making Cats Work with Dogs[ii]” project. She gave the picture above to the CTO and CIO, making it crystal clear that the picture represents the end-point with respect to the relationship she expects the two of them and their departments to reach. Specifically, the CEO asked the CTO and the CIO to convene their staffs so that each department will:

  • Document its Outmodel (in the sense explored in the “How We Do Things Around Here In Order to Succeed” workshop) of the other department.
  • Compile a list of requirements it would like to put on the other group “to get its act together.”

The CEO also indicated that she would convene and chair a meeting between the two departments. In this meeting she would like each department to present its two deliverables (its world view of the other department and the requirements to be put on it) and listen carefully to reflections and reactions from the other department. She expects the meeting to be the first step toward a mutual agreement between the two departments on how to speed up overall service delivery.


[i] “Information Technology Infrastructure Library – a set of concepts and practices for Information Technology Services Management (ITSM), Information Technology (IT) development and IT operations” [Wikipedia].

[ii] I am indebted to Patrick DeBois for suggesting this title.

© Copyright 2010 Israel Gat

Beautiful Quality


Figure 1: Agile Assessment – Quality (Source: QSMA)

Colleague and friend Michael Mah has kindly shared with me the figure above – a quality assessment for a sample of Agile projects in the QSMA metrics database of more than 8000 software projects. The two red squares in this figure represent the recent results Mah measured on two projects carried out by Quick Solutions (QSI) – a Westerville, OH company offering a broad range of IT services.

One to one-and-a-half standard deviations better than the mean might not seem like much to six sigma black belts. However, in the context of typical results in the software industry, the QSI results are outstanding. I have not done the exact math to determine whether those results are superior to 95%, 97% or 98% of the software projects in the QSMA database, as the exact figure hardly matters when you achieve this level of excellence.

I asked Bart Murphy – QSI’s Vice President of Delivery and Operations – for the ‘secret sauce.’ Here is the QSI recipe:

For the projects referenced in Michael’s evaluation, our primary focus was quality…  Our team was tasked not only with a significant Innovation effort; we also managed all aspects of supporting and stabilizing the application (production support, triage, infrastructure, database support, etc.).  Our ‘secret sauce’ was assembling a world-class team and executing our Agile methodology.  We organized the team efforts to focus on the three major initiatives: Innovation, Stabilization (Technical Debt), and Production Support.  We developed a release plan and coordinated efforts to deploy releases that resolved a significant number of defects, introduced market-differentiating features, and addressed a massive amount of technical debt.  The team was able to accomplish this without introducing additional defects into the production system.  Our success can be attributed to the commitment from the business to understand our Agile methodology and to be highly engaged throughout the project (co-located with the team).  In addition, quality is integrated into our process through the use of Test-Driven Development, Continuous Integration, Test Automation, Quality Assurance, and Show & Tells.  Lastly, this would not have been possible without the co-location of the entire team, given the significant issues and time constraints for delivery.

Bart also provided me with a table ‘à la Capers Jones’ in which he elaborates on the factors that helped QSI achieve these results and those that stood in the way. I will publish and discuss Bart’s table in a forthcoming post. As part of that post I will also compare the factors identified by Bart with those reported by Capers Jones.