Archive for September 2010
The Gat/Highsmith Joint Seminar on Technical Debt and Software Governance
Jim and I have finalized the content and the format for our forthcoming Cutter Summit seminar. The seminar is structured around a case study that includes four exercises. We expect the case study and exercises to take close to two-thirds of the allotted time (the morning of October 27). In the remaining third we will provide the theory and practices to be used in the seminar exercises and, hopefully, in the many future technical debt engagements that workshop participants will oversee.
The seminar does not require deep technical knowledge. It targets participants who possess a conceptual grasp of software development, software governance, and IT operations/ITIL. If you feel like reading a little about technical debt prior to the Summit, the various posts on technical debt in this blog will be more than sufficient.
We plan to go with the following agenda (still subject to some minor tweaking):
Agenda for the October 27, 9:30AM to 1:00PM Technical Debt Seminar
- Setting the Stage: Why Technical Debt is a Strategic Issue
- Part I: What is Technical Debt?
- Part II: Case Study – NotMyCompany, Inc.
- Exercise #1 – Modernizing NotMyCompany’s Legacy Code
- Part III: The Nature of Technical Debt
- Part IV: Unified Governance
- Exercise #2 – The Acquisition of SocialAreUs by NotMyCompany
- Part V: Process Control Models
- Exercise #3 – How Often Should NotMyCompany Stop the Line?
- (Time permitting) Part VI: Using Technical Debt in DevOps
- (Time permitting) Exercise #4 – The Agile Versus ITIL Debate at NotMyCompany
By the end of the seminar you will know how to effectively apply technical debt techniques as an integral part of software governance that is anchored in business realities and imperatives.
What 108M Lines of Code Do Not Tell Us
Source: Nemo
Coming on the heels of Gartner’s research note projecting $1 trillion in IT Debt by 2015, CAST’s study provided a more granular view of the debt, estimating an average of over $1 million in technical debt per application in a sample of 288 applications. Between these two studies, the situation examined at the micro-level seems to be quite consistent with the state of affairs estimated and projected at the macro-level.
My hunch is that the gravity of the situation from a software quality and maintenance perspective is actually masked by the efforts of IT staffs to compensate for programming problems through operational excellence. For example, carefully staged deployment and quick rollback often enable coping with defects that could, and should, have been handled through higher test coverage, lower complexity, or a more acceptable level of code duplication.
Part of the reason that the masking effects of IT staffs are not always fully appreciated is that they are embedded in the business design of IT outsourcing companies. The company to which you outsourced your IT is ‘making a bet’ that it can run your IT better than you can. It often succeeds in doing so. The unresolved defects in your old code, plus those that accumulated over time through software decay, have not necessarily been fixed. Rather, the manifestations of these defects are handled operationally in a more efficient manner.
Think again if your visceral reaction to the technical debt situation described in the Gartner research note and the CAST study is of the “This can’t possibly be true” variety. It is what it is – just take a quick look at Nemo to see representative technical debt data with your own eyes. And, as indicated in this post, it might be even worse than it looks. As Gartner puts it:
The results of such [IT Debt] an assessment will be, at best, unsettling and, at worst, truly shocking.
What 108M Lines of Code Tell Us
Results of the first annual report on application quality have just been released by CAST. The company analyzed 108M lines of code in 288 applications from 75 companies in various industries. In addition to the ‘usual suspects’ – COBOL, C/C++, Java, .NET – CAST included Oracle 4GL and ABAP in the report.
The CAST report is quite important in shedding light on the code itself. As explained in various posts in this blog, this shift of attention from the process to its output is of paramount importance. Proficiency in the software process is a bit elusive; the ‘proof of the pudding’ is in the output of the software process. The ability to measure code quality enables effective governance of the software process. Moreover, Statistical Process Control methods can be applied to samples of technical debt readings. Such application is most helpful in striking a good balance in ‘stopping the line’ – neither too frequently nor too rarely. The sketch below illustrates the idea.
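To make the ‘stop the line’ idea concrete, here is a minimal Python sketch of applying three-sigma control limits to periodic technical debt readings. The weekly debt-per-line readings and the three-sigma rule are illustrative assumptions, not figures from the CAST study.

```python
# A minimal sketch of an X-bar style control chart over technical debt
# readings. The weekly debt-per-line readings below are hypothetical.

def control_limits(readings, sigmas=3.0):
    """Return (mean, lower, upper) control limits for a list of readings."""
    n = len(readings)
    mean = sum(readings) / n
    std = (sum((r - mean) ** 2 for r in readings) / (n - 1)) ** 0.5
    return mean, mean - sigmas * std, mean + sigmas * std

history = [2.70, 2.78, 2.75, 2.81, 2.74, 2.79, 2.77]  # $/line, one per weekly scan
mean, low, high = control_limits(history)

latest = 3.10  # newest reading
if not (low <= latest <= high):
    print(f"Stop the line: {latest:.2f} $/line is outside [{low:.2f}, {high:.2f}]")
```

A reading that drifts outside the control limits signals a systemic change in code quality worth stopping for; readings inside the limits are ordinary variation that does not warrant interrupting the team.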
According to CAST’s report, the average technical debt per line of code across all applications is $2.82. This figure, depressing as it might be, is reasonably consistent with a quick eyeballing of Nemo. It is somewhat lower than the average technical debt figure reported recently by Cutter for a sample of the Cassandra code. (The difference is probably attributable to the difference in sample sizes between the two studies.) What the data means is that the average business application in the CAST study is saddled with over $1M in technical debt!
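A quick back-of-the-envelope check ties the headline figures together: 108M lines across 288 applications is roughly 375,000 lines per application, and at $2.82 per line that comes to just over $1M per application.

```python
# Back-of-the-envelope check of the CAST figures quoted above.
total_loc = 108_000_000   # lines of code analyzed
applications = 288        # applications in the sample
debt_per_loc = 2.82       # dollars per line, per the CAST report

avg_loc = total_loc / applications   # ~375,000 lines per application
avg_debt = avg_loc * debt_per_loc    # ~$1.06M per application
print(f"{avg_loc:,.0f} lines/app -> ${avg_debt:,.0f} technical debt/app")
```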
An intriguing finding in the CAST report is the impact of size on the quality of COBOL applications. This finding is demonstrated in Figure 1. It has been quite a while since I last saw such a dramatic demonstration of the correlation between size and quality (again, for COBOL applications in the CAST study).
Source: First Annual CAST Worldwide Application Software Quality Study – 2010
One other intriguing finding in the CAST study is that “application in government sector show poor changeability.” CAST hypothesizes that the poor changeability might be due to a higher level of outsourcing in the government sector compared to the private sector. As pointed out by Amy Thorne in a recent comment posted on The Agile Executive, it might also be attributable to the incentive system:
… since external developers often don’t maintain the code they write, they don’t have incentives to write code that is low in technical debt…
Congratulations to Vincent Delaroche, Dr. Bill Curtis, Lev Lesokhin and the rest of the CAST team. We as an industry need more studies like this!
Why Spend the Afternoon as well on Technical Debt?
Source: http://www.flickr.com/photos/pinksherbet/233228813/
Yesterday’s post Why Spend a Whole Morning on Technical Debt? listed eight characteristics of the technical debt metric that will be discussed during the morning of October 27, when Jim Highsmith and I deliver our joint Cutter Summit seminar. This post adds to the previous one by suggesting a related topic for the afternoon.
No, I am not trying to “hijack” the Summit agenda by messing with the afternoon sessions led by my colleagues Claude Baudoin and Mitchell Ummel. I am simply pointing out a corollary to the morning seminar that might be on your mind in the afternoon. Needless to say, thinking about it in the afternoon of the 28th instead of the afternoon of the 27th is quite appropriate…
Yesterday’s post concluded with a “what it all means” statement, as follows:
Technical debt is a meaningful metric at any level of your organization and for any department in it. Moreover, it is applicable to any business process that is not yet taking software quality into account.
If you accept this premise, you can use the technical debt metric to construct boundary objects between the various departments in your company/organization. The metric could serve as the heart of boundary objects between dev and IT ops, between dev and customer support, between dev and a company to which some development tasks are outsourced, and so on. The point is to enable working agreements between multiple stakeholders through the technical debt metric. For example, dev and IT ops might mutually agree that the technical debt in code deployed to the production environment will be less than $3 per line of code. Or, dev and customer support might agree that enhanced refactoring will commence if the code decays over time to more than $4 per line of code. A sketch of such a working agreement in executable form appears below.
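As a sketch of what such a working agreement might look like when automated, here is a hypothetical gate. The read_debt_per_line hook and the dollar thresholds are assumptions drawn from the examples above, not a real tool’s API.

```python
# A hedged sketch of a deployment gate built on agreed debt thresholds.
# read_debt_per_line() is a hypothetical hook into whatever code-analysis
# tool produces the debt figure; the dollar limits are the example working
# agreements from the paragraph above.

DEPLOY_LIMIT = 3.00      # $/line: dev and IT ops agreement
REFACTOR_TRIGGER = 4.00  # $/line: dev and customer support agreement

def read_debt_per_line() -> float:
    # Placeholder: in practice this would query the analysis tool.
    return 2.85

debt = read_debt_per_line()
if debt > REFACTOR_TRIGGER:
    print(f"{debt:.2f} $/line: enhanced refactoring commences")
elif debt >= DEPLOY_LIMIT:
    print(f"{debt:.2f} $/line: deployment to production is blocked")
else:
    print(f"{debt:.2f} $/line: within the working agreement")
```

The value of such a gate is less in the automation itself than in forcing the two departments to negotiate, and then honor, an explicit quantitative contract.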
You can align various departments by using the technical debt metric. This alignment is particularly important when the operational balance between departments has been disrupted. For example, your developers might be coding faster than your ITIL change managers can process the change requests.
A lot more on the use of the technical debt metric to mitigate cross-organizational dysfunctions, including some Outmodel aspects, will be covered in our seminar in Cambridge, MA on the morning of the 27th. We look forward to discussing this intriguing subject with you there!
Israel
Why Spend a Whole Morning on Technical Debt?
In a little over a month Jim Highsmith and I will deliver our joint seminar on technical debt at the Cutter Summit. Here are eight characteristics of the technical debt metric that make it clear why you should spend 3.5 precious hours on the topic:
- The technical debt metric shifts the emphasis in software development from proficiency in the software process to the output of the process.
- It changes the playing field from qualitative assessment to quantitative measurement of the quality of the software.
- It is an effective antidote to the relentless function/feature pressure.
- It can be used with any software method, not “just” Agile.
- It is applicable to any amount of code.
- It can be applied at any point in time in the software life-cycle.
- These six characteristics of the technical debt metric enable effective governance of the software process.
- The above characteristics of the technical debt metric enable effective governance of the software product portfolio.
Taken together, the eight characteristics amount to the technical debt metric serving as a ‘universal source of truth.’ It is a meaningful metric at any level of your organization and for any department in it. Moreover, it is applicable to any business process that is not yet taking software quality into account.
Jim and I look forward to meeting you at the summit and interacting with you in the technical debt seminar!
Making code reviews not suck
Not all Agile teams practice rigorous code reviews, but one of the original Agile practices (pair programming, sort of long forgotten, it seems) was all about code review. As such, I thought I’d cross-post these two videos going over what one RedMonk client, SmartBear, does to help make code reviews suck less. –Coté
Yesterday SmartBear released version 6.0 of CodeCollaborator, the popular code review tool. They’ve added numerous features, of course, with highlights like handling assets such as Word documents as items to review, enhancements to the Eclipse plug-in, and integration with Visual Studio. Check out the 6.0 feature list for more details.
I discussed the new features with them in the below interview and then got a quick demo:
New features interview
You can download the video directly as well.
Also, a full transcript of the video:
Michael Coté: Well, hello everybody! Here we are in lovely Austin, Texas, at what we have dubbed the SmartBear studio. This is Michael Coté, of course, of RedMonk. And today I am joined by a guest to go over a new release that SmartBear has out. You want to introduce yourself?
Gregg Sporar: Thank you, Michael. Yes, my name is Gregg Sporar. I have a face made for radio, but yet we are going to record this on video.
Actually, that was the great thing about doing the podcast, because it’s a podcast and it’s just my voice.
Michael Coté: …[the podcast] on code reviewing.
Gregg Sporar: But then, you took this giant picture of me and put that on the blog, and I am thinking, dude, more of a Gravatar. We don’t need — here we are. We are talking about CodeCollaborator v6.0.
Michael Coté: That’s right. Just to give us like a really quick introduction, like what does CodeCollaborator do for people who don’t know off the top of their head?
Gregg Sporar: CodeCollaborator’s sole goal in life is to automate the grunt-work parts of the peer code review process: collecting the files and making them available in a central location, coordinating the communication between the review participants, tracking what everybody says and where defects are found and that kind of thing, and then reporting the statistics at the end.
Michael Coté: What release number is this one?
Gregg Sporar: We are talking about version 6.0.
Michael Coté: So, in 6.0, tell us what the new features are. What’s going to get people excited about this release?
Gregg Sporar: There are several things. Let’s back up for just a second to version 5.0 last year, when we added support for reviewing materials other than just source files. The reason for that was, because a lot of our customers, you are dealing with a software development team, but there are other people on the periphery as well, that might want to be involved in the review. If you’re building embedded software, it might be that the firmware guys or the hardware design guy, he wants to be involved and he wants to see his schematic in that.
Michael Coté: Right.
Gregg Sporar: Then we also had just regular software development teams coming to us saying, well, this is great, but I would like to add the design document to the review and look at it and reference it from within the same tool.
So last year we added support for PDF files, for example, and for image files, for JPEGs, and PNGs, and GIFs, and that kind of thing.
Michael Coté: And that allows you to add in all the commentary and the usual meta information on a piece of code?
Gregg Sporar: Exactly! So whenever I would show the PDF feature to people, they would say, well, that’s great, but I would really like to do this with a Word document.
Michael Coté: Sure.
Gregg Sporar: So there is a plug-in that Microsoft makes available to create a PDF off of a Word document, but people don’t want to do that. They just want to take their document and put it in place. So that’s one of the key features in 6.0.
The way we actually implemented that is kind of interesting. We built a Windows printer driver, because, again, at the end of the day, we just need to be able to render something and paginate it and put it into the tool. The best way to do that really in that environment is a printer driver. So it’s not just Word, it’s not just Microsoft Office apps, it’s any Windows app that can render paginated output.
Other major features: a significant enhancement to our Eclipse plug-in, which we have had for a few years now. In the past it was limited to just being able to create a review or add materials to a review; now we have actually brought the entire review experience directly into the IDE.
Another thing that we have gotten a lot of requests for is Visual Studio support. Not everybody uses Eclipse, believe it or not. So what we have done is sort of an initial entrée into the Visual Studio world; we have essentially got functionality equivalent to what we had in our old Eclipse plug-in.
So again, I don’t have that ability to bring the full review experience in, like we have now done with the Eclipse plug-in, but it’s a start. It gets you partway there.
Again, to let you stay within your working context, to at least be able to create the review or add materials to that.
Then there are what I call Red Meat features. Not RedMonk, Red Meat. This is the real type stuff, because these are features that you don’t have to be an Eclipse user or Visual Studio user, this is something that’s going to affect everybody.
This is, for example, one of the things that a lot of people have asked for, for a long time: the ability to delete a comment after you put it into the tool. We are not actually going to allow you to delete; we are going to allow you to redact the comment.
Michael Coté: Right.
Gregg Sporar: I will show that to you during the demo. The reason for that is, we can’t really completely remove it, because it would break the IM-type paradigm for our real-time chat capability. I mean, think about it, if you were IMing with somebody —
Michael Coté: And it just disappeared.
Gregg Sporar: And it just disappeared while you were reading it, that would kind of freak you out. That breaks that paradigm.
Michael Coté: It’s kind of like that email recall feature, which is a little strange in its own right.
Gregg Sporar: Which is a little strange in its own right. Then the other issue of course is, we have a lot of customers who, auditability, traceability, that kind of thing, is really important. Nothing can ever be deleted.
Michael Coté: Sure.
Gregg Sporar: So we are going to allow you to redact a comment. We have changed the way that we display defects within the file comparison window. We have added, again, some of these usability type features that affect everybody.
We have made some enhancements to our ClearCase Integration. We have also put in some pretty important enhancements to our integration at the other end of the spectrum to Git and Mercurial.
Michael Coté: And how many version control systems do you guys work with now?
Gregg Sporar: 16.
Michael Coté: 16? That’s pretty nice.
Gregg Sporar: That’s a rather large number.
Michael Coté: That’s probably more than most people could name off.
Gregg Sporar: So a fun drinking game is to get a couple of beers in me and then try to get me to list the 16 in reverse alphabetical order, because I can do it in alphabetical order, but reverse alphabetical order is a little more difficult.
One last [thing], maybe, to mention really quickly. This is again something that you can’t appreciate unless you are an existing user of the product: we have significantly enhanced some of our reporting capabilities. That’s kind of that third pillar of what it is we do.
For the customizable reports, where the user can build their own query, we have added some additional fields that they can now filter on and select and that kind of thing.
Then we put a lot of effort into adding what we call user-oriented reports. We have always had review-oriented reports and defect-oriented reports, where the primary key is information about the review overall, or is, tell me about the defects that were found. Now we have added a third category, which is: tell me what I have been doing.
Michael Coté: I always think of that as self-micromanagement. Sort of optimize your own self.
Gregg Sporar: So one of the features is the ability for me to come in after the fact and find out, well, what reviews did I work on during the last week? I had a guy at a customer site explain to me why he wanted this feature: when he is doing his weekly status report, he knows he typically spends about 10-20% of his time during a week doing code review. Well, he wants to know what reviews he did.
Michael Coté: Yeah.
Gregg Sporar: So again, user-oriented reporting.
Michael Coté: Well, great! Well, let’s check out a demo of those features and see what we get to see.
Demo
You can download the video directly as well.
There’s also a nice wrap-up of posts detailing features on the SmartBear blog.