The Agile Executive

Making Agile Work

Archive for September 2010

The Gat/Highsmith Joint Seminar on Technical Debt and Software Governance

leave a comment »

Jim and I have finalized the content and the format for our forthcoming Cutter Summit seminar. The seminar is structured around a case study which includes four exercises. We expect the case study/exercises will take close to two-thirds of the allotted time (the morning of October 27). In the remaining third we will provide the theory and practices to be used in the seminar exercises and (hopefully) in the many future technical debt engagements that workshop participants will oversee.

The seminar does not require deep technical knowledge. It targets participants who possess a conceptual grasp of software development, software governance and IT operations/ITIL. If you feel like reading a little about technical debt prior to the Summit, the various posts on technical debt in this blog will be more than sufficient.

We plan to go with the following agenda (still subject to some minor tweaking):

Agenda for the October 27, 9:30AM to 1:00PM Technical Debt Seminar

  • Setting the Stage: Why Technical Debt is a Strategic Issue
  • Part I: What is Technical Debt?
  • Part II : Case Study – NotMyCompany, Inc.
    • Exercise #1 – Modernizing NotMyCompany’s Legacy Code
  • Part III: The Nature of Technical Debt
  • Part IV: Unified Governance
    • Exercise #2 – The acquisition of SocialAreUs by NotMyCompany
  • Part V: Process Control Models
    • Exercise #3 – How Often Should NotMyCompany Stop the Line?
  • (Time Permitting – Part VI: Using Technical Debt in DevOps
    • Exercise #4 – The Agile Versus ITIL Debate at NotMyCompany)

By the end of the seminar you will know how to effectively apply technical debt techniques as an integral part of software governance that is anchored in business realities and imperatives.

Written by israelgat

September 30, 2010 at 3:20 pm

What 108M Lines of Code do not Tell Us

leave a comment »

Source: Nemo

Coming on the heels of Gartner’s research note projecting $1 trillion in IT Debt by 2015, CAST’s study provided a more granular view of the debt, estimating an average of over $1 million in technical debt per application in a sample of 288 applications. Between these two studies, the situation examined at the micro-level seems to be quite consistent with the state of affairs estimated and projected at the macro-level.

My hunch is that the gravity of the situation from a software quality and maintenance perspective is actually masked by the efforts of IT staffs to compensate for programming problems through operational excellence. For example, carefully staged deployments and quick rollbacks often enable coping with defects that could/should have been handled through higher test coverage, lower complexity or a more acceptable level of code duplication.

Part of the reason that the masking effects of IT staffs are not always fully appreciated is that they are embedded in the business design of IT outsourcing companies. The company to which you outsourced your IT is ‘making a bet’ it can run your IT better than you can. It often succeeds in so doing. The unresolved defects in your old code, plus those that evolved over time through software decay, have not necessarily been fixed. Rather, the manifestations of these defects are handled operationally in a more efficient manner.

Think again if your visceral reaction to the technical debt situation described in the Gartner research note and the CAST study is of the “This can’t possibly be true” variety. It is what it is – just take a quick look at Nemo to see representative technical debt data with your own eyes. And, as indicated in this post, it might be even worse than it looks. As Gartner puts it:

The results of such [IT Debt] an assessment will be, at best, unsettling and, at worst, truly shocking.

What 108M Lines of Code Tell Us

with 16 comments

Results of the first annual report on application quality have just been released by CAST. The company analyzed 108M lines of code in 288 applications from 75 companies in various industries. In addition to the ‘usual suspects’ – COBOL, C/C++, Java, .NET – CAST included Oracle 4GL and ABAP in the report.

The CAST report is quite important in shedding light on the code itself. As explained in various posts in this blog, this transition from the process to its output is of paramount importance. Proficiency in the software process is a bit elusive. The ‘proof of the pudding’ is in the output of the software process. The ability to measure code quality enables effective governance of the software process. Moreover, Statistical Process Control methods can be applied to samples of technical debt readings. Such application is most helpful in striking a good balance in ‘stopping the line’ – neither too frequently nor too rarely.
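By way of illustration, here is a minimal sketch in Java of such an application of control limits, assuming periodic technical-debt-per-line readings are available; the class, the three-sigma rule and all figures below are illustrative assumptions rather than anything prescribed by the CAST study:

import java.util.List;

// Illustrative sketch: Shewhart-style control limits applied to periodic
// technical debt readings (e.g., dollars per line of code, sampled per build).
public class DebtControlChart {

    static double mean(List<Double> samples) {
        double sum = 0.0;
        for (double s : samples) sum += s;
        return sum / samples.size();
    }

    static double stdDev(List<Double> samples) {
        double m = mean(samples);
        double sumSq = 0.0;
        for (double s : samples) sumSq += (s - m) * (s - m);
        return Math.sqrt(sumSq / samples.size());
    }

    // "Stop the line" only when a new reading falls outside the three-sigma
    // control limits -- neither too frequently nor too rarely.
    static boolean shouldStopTheLine(List<Double> history, double newReading) {
        double m = mean(history);
        double threeSigma = 3 * stdDev(history);
        return newReading > m + threeSigma || newReading < m - threeSigma;
    }

    public static void main(String[] args) {
        List<Double> history = List.of(2.75, 2.80, 2.82, 2.79, 2.84);
        System.out.println(shouldStopTheLine(history, 2.83)); // false: common-cause variation
        System.out.println(shouldStopTheLine(history, 3.40)); // true: special-cause signal
    }
}

The point of the three-sigma limits is precisely the balance mentioned above: readings inside the limits are common-cause variation and do not justify stopping the line, while readings outside them signal a special cause worth investigating.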

According to CAST’s report, the average technical debt per line of code across all applications is $2.82. Depressing as this figure might be, it is reasonably consistent with a quick eyeballing of Nemo. The figure is somewhat lower than the average technical debt figure reported recently by Cutter for a sample of the Cassandra code. (The difference is probably attributable to the differences in sample sizes between the two studies.) What the data means is that the average business application in the CAST study (375,000 lines of code, given 108M lines spread across 288 applications) is saddled with over $1M in technical debt!

An intriguing finding in the CAST report is the impact of size on the quality of COBOL applications.  This finding is demonstrated in Figure 1. It has been quite a while since I last saw such a dramatic demonstration of the correlation between size and quality (again, for COBOL applications in the CAST study).

Source: First Annual CAST Worldwide Application Software Quality Study – 2010

Another intriguing finding in the CAST study is that “application in government sector show poor changeability.” CAST hypothesizes that the poor changeability might be due to a higher level of outsourcing in the government sector compared to the private sector. As pointed out by Amy Thorne in a recent comment posted in The Agile Executive, it might also be attributable to the incentive system:

… since external developers often don’t maintain the code they write, they don’t have incentives to write code that is low in technical debt…

Congratulations to Vincent Delaroche, Dr. Bill Curtis, Lev Lesokhin and the rest of the CAST team. We as an industry need more studies like this!

The Supply Side of the Consumerization of Enterprise Software

with 10 comments

Source: http://www.flickr.com/photos/bertboerland/2944895894/

In my recent post about the consumerization of enterprise software I discussed two factors that are likely to accelerate the pace toward such consumerization:

  1. Any department/business unit that can get a service in entirety from an outside source is likely to do so without worrying about enterprise software and/or data center considerations. This is already happening in Marketing. As other functions start doing so, more and more links in the value chain of enterprise software will be “consumerized.” In other words, these services will be carried out without the involvement of the IT department.
  2. Once the switch-over costs from legacy code to state-of-the-art code are less than the steady state costs (to maintain and update legacy code), the “consumerization” of enterprise software is going to happen with ferocious urgency.

In this post I would like to add a third factor – the buying pattern. My contention is that the buying pattern for micro-apps will spread to enterprise applications. The potential demand for buying in this way is huge. The supply of enterprise software as micro-apps is not quite there yet, but it would take only one smart vendor to start transforming the traditional pattern of how enterprise software is chunked, offered and sold.

Think about your recent experience downloading an application to your smart mobile phone. You did not go through a six-month evaluation period; you did not do a comprehensive competitive analysis; you did not check how well the seller does customer support in Sumatra. You simply paid something like $7.99 and downloaded the application. You are more than happy if it fulfills your needs in a reasonable manner. If it does not, you simply buy another application with the functionality you desire. Maybe you are a little more cautious now and ask a friend or send an inquiry to your Twitter followers before you pick the new application. Whatever you might choose to do, the fundamental facts are: A) you can afford to lose $7.99; and, B) your time is more precious than the sunk cost of the application. You simply move on.

This buying pattern is not something that you are going to forget when you step into your office in the morning. It makes perfect sense to you and it would be good for your company. You would rather concentrate on your business than on the tricky language of clause number 734 in the contract that your department’s attorney prepared for licensing yet another piece of enterprise software.

The ‘$7.99 experience’ you and a zillion other folks like you have had over the past week or the past month makes enterprise software vendors extremely vulnerable. The “high-touch; high-margin; high-commitment” [1] business design is not sustainable once the purchase model changes. The expensive machinery of professional services, system engineering and customer support is not affordable in the face of competition that constructs modular chunks of enterprise software and sells them at a price the customer can afford to write off (if they do not perform to satisfaction). Maybe the ceiling in the enterprise to ‘forget about this application and move on’ is no higher than $1,000 (instead of ‘no higher than $7.99’ for the private citizen), but a smart vendor can still make a lot of money selling at one thousand dollars a pop to the enterprise.

The growing gap between “this lovely application on my iPhone” and the “headache of licensing traditional enterprise software” is an immense incentive for up-and-coming software vendors to use the ‘$7.99 experience’ as the heart of a new business design. This new business design can be simply summarized as “low-touch; low-margin; low commitment” [2]. And, yes, it is very disruptive to the incumbents…

My hunch is that the IT Service Management (ITSM) industry will be the first to crumble. The premise of “service delivery” sounds a little hollow in a cloud computing world characterized by “everything as a service” [3]. Would a buyer really be willing to pay for “service for the service” from a vendor who does not actually provide the underlying service?! It sounds like paying a Fidelity or a Vanguard investment manager to manage a portfolio of their own mutual funds for you…

All it takes for this shift to start – in ITSM or in another part of enterprise software – is one successful vendor.

Footnotes:

[1] I am indebted to Annie Shum for this phrase.

[2] Ibid.

[3] I am indebted to Russ Daniels for this phrase.

Technical Debt Assessment, Sterling Barton LLC and the Moussaka

with one comment

A few months ago Chris Sterling and I were carrying out a Cutter Technical Debt Assessment and Valuation engagement for a venture capitalist who was considering a certain company. We discovered various things in the code of this company. More noteworthy, my deep domain expertise led Chris to discover the great Greek dish Moussaka.

I have eaten a lot of good Moussakas over the years. Even against this solid gastronomic background I can’t forget how Chris’s eyes lit up when he took the first bite. It took him hardly any time to get on his iPhone and tweet about the culinary aspects of our engagement. I knew then it was going to be a very successful engagement…

The relationship with Chris has deepened since this episode. For example, in collaboration with Brent Barton, Chris contributed a great article to the forthcoming issue of the Cutter IT Journal on Technical Debt. In this article Chris and Brent demonstrate how technical debt techniques can be applied at the portfolio level. They make the reader step into the shoes of the project portfolio planner and walk him through their approach to enhancing the decision-making process by using the software debt dashboard.

Chris has just published an excellent post entitled “Using Sonar Metrics to Assess Promotion of Builds to Downstream Environments” in Getting Agile and was kind enough to suggest I cross-post it in The Agile Executive. Here it is (please note that the examples given below by Chris have nothing to do with the engagement described above):

“For those of you that don’t already know about Sonar you are missing an important tool in your quality assessment arsenal. Sonar is an open source tool that is a foundational platform to manage your software’s quality. The image below shows one of the main dashboard views that teams can use to get insights into their software’s health.

The dashboard provides rollup metrics out of the box for:

  • Duplication (probably the biggest Design Debt in many software projects)
  • Code coverage (amount of code touched by automated unit tests)
  • Rules compliance (identifies potential issues in the code such as security concerns)
  • Code complexity (an indicator of how easy the software will adapt to meet new needs)
  • Size of codebase (lines of code [LOC])

Before going into how to use these metrics to assess whether to promote builds to downstream environments, I want to preface the conversation with the following note:

Code analysis metrics should NOT be used to assess teams and are most useful when considering how they trend over time

Now that we have this important note out of the way and, of course, nobody will ever use these metrics for “evil”, let’s discuss pulling data from Sonar to automate assessments of builds for promotion to downstream environments. For those that are unfamiliar with automated promotion, here is a simple, happy example:

A development team makes some changes to the automated tests and implementation code on an application and checks their changes into source control. A continuous integration server finds out that source control artifacts have changed since the last time it ran a build cycle and updates its local artifacts to incorporate the most recent changes. The continuous integration server then runs the build by compiling, executing automated tests, running Sonar code analysis, and deploying the successful deployment artifact to a waiting environment usually called something like “DEV”. Once deployed, a set of automated acceptance tests are executed against the DEV environment to validate that basic aspects of the application are still working from a user perspective. Sometime after all of the acceptance tests pass successfully (this could be twice a day or some other timeline that works for those using downstream environments), the continuous integration server promotes the build from the DEV environment to a TEST environment. Once deployed, the application might be running alongside other dependent or sibling applications and integration tests are run to ensure successful deployment. There could be more downstream environments such as PERF (performance), STAGING, and finally PROD (production).

The tendency for many development teams and organizations is that if the tests pass then it is good enough to move into downstream environments. This is definitely an enormous improvement over extensive manual testing and stabilization periods on traditional projects. An issue that I have still seen is the slow introduction of software debt as an application is developed. Highly disciplined technical practices such as Test-Driven Design (TDD) and Pair Programming can help stave off extreme software debt but these practices are still not commonplace amongst software development organizations. This is usually due not to a lack of clarity about these practices, but rather to excessive schedule pressure, legacy code, and the initial hurdle of learning how to do these practices effectively. In the meantime, we need a way to assess the health of our software applications beyond passing tests, down in the internals of the code and tests themselves. Sonar can be easily added into your infrastructure to provide insights into the health of your code but we can go even beyond that.

The Sonar Web Services API is quite simple to work with. The easiest way to pull information from Sonar is to call a URL:

http://nemo.sonarsource.org/api/resources?resource=248390&metrics=technical_debt_ratio

This will return an XML response like the following:

<resources>
  <resource>
    <id>248390</id>
    <key>com.adobe:as3corelib</key>
    <name>AS3 Core Lib</name>
    <lname>AS3 Core Lib</lname>
    <scope>PRJ</scope>
    <qualifier>TRK</qualifier>
    <lang>flex</lang>
    <version>1.0</version>
    <date>2010-09-19T01:55:06+0000</date>
    <msr>
      <key>technical_debt_ratio</key>
      <val>12.4</val>
      <frmt_val>12.4%</frmt_val>
    </msr>
  </resource>
</resources>

Within this XML, there is a section called <msr> that includes the value of the metric we requested in the URL, “technical_debt_ratio”. The ratio of technical debt in this Flex codebase is 12.4%. Now with this information we can look for increases over time to identify technical debt earlier in the software development cycle. So, if the ratio were to increase beyond 13% after being at 12.4% one month earlier, this could tell us that technical issues are creeping into the application.

Another way that the Sonar API can be used is from a programming language such as Java. The following Java code will pull the same information through the Java API client:

import org.sonar.wsclient.Sonar;
import org.sonar.wsclient.services.Resource;
import org.sonar.wsclient.services.ResourceQuery;

// Query the same project and metric as the URL example above.
Sonar sonar = Sonar.create("http://nemo.sonarsource.org");
Resource commons = sonar.find(ResourceQuery.createForMetrics("248390",
        "technical_debt_ratio"));
System.out.println("Technical Debt Ratio: " +
        commons.getMeasure("technical_debt_ratio").getFormattedValue());

This will print “Technical Debt Ratio: 12.4%” to the console from a Java application. Once we are able to capture these metrics we could save them as data to trend in our automated promotion scripts that deploy builds in downstream environments. Some guidelines we have used in the past for these types of metrics are:

  • Small changes in a metric’s trend do not constitute immediate action
  • No more than 3 metrics should be trended (the typical 3 I watch for Java projects are duplication, class complexity, and technical debt)
  • The development team should decide what are reasonable guidelines for indicating problems in the trends (such as technical debt +/- .5%)

In the automated deployment scripts, these trends can be used to stop deployment of the next build that passed all of its tests, and emails can be sent to the development team regarding the metric culprit. From there, teams are able to enter the Sonar dashboard and drill down into the metric to see where the software debt is creeping in. Also, a source control diff can be produced and included in the email, showing what files were changed between the successful builds that made the trend go haywire. This might be a listing per build and the metric variations for each.
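As a minimal sketch of such a promotion gate, building on the Java client shown above (the 0.5 tolerance and the hard-coded previous reading are illustrative assumptions; in practice the previous reading would come from the stored trend data):

import org.sonar.wsclient.Sonar;
import org.sonar.wsclient.services.Resource;
import org.sonar.wsclient.services.ResourceQuery;

// Illustrative sketch: fail the promotion step when the technical debt
// ratio has drifted more than an agreed tolerance since the last promoted build.
public class PromotionGate {

    private static final double TOLERANCE = 0.5; // agreed max drift, in percentage points

    public static void main(String[] args) {
        Sonar sonar = Sonar.create("http://nemo.sonarsource.org");
        Resource project = sonar.find(ResourceQuery.createForMetrics("248390",
                "technical_debt_ratio"));
        double current = project.getMeasure("technical_debt_ratio").getValue();
        double lastPromoted = 12.4; // illustrative; read from stored trend data in practice

        if (current - lastPromoted > TOLERANCE) {
            System.out.println("Blocking promotion: debt ratio rose from "
                    + lastPromoted + "% to " + current + "%");
            System.exit(1); // non-zero exit stops the deployment script
        } else {
            System.out.println("Promoting build: debt ratio " + current + "% is within tolerance");
        }
    }
}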

This is a deep topic that this post just barely introduces. If your organization has a separate configuration management or operations group that manages environment promotions beyond the development environment, Sonar and the web services API can help further automate early identification of software debt in your applications before it pollutes downstream environments.”

Thank you, Chris!

Why Spend the Afternoon as well on Technical Debt?

with 2 comments

Source: http://www.flickr.com/photos/pinksherbet/233228813/

Yesterday’s post Why Spend a Whole Morning on Technical Debt? listed eight characteristics of the technical debt metric that will be discussed during the morning of October 27, when Jim Highsmith and I deliver our joint Cutter Summit seminar. This post adds to the previous one by suggesting a related topic for the afternoon.

No, I am not trying to “hijack” the Summit agenda by messing with the afternoon sessions by my colleagues Claude Baudoin and Mitchell Ummel. I am simply pointing out a corollary of the morning seminar that might be on your mind in the afternoon. Needless to say, thinking about it in the afternoon of the 28th instead of the afternoon of the 27th is quite appropriate…

Yesterday’s post concluded with a “what it all means” statement, as follows:

Technical debt is a meaningful metric at any level of your organization and for any department in it. Moreover, it is applicable to any business process that is not yet taking software quality into account.

If you accept this premise, you can use the technical debt metric to construct boundary objects between various departments in your company/organization. The metric could serve as the heart of boundary objects between dev and IT ops, between dev and customer support, between dev and a company to which some development tasks are outsourced, etc. The point is the enablement of working agreements between multiple stakeholders through the technical debt metric. For example, dev and IT ops might mutually agree that the technical debt in the code to be deployed to the production environment will be less than $3 per line of code. Or, dev and customer support might agree that enhanced refactoring will commence if the code decays over time to more than $4 per line of code.
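As a minimal sketch of how such a working agreement might be encoded as a checkable boundary object (in Java, using the illustrative dollar figures above; the class and all names here are assumptions, not an existing tool):

// Illustrative sketch: a technical debt working agreement between two
// departments, expressed as a checkable boundary object.
public class DebtWorkingAgreement {

    private final String partyA;
    private final String partyB;
    private final double maxDebtPerLine; // agreed ceiling, in dollars per line of code

    DebtWorkingAgreement(String partyA, String partyB, double maxDebtPerLine) {
        this.partyA = partyA;
        this.partyB = partyB;
        this.maxDebtPerLine = maxDebtPerLine;
    }

    // True when the measured debt honors the agreement.
    boolean isHonored(double measuredDebtPerLine) {
        return measuredDebtPerLine <= maxDebtPerLine;
    }

    public static void main(String[] args) {
        DebtWorkingAgreement deployGate =
                new DebtWorkingAgreement("Dev", "IT Ops", 3.00);
        DebtWorkingAgreement refactorTrigger =
                new DebtWorkingAgreement("Dev", "Customer Support", 4.00);

        double measured = 2.82; // e.g., a reading from a technical debt assessment
        System.out.println(deployGate.partyA + "/" + deployGate.partyB
                + " deploy gate honored? " + deployGate.isHonored(measured));
        System.out.println(refactorTrigger.partyA + "/" + refactorTrigger.partyB
                + " refactoring needed? " + !refactorTrigger.isHonored(measured));
    }
}

The value of encoding the agreement this way is that the threshold stops being a slideware number and becomes something a build or deployment script can actually enforce.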

You can align various departments by using the technical debt metric. This alignment is particularly important when the operational balance between departments has been disrupted. For example, your developers might be coding faster than your ITIL change managers can process the change requests.

A lot more on the use of the technical debt metric to mitigate cross-organizational dysfunctions, including some Outmodel aspects, will be covered in our seminar in Cambridge, MA on the morning of the 27th. We look forward to discussing this intriguing subject with you there!

Israel

Why Spend a Whole Morning on Technical Debt?

with one comment

In a little over a month Jim Highsmith and I will deliver our joint seminar on technical debt at the Cutter Summit. Here are eight characteristics of the technical debt metric that make it clear why you should spend 3.5 precious hours on the topic:

  1. The technical debt metric shifts the emphasis in software development from proficiency in the software process to the output of the process.
  2. It changes the playing fields from qualitative assessment to quantitative measurement of the quality of the software.
  3. It is an effective antidote to the relentless function/feature pressure.
  4. It can be used with any software method, not “just” Agile.
  5. It is applicable to any amount of code.
  6. It can be applied at any point in time in the software life-cycle.
  7. These six characteristics of the technical debt metric enable effective governance of the software process.
  8. The above characteristics of the technical debt metric enable effective governance of the software product portfolio.

The eight characteristics in the aggregate amount to the technical debt metric serving as a ‘universal source of truth.’ It is a meaningful metric at any level of your organization and for any department in it. Moreover, it is applicable to any business process that is not yet taking software quality into account.

Jim and I look forward to meeting you at the summit and interacting with you in the technical debt seminar!

Written by israelgat

September 22, 2010 at 7:32 am

How to Break the Vicious Cycle of Technical Debt

with 10 comments

The dire consequences of the pressure to quickly deliver more functions and features to the market have been described in detail in various posts in this blog (see, for example, Toxic Code). Relentless pressure forces the development team to take on technical debt. The very same pressure stands in the way of paying back the debt in a timely manner. The accrued technical debt reduces the velocity of the development team. Reduced development velocity leads to increased pressure to deliver, which leads to taking on additional technical debt, which… It is a vicious cycle that is extremely difficult to break.

Figure 1: The Vicious Cycle of Technical Debt

The post Using Credit Limits to Constrain “Development on Margin” proposed a way of coping with the vicious cycle of technical debt – placing a limit on the amount of technical debt a development team is allowed to accrue. Such a limit addresses the demand side of the software development process. Once a team reaches the pre-determined technical debt limit (such as $3 per line of code) it cannot continue piling on new functions and features. It must attend to reducing the technical debt.
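To see the dynamic of the cycle, and the way a credit limit breaks it, consider the following toy simulation in Java; every coefficient in it is a made-up assumption chosen only to show the shape of the cycle, not an empirical finding:

// Toy simulation of the vicious cycle: accrued debt drags velocity down,
// lost velocity adds pressure, and pressure adds new debt -- unless a
// credit limit forces the team to pay the debt down first.
public class ViciousCycle {

    public static void main(String[] args) {
        double debtPerLine = 1.0; // starting technical debt, $ per LOC (assumed)
        double creditLimit = 3.0; // agreed ceiling, $ per LOC (assumed)

        for (int sprint = 1; sprint <= 10; sprint++) {
            // Velocity erodes as debt accrues (toy linear model).
            double velocity = Math.max(0, 40 - 8 * debtPerLine);

            if (debtPerLine >= creditLimit) {
                // Over the limit: no new features; the sprint goes to paydown.
                debtPerLine -= 0.5;
                System.out.printf("Sprint %d: OVER LIMIT - paying down debt to $%.2f/LOC%n",
                        sprint, debtPerLine);
            } else {
                // Under the limit: pressure to deliver adds a little more debt.
                debtPerLine += 0.3;
                System.out.printf("Sprint %d: velocity %.0f, debt now $%.2f/LOC%n",
                        sprint, velocity, debtPerLine);
            }
        }
    }
}

Running it shows debt ratcheting up and velocity eroding sprint by sprint until the limit trips, at which point the team oscillates around the limit instead of spiraling further down.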

A complementary measure can be applied to the supply side of the software development process. For example, one can dynamically augment the team by drawing upon on-demand testing. uTest‘s recent announcement about securing Series C financing explains the rationale for the on-demand paradigm:

“The whole ‘appification’ of software platforms, whether it’s for social platforms like Facebook or mobile platforms like the iPhone or Android or Palm, or even just Web apps, creates a dramatically more complex user-testing matrix for software publishers, which could mean media companies, retailers, enterprise software companies,” says Wienbar. “Anybody who has to interact with consumers needs a service to help with that testing. You can’t cover that whole matrix with your in-house test team.”

Likewise, on-demand development can augment the development team whenever the capacity of the in-house team is insufficient to satisfy demand. IMHO it is only a matter of time until marketplaces for on-demand development evolve. All the necessary ‘ingredients’ for doing so – Agile, Cloud, Mobile and Social – are readily available. It is merely a matter of putting them together to offer on-demand development as a commercial service.

Whether you do on-demand testing, on-demand development or both, you will soon be able to address the supply side of software development in a flexible and cost-effective manner. Between curtailing demand through technical debt limits and expanding supply through on-demand testing/development, you will be better able to cope with the relentless pressure to deliver more, and faster, than the capacity of your team allows.

Making code reviews not suck

leave a comment »

Not all Agile teams practice strong code reviews, but one of the original Agile practices (sort of long forgotten, it seems), pair programming, was all about code review. As such, I thought I’d cross-post these two videos going over what one RedMonk client, SmartBear, does to help make code reviews suck less. –Coté

Yesterday SmartBear released version 6.0 of CodeCollaborator, the popular code reviewing tool. They’ve added numerous features, of course, with highlights like handling assets such as Word documents as items to review, enhancements to the Eclipse plugin, and integration with Visual Studio. Check out the 6.0 feature list for more details.

I discussed the new features with them in the below interview and then got a quick demo:

New features interview

You can download the video directly as well.

Also, a full transcript of the video:

Michael Coté: Well, hello everybody! Here we are in lovely Austin, Texas, at what we have dubbed the SmartBear studio. This is Michael Coté, of course, of RedMonk. And today I am joined by a guest to go over a new release that SmartBear has out. You want to introduce yourself?

Gregg Sporar: Thank you, Michael. Yes, my name is Gregg Sporar. I have a face made for radio, but yet we are going to record this on video.

Actually, that was the great thing about doing the podcast, because it’s a podcast and it’s just my voice.

Michael Coté: [the podcast] on code reviewing.

Gregg Sporar: But then, you took this giant picture of me and put that on the blog, and I am thinking, dude, more of a Gravatar. We don’t need — here, we are. We are talking about Code Collaborator v6.0.

Michael Coté: That’s right. Just to give us like a really quick introduction, like what does CodeCollaborator do for people who don’t know off the top of their head?

Gregg Sporar: CodeCollaborator’s sole goal in life is to automate the grunt work parts of the peer code review process. So the collecting of the files and making them available on a central location. Coordinating the communication between the review participants and tracking what everybody says and where defects are found and that kind of thing, and then reporting the statistics at the end.

Michael Coté: What release number is this one?

Gregg Sporar: We are talking about version 6.0.

Michael Coté: So in 6.0, so tell us what the new features are? What’s going to get people excited about this release?

Gregg Sporar: There are several things. Let’s back up for just a second to version 5.0 last year, when we added support for reviewing materials other than just source files. The reason for that was, because a lot of our customers, you are dealing with a software development team, but there are other people on the periphery as well, that might want to be involved in the review. If you’re building embedded software, it might be that the firmware guys or the hardware design guy, he wants to be involved and he wants to see his schematic in that.

Michael Coté: Right.

Gregg Sporar: Then we also had just regular software development teams coming to us saying, well, this is great, but I would like to add the design document to the review and look at it and reference it from within the same tool.

So last year we added support for PDF files, for example, and for image files, for JPEGs, and PNGs, and GIFs, and that kind of thing.

Michael Coté: And that allows you to add in all the commentary and the usual meta information on a piece of code?

Gregg Sporar: Exactly! So whenever I would show the PDF feature to people, they would say, well, that’s great, but I would really like to do this with a Word document.

Michael Coté: Sure.

Gregg Sporar: So there is a plug-in that Microsoft makes available to create a PDF off of a Word document, but people don’t want to do that. They just want to take their document and put it in place. So that’s one of the key features in 6.0.

The way we actually implemented that is kind of interesting. We built a Windows printer driver, because, again, at the end of the day, we just need to be able to render something and paginate it and put it into the tool. The best way to do that really in that environment is a printer driver. So it’s not just Word, it’s not just Microsoft Office apps, it’s any Windows app that can render paginated output.

Other major features. A significant enhancement to our Eclipse plug-in, which we have had for a few years now. In the past it was limited to just being able to show you or actually just being able to create a review or add materials to a review, and now we have actually brought the entire review experience directly into the IDE.

Another thing that we have gotten a lot of request for is for Visual Studio. Not everybody uses Eclipse, believe it or not. So what we have done is just sort of an initial entrée into the Visual Studio world, we have essentially got functionality equivalent to what we had in our old Eclipse plug-in.

So again, I don’t have that ability to bring the review experience in, like we have now done with the Eclipse plug-in, but it’s a start. It gets you partway there.

Again, to let you stay within your working context, to at least be able to create the review or add materials to that.

Then there are what I call Red Meat features. Not RedMonk, Red Meat. This is the real type stuff, because these are features that you don’t have to be an Eclipse user or Visual Studio user, this is something that’s going to affect everybody.

This is, for example, one of the things that a lot of people have asked for, for a long time: the ability to delete a comment after you put it into the tool. We are not actually going to allow you to delete, we are going to allow you to redact the comment.

Michael Coté: Right.

Gregg Sporar: I will show that to you during the demo. The reason for that is, we can’t really completely remove it, because it would break the IM type paradigm for our real-time track capability. I mean, think about it, if you were IMing with somebody —

Michael Coté: And it just disappeared.

Gregg Sporar: And it just disappeared while you were reading it, that would kind of freak you out. That breaks that paradigm.

Michael Coté: It’s kind of like that email recall feature, which is a little strange in its own right.

Gregg Sporar: Which is a little strange in its own right. Then the other issue of course is, we have a lot of customers who, auditability, traceability, that kind of thing, is really important. Nothing can ever be deleted.

Michael Coté: Sure.

Gregg Sporar: So we are going to allow you to redact a comment. We have changed the way that we display defects within the file comparison window. We have added, again, some of these usability type features that affect everybody.

We have made some enhancements to our ClearCase Integration. We have also put in some pretty important enhancements to our integration at the other end of the spectrum to Git and Mercurial.

Michael Coté: And how many version control systems do you guys work with now?

Gregg Sporar: 16.

Michael Coté: 16? That’s pretty nice.

Gregg Sporar: That’s a rather large number.

Michael Coté: That’s probably more than most people could name off.

Gregg Sporar: So a fun drinking game is to get a couple of beers in me and then try to get me to list the 16 in reverse alphabetical order, because I can do it in alphabetical order, but reverse alphabetical order is a little more difficult.

One last [thing], maybe, to mention really quickly. We did, and this is again something that you can’t appreciate unless you are an existing user of the product, we have significantly enhanced some of our reporting capabilities. That’s kind of that third pillar of what it is we do.

For the customizable reports, where the user can build their own query, we have added some additional fields that they can now filter on and select and that kind of thing.

Then we put a lot of effort into adding what we call user-oriented reports. So we have always had review-oriented reports and defect-oriented reports, that again, primary key is information about the review overall or primary key is, tell me about the defects that were found, but now we have added a third category, which is, tell me what I have been doing.

Michael Coté: I always think of that as self-micromanagement. Sort of optimize your own self.

Gregg Sporar: So one of the features, it is the ability for me to come in after the fact into the tool and find out, well, what reviews did I work on during the last week? I had a guy at a customer site explain to me this feature, because when he is doing his weekly status report, he knows he typically spends about 10-20% of his time during a week doing code review. Well, he wants to know, what reviews he did.

Michael Coté: Yeah.

Gregg Sporar: So again, user-oriented reporting.

Michael Coté: Well, great! Well, let’s check out a demo of those features and see what we get to see.

Demo

You can download the video directly as well.

There’s also a nice wrap-up of posts detailing features on the SmartBear blog.

Written by Coté

September 15, 2010 at 8:56 am

Consumerization of Enterprise Software

with 7 comments

Source: http://www.flickr.com/photos/ross/3055802287/

Figure 1: Consumerization of IT

The devastation in traditional Publishing needs precious little mentioning. Just think about a brand like BusinessWeek selling for a meager cash offer in the $2 million to $5 million range, McGraw Hill getting into interactive textbooks through Inkling, or Flipboard delivering “… your personalized social magazine” to your iPad. This devastation might not have gotten the attention that the plight of the ‘big three’ automobile manufacturers got, but in its own way it is as shocking as a visit to the abandoned properties in Detroit.

As most of my clients do enterprise software, many of my discussions with them are about the consumerization of IT. From a day-to-day perspective this consumerization is primarily about six aspects:

  • Use of less expensive/consumer-focused components as infrastructure
  • ‘Pay as you go’ pricing (through Cloud pricing mechanisms/policies)
  • Use of web application interfaces to monitor IT infrastructure
  • Use of mobile and consumer-based devices for accessing IT alerts and interfacing with systems
  • Use of the fast growing number of mobile applications to enhance productivity
  • Application of enterprise social networks and social software in the data center

From a strategic perspective, IT consumerization IMHO is all about the transformation toward “everything as a service” [1]. The virtuous cycle driven by Cloud, Mobile and Social manifests itself at three levels:

  • It obviously affects the IT folks with whom I discuss the subject. Immense changes are already taking place in many IT departments.
  • It affects their company. For example, the company might need to change the business design in order to optimize its supply chain.
  • It affects the clients of their company. Their definition of value changes these days faster than the time it takes the CIO I speak with to say “value.”

© Copyright 2010 Israel Gat

Figure 2: The Virtuous Cycle of Cloud, Mobile and Social

Sometimes I get push-back from my clients on this topic. The push-back is usually rooted in the immense complexity (and fragility) of the enterprise software systems that have been built over the past ten, twenty or thirty years. The folks who push back point out that consumerization of IT will not scale big time until enterprise software gets “consumerized”, or at least modernized.

I agree with this counter-point, but only up to a point. I believe two factors are likely to accelerate the pace toward “consumerization” of enterprise software:

  1. Any department/business unit that can get a service in entirety from an outside source is likely to do so without worrying about enterprise software and/or data center considerations. This is already happening in Marketing. As other functions start doing so, more and more links in the value chain of enterprise software will be “consumerized.” In other words, these services will be carried out without the involvement of the IT department.
  2. Once the switch-over costs from legacy code to state-of-the-art code are less than the steady state costs (to maintain and update legacy code), the “consumerization” of enterprise software is going to happen with ferocious urgency.

If you are in enterprise software you need to start modernizing your applications today. The reason is the imperative need to mitigate risk prior to reaching the end-point, almost irrespective of how far down the road the end-point might be. See Llewellyn Falco‘s excellent video clip Rewriting Vs Refactoring for a crisp articulation of the risk involved in rewriting and why starting to refactor now is the best way to mitigate it.

Footnotes:

[1] The phrase “Everything as a Service” has been coined by Russ Daniels.