The Agile Executive

Making Agile Work


Through the Prism of IT Transformation for Tomorrow’s Enterprise Datacenters: Interview with Annie Shum


As indicated in our recent post “Extending the Scope of the Agile Executive”, Cote and I have recently reached the conclusion that The Agile Executive needs to cover structural changes in order to give a forward-looking view to its readers. We start the coverage of structural changes that are relevant to Agile with an interview with Annie Shum, VP of Advanced Technology, Amdocs Corp.

We cover a broad panorama in this interview with Annie. Here are some items that may be of special interest to the reader who focuses on Agile methods, processes and governance in a broad sense – from programming to IT operations and anything in between:

  • Unleashing disruptive transformations
  • Supply and demand – the two sides of the IT “coin”
  • Open source software in general and OpenStack in particular
  • The impact of social networking and other Web 2.0 tools
  • Three billion downloads and counting…
  • Finding the “right” balance between hierarchical command-control and bottom-up empowerment
  • “Self-service” IT service delivery/deployment
  • Forthcoming changes in IT system administration and the rise of DevOps
  • How to gain freedom from a variety of low-level operational tasks and controls of physical infrastructure
  • Provisioning and over provisioning
  • Many others…

Annie answers all questions with data, insights and passion. No surprises there…

Israel: Nancy Foy immortalized the monolithic International Business Machines Corporation in her classic “The Sun Never Sets on IBM.” Much has changed, of course, since the book was published in the 1970s. For quite a few years IBM has been deconstructing its business design, its organizational structure and both its internal and external processes. By some accounts, prior to Gerstner, IBM had even been contemplating reforming itself as a bunch of independent companies. The contrast with IBM’s announcement a couple of weeks ago about putting both software and hardware under one hand is noteworthy. What do you make of it, Annie? Is this a new development? Or is it a blast from the past?

Annie: Interesting question but I would be remiss if I failed to point out that I don’t have a crystal ball or the expertise to predict reliably whether this will be an isolated case or a trend-setter.  Although the arguably radical IBM organizational restructuring in management is newsworthy, I am not especially interested in looking at it purely from the perspective of vendor management structure because it is merely a means to an end.  What intrigues me is the rationale behind this key announcement.  In particular, I am interested in envisioning the more profound and potentially game-changing, if not disruptive, transformation that IBM hopes to unleash by adopting this bold organizational restructuring with likely (significant) risks.

To better understand this new undertaking, I think it would be instructive to analyze it from the supply side as well as the demand side.  So let’s break up the narrative: first by looking at the supply side, namely the IT service providers/system vendors, followed by the second half of the narrative, the customers/consumers.

Israel: I am intrigued by your supply side/demand side approach. Please elaborate.

Annie: To understand the supply side, consider the three major IT vendor announcements made during the week of July 19, 2010 – not as three disparate events, but in context. By connecting the dots among them, we can uncover some very interesting insights into emerging trends in the IT industry in general and actionable guidelines for tomorrow’s enterprise datacenters in particular.

Let’s begin with the May 2010 report from Saugatuck Research titled “Gorillas In the Cloud: Applying Saugatuck’s ‘Master Brand’ Model to Cloud IT,” in which “Master Brands” refers to those vendors (and service providers) that dominate and influence IT marketplaces, technologies and/or user accounts. This May report sets the stage for the latest Saugatuck research alert, “One-Stop Shopping – Major Vendors Acquire Assets for the Cloud,” which describes how increasing numbers of major vendors are striving to become the “sole source for offerings up and down the IT EcoStack™ targeting the Cloud.”

As if on cue, IBM made two major announcements just this past week. First, on July 20, 2010, InformationWeek reported that IBM plans[i] to combine hardware and software to spur the company’s efforts to deliver bundled, plug-and-play systems. According to Sam Palmisano, the core strategy pivots on producing tightly bundled computer systems that “feature chips, middleware, and business software designed from the ground up to support Cloud Computing and other new-wave IT architectures.”

To some long-standing industry observers, this strategy may appear to be “back to the future”: IBM simply returning to its roots after a prolonged hiatus from its original business model. There is, however, an important historical footnote. More than five decades ago, concerns about monopoly abuses stemming from the bundling of hardware and software in IBM mainframe systems led the US government to take legal action, culminating in IBM’s acceptance of the 1956 Consent Decree.

Today, unlike the past, IBM no longer dominates the computer systems market. In fact, there is a growing trend towards bundled systems, mainly from the “Master Brands”, to “mask” complexity for customers as they embark on complex IT endeavors such as datacenter consolidation, server/storage virtualization, predictive analytics, SOA/BPM, Cloud Computing (public, private or hybrid), and Green IT. For example, Oracle acquired Sun Microsystems in 2009 for $7.4 billion to support what InformationWeek described as Larry Ellison’s “applications-to-disk” strategy, while HP and Microsoft earlier this year unveiled a multi-million dollar initiative under which they will jointly engineer servers and software.

It is likely that the timing of the July 19 IBM announcement was influenced (perhaps even pressured) by rivals taking a similar approach to address evolving enterprise datacenters. To expedite its strategy of delivering bundled “plug-and-play” systems, IBM first announced a sweeping organizational restructuring to foster internal collaboration and harness synergies across products and LOBs. Clearly, the biggest change is the management restructuring that consolidates key hardware and software divisions under the watch of a single executive, Steve Mills, IBM’s longtime software chief.

Next, just three days later on the heels of this organizational makeover, IBM made another major announcement on July 22, 2010, amidst much fanfare and hype. Presenting the vision of a new “Dimension in Computing” designed to control multi-platform datacenter operational costs and significantly reduce complexity, IBM announced a new hybrid “system of systems” platform that unifies IT for efficient service delivery and large-scale datacenter simplification. Dubbed a “datacenter in a box” or a “cloud in a box[ii]”, it integrates the new, super-powerful and energy-efficient zEnterprise 196 mainframe running z/OS with the zEnterprise BladeCenter Extension (zBX) running Linux and AIX. By extending System z’s qualities of service (spanning security, scalability, availability, efficiency and virtualization) to enable Cloud readiness and optimized service delivery for enterprises, IBM is likely promoting its strength in building private Clouds for large enterprises. See the following two slides from the IBM July 22 announcement.


Israel: So it looks like the IT industry is heading towards more “power” consolidation among mega vendors or, as you referenced earlier, “Master Brands”. Is this a fait accompli? If so, is it a matter of channeling demand toward one-stop shopping irrespective of the integration realities underneath? Isn’t there a danger to this trend?

Annie: Despite these high-profile announcements by the major vendors, it is far from a fait accompli. And yes, your concerns are only too real, especially for those who have lived through the era of monopolies and antitrust actions. Frankly, many people believe that such a trend may be a clear threat in the emerging era. While I don’t want to downplay the risk and potential damage of antitrust abuses, I believe there are factors at work here to counteract, or at least limit, unchecked monopolies in the IT industry.

In this Internet age, with the rise of the “Consumerization of IT” catalyzed by nearly ubiquitous access to social networking and other Web 2.0 tools, IT has permeated almost every market sector of our society. The set of functions and services supported and enabled by IT has become so vast, diverse and complex that no single business model or supplier is in a position to dominate, let alone destroy all others. The era when a handful of proprietary stalwart vendors dominated the IT industry is all but over. Just in the past decade, we have witnessed the meteoric rise of Google, Facebook and, more recently, Twitter. A formidable and growing force, namely open source software and its bottom-up, self-organizing community, powers as well as empowers most if not all of the Web 2.0 companies. At this point in our discussion, it is apt to segue to the third vendor announcement during the week of July 19, 2010.

On July 19, Cloud service provider RackSpace, together with NASA, announced the sponsorship of OpenStack, an open source IaaS Cloud platform project. Included in the announcement is a diverse group of computer system providers from across the technology industry, such as Citrix, Dell, NTT DATA and RightScale, to drive a deployable, totally open cloud solution. According to its mission statement, OpenStack is designed to foster the emergence of technology standards and Cloud interoperability. One of its primary objectives is to help enterprises avoid vendor lock-in.

Israel: This appears to be a very timely announcement given that “vendor lock-in” is one of the top concerns confronting enterprises as they evaluate and plan for the transition to Cloud Computing. Having said that, are we not back to “square zero” – striking a balance between openness and “one-stop shopping” tight integration?

Annie: Yes indeed. Although some industry observers describe the issue as “vendor lock-in”, others see it as a broader issue, describing it as the “challenge/difficulty of bringing back in-house” or the “lack of interoperability standards for seamless portability”. For example, in the 2009 Cloud Computing survey conducted by IDC, over 80% of those surveyed rated this issue, under both labels, as very important. Incidentally, I should point out that “vendor lock-in” is neither a new issue nor one unique to Cloud Computing. On the contrary, it is a long-standing “problem” going all the way back to the early days of mainframe computing, culminating in the government-versus-IBM antitrust action of the 1950s that we discussed earlier.

Interestingly, there are many forms and variants of vendor lock-in, and they are not all equal. For example, many industry observers have been unhappy with the proprietary development and delivery model that Apple imposed on the iPod/iPhone/iPad. Although the risk of “vendor lock-in” may be real, any negative impact on the ever-growing, large and loyal Apple customer base seems minimal. Just think about the runaway success of the App Store. It is heavily “curated” by Apple. Yet since it opened on July 10, 2008, the App Store has grown to more than one hundred thousand available apps, with over two billion application downloads as of November 2009 and three billion by January 2010. Steve Jobs hailed this as a landmark event: “Three billion applications downloaded in less than 18 months – this is like nothing we’ve ever seen before.”

Sorry we digressed. So let’s resume our discussion of the recent major announcements.  In a nutshell, the OpenStack announcement attempts to address the issue directly by allowing any organization to create and offer Cloud Computing capabilities using open source software freely available under the Apache 2.0 license running on standard hardware.

Now this gets interesting: a tale of two diametrically opposite strategies. On one hand, we have IBM announcing the high-performance zEnterprise 196 as a hybrid, integrated, multi-architecture “datacenter/Cloud in a box”. The goal is to mask complexity and maximize efficiency, with claimed infrastructure management/administration cost savings of up to 70% and energy usage reductions of up to 82%, through a bundled technology stack integrating multiple platforms, infrastructure and management (spanning service, platform and hardware). A principal concern with this proprietary, single-vendor approach is the risk of “vendor lock-in”.

On the other hand, OpenStack is “DIY”, based on an open source development platform. The goal of OpenStack is the following: “Anyone can run it, build on it, or submit changes back to the project. We strongly believe that an open development model is the only way to foster badly-needed cloud standards, remove the fear of proprietary lock-in for cloud customers, and create a large ecosystem that spans Cloud providers.” The cons/challenges of this approach are probably similar to those of conventional “DIY” open source projects.

I should clarify that this dichotomy is better seen as an entire spectrum. IBM, VMware and the like on one end, and RackSpace, Eucalyptus and the like on the other, exemplify the two end-points bookending the dichotomy spectrum. Along the spectrum, there are a growing number of intermediate options and offerings (with a rising number of variations) from a wide variety of IT Cloud service vendors: stalwart vendors including Amazon, Microsoft, Google and Salesforce.com, as well as young companies and startups such as RackSpace, RightScale, Boomi, Canonical, Cloudkick and Opscode.

Israel: Is this shaping up to be a battle between two diametrically opposite strategies? And if so, which one will come out on top? Or is it a draw?

Annie: To me, a similar dichotomy has existed before in the IT industry. For example, think Apple versus Google. Consider the modus operandi of Apple’s core business model (“closed, or at least closely curated”, to optimize user experience and quality) versus that of Google (“open standards/APIs”, to maximize opportunities for 3rd-party development participation).

As to whether bundled systems (“Cloud in a box”) or open source “DIY” will be the ultimate winner, I have to defer to industry observers with more experience, such as you. Perhaps in a future Q&A meet-up I can hear your views on how the competition may eventually be settled. However, while we all await the uncertain outcome, IT practitioners should be mindful that the dichotomy spectrum will have profound implications not only on the supply side but also on the demand side. In particular, because the offerings along the dichotomy spectrum will be rapidly evolving, the fluidity will very likely confound and confuse users and consumers as they attempt to balance a convoluted set of tradeoffs. Many enterprise IT practitioners will be under pressure to make difficult and ambiguous choices, picking one or more evolving offerings over other evolving offerings as the foundation of tomorrow’s enterprise datacenters in the Cloud era.

Israel: Good timing. So far in our Q&A today you have focused on the first half of the narrative, namely the supply side. Now let’s continue to the second half, namely the demand side.

Annie:  Earlier, I discussed the supply side by connecting the dots among three key announcements during the week of July 19. Now similarly for the demand side, I will suggest a few more dots that I believe should be connected. Specifically, I suggest connecting the following trends:

  • The growing complexities and inefficiencies of on-premises enterprise datacenters;
  • The inevitable rise of alternative delivery and deployment models for IT services; and
  • The advent of Cloud Computing:  a long-standing vision whose time may finally arrive.

Several months ago, I published a guest post on your blog site entitled “The Urgency of Now.”  You might recall that I began the post with some sobering and perhaps even alarming statistics about the gross inefficiency of traditional on-premises enterprise datacenters.  Here again is the Enterprise Datacenter Index at-a-glance:

In summary, enterprise IT faces a “crisis of staggering complexity” and IT infrastructure is reaching a “breaking point” marked by such salient factors/trends[iii] as the following:

  • 1.5X: an information explosion driving over fifty percent yearly growth in storage shipments;
  • 85% idle: over-provisioned waste, primarily in distributed computing environments, where typical computing resources (capacity) sit idle more than eighty percent of the time on average;
  • $40 billion, or 3.5% of sales: the retail industry’s annual loss due to (supply) value chain inefficiencies;
  • 60-70% of IT spending on maintenance/overhead: the overall IT spending profile shows that the lion’s share of IT expenses goes towards overhead and maintenance, with roughly seventy cents of every dollar spent on maintaining existing IT infrastructure at the expense of adding new capabilities.

Now consider the following scenario. Suppose enterprise IT could choose an alternative set of “self-service” IT service delivery/deployment models orthogonal to the traditional hierarchical, command-and-control, CapEx-based datacenter. Instead of owning and tightly controlling its own private internal datacenter and purchasing capital resources up front, an organization would “rent” pooled computing resources hosted in the provider’s multi-tenant environment on demand. The Internet would serve as the global infrastructure “grid” and all services would be delivered through Web APIs. In lieu of having a dedicated IT staff administering IT operations, users could avoid lengthy red tape and directly, immediately provision and manage computing capacity as “self-service IT”. In addition, instead of formal contracts and protracted delays in hardware procurement, an organization would pay for access to “unlimited” computing capacity at any time, simply with a credit card.

Because there would not be formal contracts imposing preset time commitments, both entry and exit would be friction-free. In this way, an organization could accelerate time-to-value/market and help to catalyze experimentation and innovative endeavor. Furthermore, CIOs of enterprise IT could avoid or mitigate the lose-lose dilemma because they would not be restricted to choosing either a policy that leads to “waste due to over-provisioning” using peak usage estimates for capacity planning or a policy that can incur “risk due to under-provisioning” using non-peak estimates. Ideally, IT staff would “plan capacity based on typical usage” while confident that it could “scale dynamically at peak times” to maintain performance and SLAs. Simply put, the primary objectives for today’s organizations are not just about increasing speed and efficiency for back office automation. Rather, they also are about increasing speed and flexibility to adapt to changes by yielding judicious control to providers for on-demand utility computing services off-premises.
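
To make the over- versus under-provisioning tradeoff concrete, here is a minimal sketch in Python. All numbers (the demand profile, the owned-capacity unit cost and the on-demand premium) are hypothetical, chosen only to illustrate the reasoning; they are not figures from any study cited here.

    # Hypothetical illustration of the provisioning tradeoff described above.
    # All numbers (demand profile, unit costs) are invented for the sketch.

    hourly_demand = [40, 35, 30, 55, 90, 140, 120, 60]   # servers needed per period

    peak = max(hourly_demand)
    average = sum(hourly_demand) / len(hourly_demand)

    owned_unit_cost = 1.0     # cost per server-period for owned (CapEx) capacity
    elastic_unit_cost = 1.4   # assumed premium per server-period for on-demand capacity

    # Policy 1: provision on premises for peak usage -> idle waste in off-peak periods.
    peak_policy_cost = peak * owned_unit_cost * len(hourly_demand)
    utilization = sum(hourly_demand) / (peak * len(hourly_demand))

    # Policy 2: own capacity for typical usage, rent elastically above that baseline.
    baseline = average
    elastic_cost = sum(max(d - baseline, 0) * elastic_unit_cost for d in hourly_demand)
    hybrid_policy_cost = baseline * owned_unit_cost * len(hourly_demand) + elastic_cost

    print(f"Provision-for-peak cost: {peak_policy_cost:.0f} (utilization {utilization:.0%})")
    print(f"Typical-plus-elastic cost: {hybrid_policy_cost:.0f}")

Under these made-up numbers, planning for typical usage and scaling elastically at peak costs roughly a third less than provisioning for peak, while the peak-provisioned estate sits at about 50% utilization, which is exactly the “waste due to over-provisioning” described above.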

Conceptually, this scenario is the overall vision of Cloud Computing. With the advent of Cloud Computing, the vision of “Computing as a Utility” is beginning to take shape, and since the early days of time-sharing that vision has taken a quantum leap towards reality. One of the earliest references to Utility Computing was made in 1961 at the MIT Centennial, where John McCarthy presented his vision of computing organized as a public utility. Just as the telephone system had developed into a major industry, Professor McCarthy envisioned that “Computing as a Utility” could one day become the basis of a new and important public industry.

Rooted in the long-standing vision and hope for “Computing as a Utility” that began more than half a century ago, the genesis of Cloud Computing goes back a long way. To a growing number of industry observers, it is an old idea whose time may have finally arrived when, in 2006, Amazon began offering Cloud infrastructure services to the public as a utility. Despite initial skepticism, it was a watershed event in the quest of Utility Computing and helped to usher in the first wave of industrial-strength commercial Cloud Computing offerings.

Israel: To wrap up our discussion today, can you leave us with a few thoughts about some of the implications of Cloud Computing as enterprises begin their transition to the Cloud?

Annie: Eric Schmidt, Google’s Chairman and Chief Executive, has stated that Cloud Computing will be “the defining technological shift of our generation”. However, the media and vendor-spun hype (at times referred to as “cloud-washing”) around this topic has created an unprecedented level of confusion. Today, the unabated sound and fury surrounding the Cloud Computing buzz continues and indeed increases. Nevertheless, it is all but certain that there will be no “big or easy switch” for enterprise IT to transition overnight from running applications on premises to the Cloud. Because the shift is not an “all-or-nothing” or a “one-size-fits-all” endeavor, stakeholders in enterprises should take a judicious, measured approach to balancing the different tradeoffs.

To sustain the transition of enterprise IT to the Cloud will require not only technological advances but also new business models, new forms of IT organizational management structure and perhaps even new IT roles.  One of the “inconvenient” truths about embracing new user-empowerment technology trends and business models is the slippery slope of finding the “right” balance between hierarchical command-control and bottom-up empowerment. The harm (ineffectiveness and counter-productivity) of too much top-down control can be matched or even surpassed by the dangers of too little control. User empowerment without reasonable constraints can lead to anarchy and chaos. A new form of organizational governance is clearly required to avoid these problems. Striking a balance between planned orderliness and new emergent forces has been a challenging dynamic since the dawn of civilization.

Many of the principles that have been refined over the millennia will have direct applicability to governing tomorrow’s world of “self-service” computing in the Cloud. Clearly, there will be direct implications for new scrutiny, as well as for the shaping and changing of security- and governance-related policies. However, an organization should not overlook the human aspects and the cultural impact on IT system administration personnel. For example, resistance to sweeping changes driven by a fear of losing control, and the stress over the prospect of losing employment, can be among the more profound ramifications that often fly under the management radar.

Cloud Computing likely will change the status quo of IT system administration and, perhaps in the future, could obviate the need for some traditional IT system skills. Cloud Computing, however, is also opening new opportunities for the technical IT community and enterprise IT personnel. There is a growing consensus that, as Cloud Computing evolves, the need for more business-minded IT staff will accelerate. Specifically, there likely will be an urgent need for people “with broader business skills who can manage multiple supplier relationships.” Freed by Cloud Computing from a variety of low-level operational tasks and controls of physical infrastructure, enterprise IT has the opportunity to promote system administration staff into higher-level decision-making roles as IT service facilitators and SLA contract managers. In the near future, many traditional command-and-control system operators may pursue a wider array of IT professional opportunities spanning enterprise architecture, capacity planning, budget planning, performance assurance, and data, security and governance gatekeeping.

Israel: This really resonates with what I see happening in many of my consulting engagements. Successful companies waste an immense amount of capital, energy and management attention on migrating from yesterday’s datacenter to today’s or tomorrow’s datacenter. When exposed to the pains of such migrations, I am always reminded of Peter Drucker’s quip “Companies make shoes!” It is beyond me why companies that make shoes, cars, drugs or financial instruments would want to be prisoners of their own success, hopping from one datacenter to a bigger one every few years.

Annie: Thanks for Peter Drucker’s quip. I am going to borrow it for my future use.

Israel: Annie, I can’t thank you enough for sharing your insights with us. You really connect the dots!

Endnotes:

[i] Based on the assumption that IT infrastructure performance can be greatly enhanced when each element is designed and brought to market as a component of a tightly integrated, optimized system.

[ii] With this slogan, IBM is promoting the hybrid zEnterprise 196 integrating multiple architectures and OS in a “box” as the one stop shopping ready-made private Cloud for enterprises.

[iii] Information source from IBM, The Open Group Conference, July 22, 2009.

Beautiful Quality


Figure 1: Agile Assessment – Quality (Source: QSMA)

Colleague and friend Michael Mah has kindly shared with me the figure above – a quality assessment for a sample of Agile projects in the QSMA metrics database of more than 8,000 software projects. The two red squares in this figure represent the recent results Mah measured on two projects carried out by Quick Solutions (QSI) – a Westerville, OH company offering a broad range of IT services.

One to one-and-a-half standard deviations better than the mean might not seem like much to Six Sigma black belts. However, in the context of the typical results we see in the software industry, the QSI results are outstanding. I have not done the exact math to determine whether those results are superior to 95%, 97% or 98% of the software projects in the QSMA database, as the exact figure hardly matters when you achieve this level of excellence.

I asked Bart Murphy – QSI’s Vice President of Delivery and Operations – for the ‘secret sauce.’ Here is the QSI recipe:

For the projects referenced in Michael’s evaluation, our primary focus was quality… Our team was tasked with not only a significant Innovation effort, but we also managed all aspects of supporting and stabilizing the application (production support, triage, infrastructure, database support, etc.). Our ‘secret sauce’ was assembling a world-class team and executing our Agile methodology. We organized the team efforts to focus on the three major initiatives: Innovation, Stabilization (Technical Debt), and Production Support. We developed a release plan and coordinated efforts to deploy releases that resolved a significant number of defects, introduced market-differentiating features, and addressed a massive amount of technical debt. The team was able to accomplish this without introducing additional defects into the production system. Our success can be attributed to the commitment from the business to understand our Agile methodology and to remain highly engaged throughout the project (co-located with the team). In addition, a focus on quality is integrated into our process through the use of Test Driven Development, Continuous Integration, Test Automation, Quality Assurance, and Show & Tells. Lastly, this would not have been possible without the co-location of the entire team given the significant issues and time constraints for delivery.

Bart also provided me with a table ‘a la Capers Jones’ in which he elaborates on the factors that helped QSI achieve these results and those that stood in the way. I will publish and discuss Bart’s table in a forthcoming post, in which I will also compare the factors identified by Bart with those reported by Capers Jones.

Technical Debt at Cutter


No, this post is not about technical debt we identified in the software systems used by the Cutter Consortium to drive numerous publications, events and engagements. Rather, it is about various activities carried out at Cutter to enhance the state of the art and make the know-how available to a broad spectrum of IT professionals who can use technical debt engagements to pursue technical and business opportunities.

The recently announced Cutter Technical Debt Assessment and Valuation service is quite unique IMHO:

  1. It is rooted in Agile principles and theory but applicable to any software method.
  2. It combines the passion, empowerment and collaboration of Agile with the rigor of quantified performance measures, process control techniques and strategic portfolio management.
  3. It is focused on enlightened governance through three simple metrics: net present value, cost and technical debt.

Here are some details on our current technical debt activities:

  1. John Heintz joined the Cutter Consortium and will be devoting a significant part of his time to technical debt work. I was privileged and honored to collaborate with colleagues Ken Collier, Jonathon Golden and Chris Sterling in various technical debt engagements. I can’t wait to work with them, John and other Cutter consultants on forthcoming engagements.
  2. John and I will be jointly presenting on the subject of Toxic Code at the Agile Roots conference next week. In this presentation we will demonstrate how the hard lessons learned during the sub-prime loan crisis apply to software development. For example, we will be discussing development on margin…
  3. My Executive Report entitled Revolution in Software: Using Technical Debt Techniques to Govern the Software Development Process will be sent to Cutter clients in the late June/early July time-frame. I don’t think I have ever worked so hard on a paper. The best part is that it was a labor of love…
  4. The main exercise in my Agile 2010 workshop How We Do Things Around Here in Order to Succeed is about applying Agile governance through technical debt techniques across organizations and cultures. Expect a lot of fun in this exercise no matter what your corporate culture might be – Control, Competence, Cultivation or Collaboration.
  5. John and I will be doing a Cutter webinar on Reining in Technical Debt on Thursday, August 19 at 12 noon EDT. Click here for details.
  6. A Cutter IT Journal (CITJ) on the subject of technical debt will be published in the September-October time-frame. I am the guest editor for this issue of the CITJ. We have nine great contributors who will examine technical debt from just about every possible perspective. I doubt that we have the ‘real estate’ for additional contributions, but do drop me a note if you have intriguing ideas about technical debt. I will do my best to incorporate your thoughts with proper attribution in my editorial preamble for this issue of the CITJ.
  7. Jim Highsmith and I will jointly deliver a seminar entitled Technical Debt Assessment: The Science of Software Development Governance in the forthcoming Cutter Summit. This is really a wonderful ‘closing of the loop’ for me: my interest in technical debt was triggered by Jim’s presentation How to Be an Agile Leader in the Agile 2006 conference.

Standing back to reflect on where we are with respect to technical debt at Cutter, I see a lot of things coming nicely together: Agile, technical debt, governance, risk management, devops, etc. I am not certain where the confluence of all these threads, and possibly others, might lead us. However, I already enjoy the adrenaline rush this confluence evokes in me…

Cutter’s Technical Debt Assessment and Valuation Service


 

 

Source: Cutter Technical Debt and Valuation Service

The Cutter Consortium has announced the availability of the Technical Debt Assessment and Valuation Service. The service combines static code analytics with dynamic program analytics to give the client “x-rays” of the software being examined at any desired granularity – from the whole project portfolio to a single instruction. It breaks down technical debt into the areas of coverage, complexity, duplication, violations and comments. Clients get an aggregate dollar figure for “paying back” debt that they can then plug into their financial models to objectively analyze their critical software assets. Based on these metrics, they can make the best decisions about their ongoing strategy for the software development effort under scrutiny.
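
As a rough illustration of how a per-area breakdown can roll up to an aggregate dollar figure, here is a minimal sketch in Python. The area names mirror those listed above, but the finding counts, per-finding remediation effort and hourly rate are invented for illustration; this is not the actual model behind the Cutter service.

    # Hypothetical sketch: rolling per-area technical debt findings up to a dollar figure.
    # Counts, effort weights and the hourly rate below are invented for illustration.

    findings = {
        "coverage":    {"count": 1200, "hours_each": 0.2},  # untested lines/branches
        "complexity":  {"count": 85,   "hours_each": 1.0},  # overly complex methods
        "duplication": {"count": 300,  "hours_each": 0.5},  # duplicated blocks
        "violations":  {"count": 450,  "hours_each": 0.3},  # static-analysis rule violations
        "comments":    {"count": 600,  "hours_each": 0.1},  # undocumented public APIs
    }

    HOURLY_RATE = 100  # assumed blended cost of an engineering hour, in dollars

    def debt_dollars(findings, rate):
        """Aggregate remediation effort across areas and convert it to dollars."""
        total_hours = sum(a["count"] * a["hours_each"] for a in findings.values())
        return total_hours * rate

    print(f"Estimated technical debt: ${debt_dollars(findings, HOURLY_RATE):,.0f}")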

This new service is an important addition to the enlightened software governance framework that Jim Highsmith, Michael Mah and I have been thinking about and contributing to for some time now (see Beyond Scope, Schedule and Cost: Measuring Agile Performance and Quantifying the Start Afresh Option). The heart of both the technical debt service and the enlightened governance framework is captured by the following words from the press release:

Executives in charge of software governance have long dealt with two kinds of dollar figures: One, the cost of producing and maintaining the software; and two, the value of the software, which is usually expressed in terms of the net present value associated with the expected value stream the product will generate. Now we can deal with technical debt in the same quantitative manner, regardless of the software methods a company uses.

When expressed in terms of dollars, technical debt ties neatly into value vis-à-vis cost considerations. For a “well behaved” software project, three factors — value, cost, and technical debt — have to satisfy the equation Value >> Cost > Technical Debt. Monitoring the balance between value, cost, and technical debt on an ongoing basis is an effective way for organizations to stay on top of their real progress, and for stakeholders and investors to ensure their investment is sound.
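
A minimal Python sketch of how the three figures might be monitored together follows. The dollar amounts and the threshold used to stand in for “>>” (value at least three times cost) are assumptions for illustration, not part of the Cutter framework.

    # Hypothetical health check for the Value >> Cost > Technical Debt relation above.
    # The 3x multiple standing in for ">>" is an assumption, not a Cutter rule.

    def governance_health(value, cost, debt, value_multiple=3.0):
        """Return True when the project satisfies Value >> Cost > Technical Debt."""
        return value >= value_multiple * cost and cost > debt

    # Illustrative figures: NPV of the expected value stream, cumulative cost, debt.
    print(governance_health(value=12_000_000, cost=3_000_000, debt=450_000))    # True
    print(governance_health(value=4_000_000,  cost=3_000_000, debt=3_500_000))  # False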

By boiling down technical debt to dollars and tying it to cost and value, the service enables a metrics-driven governance framework for the use of five major constituencies, as follows:

Technical debt assessment and valuation can specifically help CIOs ensure alignment of software development with IT operations; give CTOs early warning signs of impending project trouble; assure those involved in due diligence for M&A activity that the code being acquired will adapt to meet future needs; enable CEOs to effectively govern the software development process; and provide venture capitalists who need to make informed investment decisions with critical information as to whether the software under consideration constitutes an asset or a liability.

It should finally be pointed out that the technical debt assessment service and the governance framework it enables are applicable to any software method. They can be used to:

  • Govern a heterogeneous environment in which multiple software methods are used
  • Make apples-to-apples comparisons between disparate software projects
  • Assess project performance vis-a-vis industry norms

Forthcoming Cutter Executive Reports, Executive Updates and Email Advisors on the technical debt service are restricted to Cutter clients. As appropriate, I will publish the latest and greatest news on the subject in the Cutter Blog (which is an open forum I highly recommend).

Acknowledgements: I would like to wholeheartedly thank the following colleagues for inspiring, enlightening and supporting me during the preparation of the service:

  • Karen Coburn
  • Jennifer Flaxman
  • Jonathon Golden
  • John Heintz
  • Jim Highsmith
  • Ken Collier
  • Kim Leonard
  • Kara Letourneau
  • Michael Mah
  • Anne Mullaney
  • Chris Sterling
  • Cindy Swain
  • Sarah Wiesbrock

Open-Sourcing the Inovis End-to-End Kanban System




Source: Gat, Huddleson, Bodwell and Chin, “Reformulating the Product Delivery Process”

Colleague and “partner in crime” Stephen Chin has published a post on the Inovis End-to-End Kanban System (aka Apropos) we presented at the LSSC10 conference on April 23.  As readers of this blog might recall, the system tracks features through their full life-cycle from proposal to validation, ensuring actionable feedback cycles. By so doing it firmly anchors the software method in the overall business context with special attention to operational aspects such as deployment, monitoring and support.

Stephen outlines details of the forthcoming open-sourcing of Apropos as follows:

The plan for this tool is to do the initial launch of a BSD-licensed open-source version on May 22nd.  This will include support for the Rally Community Edition, which is free for up to 10 users.  In future releases we plan to support other Agile Lifecycle Management tools, both commercial and open-source, but will need assistance from the community to do this.

If you are interested in helping out with this project, please contact me.  I will have limited bandwidth until after the initial launch, but after that would love to scale up this project with interested parties.

I really can’t wait till the 22nd. IMHO Apropos has the potential to become the leading Kanban system by the community for the community.

The Mojo of Innovation Games


This post is a shameless plug for Innovation Games. Shameless as it might be, it is grounded in the hands-on experience I acquired as a participant in Luke Hohmann’s workshop on the subject last week. Colleagues Ken Collier, Alan Shalloway and Michele Sliger took the workshop together with me.

While Innovation Games was conceived, implemented and published by Luke more than four years ago, the contemporary on-line implementation breaks new ground in three important ways:

  1. It ties in ideation, requirements management and software project management in a seamless fashion. (Stay tuned for exciting announcements on the subject in a couple of months).
  2. It gets over the “too much data” barrier of the paper-based version of the game. Data capture is largely automated now. Data analysis tools are forthcoming.
  3. It gets over the “across the pond” obstacle. You can play Innovation Games to your heart’s content no matter how geographically dispersed your teams might be.

Not bad for three guys and a dog. Actually, I don’t even know whether they can afford a dog. These guys operate on passion, craftsmanship and mojo…

Postscript: If you know Luke, you already know what I mean by “the Luke mojo.” If you don’t, may I suggest you get to know him. A convenient opportunity might be the forthcoming Agile Roots 2010 conference – the organizers are speaking with Luke about delivering a keynote presentation literally as I write this post.

The Agile Flywheel


Readers of The Agile Executive have been exposed to the “All In!” strategy used by Erik Huddleston to transform the software engineering process at Inovis and make it uniquely streamlined. In this post we follow up on the original discussion of the subject to explore the effect of Agile on IT Operations. As the title implies, Agile at Inovis served as a flywheel which created the momentum required to transform IT Operations and blend the best of Agile with the best of ITIL.

This guest post was written by Ray Riescher – a Six Sigma Black Belt, Agile evangelist and business process change agent. Ray is currently responsible for business process management and IT governance at Inovis, a leading provider of business-to-business (B2B) e-commerce services, in Alpharetta, GA.

Here is Ray:

When we converted to an Agile Scrum software methodology some 24 months ago, I never imagined the lessons I’d learn and the organizational change that would be driven by the adoption of Scrum.

I’ve lived by the philosophy that managing a business is managing its processes and that all of those processes, especially the operational processes, are interconnected. However, I don’t think I was fully prepared for the effect Agile Scrum would have on our company operations.

We dove head first into Agile Scrum and adapted to it very quickly. However, it wasn’t until we landed a very large and demanding customer that Scrum was really put to the test. New enhancements, new features, and new configurations were all needed ASAP. Scrum delivered, with rapid development and deployment in the form of releases that moved into production with amazing velocity. Our release cadence hit warp drive, and at one point we experienced several months during which multiple teams’ production releases were deploying at the end of every two-week sprint.

We’ve subscribed to the ITIL service support processes for Release, Change, Incident, Problem and Configuration Management. ITIL has served us well, giving us a common language and a clear understanding of process boundaries.

As the Scrum release cadence kicked in, the downstream ITIL processes had to keep up, adapt, and support the dynamics of rapid production changes. What happened was enlightening and maybe a bit groundbreaking.

The Release Management process had to reassess its reliance on artifacts for gatekeeping. The levels of sign-offs had to be streamlined and the heavyweight deployment documentation lightened, yet the process still had to control the production release to ensure deployment success. The rapidity of the release cycles meant that maintenance-window downtime would be experienced too frequently by customers, so “rolling bounce” deployment strategies were devised and implemented.

Change requests could no longer wait for a weekly Change Management review board to approve and schedule the changes.  Change management risk models had to be relied on for accurate detection of risky changes.
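
As an illustration of what such a risk model might look like in code, here is a minimal Python sketch. The factors, weights and threshold are invented for illustration and are not Inovis’s actual model, which would be calibrated against historical change and incident data.

    # Hypothetical change-risk scoring heuristic of the kind described above.
    # Factors and weights are invented; a real model would be calibrated from history.

    def change_risk_score(change):
        """Score a proposed production change; higher means riskier."""
        score = 0
        score += 3 if change["touches_shared_infrastructure"] else 0
        score += 2 if not change["has_rollback_plan"] else 0
        score += 2 if change["services_affected"] > 3 else 1
        score += 1 if change["off_hours_deployment"] else 0
        return score

    APPROVAL_THRESHOLD = 4  # changes scoring below this could bypass the weekly board

    change = {
        "touches_shared_infrastructure": False,
        "has_rollback_plan": True,
        "services_affected": 2,
        "off_hours_deployment": True,
    }

    score = change_risk_score(change)
    print("risk score:", score, "- pre-approved" if score < APPROVAL_THRESHOLD else "- needs review")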

Early on in this dynamic environment, we weren’t quite as good as we needed to be and our Incident Management process was put to the test.  Faster releases meant more opportunity for problems with service degradation and outages. This reality manifested itself more frequently than we’d ever experienced. Monitoring, detecting and repairing became paramount for environment stability and customer satisfaction.

What we found out was that we became very agile at this break/fix game. We developed a small team approach to managing incidents and leveraged the ITIL Problem Management process to rapidly perform root cause analysis. Once the true root cause was determined, a fix would be defined and deployed. Sometimes the fix was software related and went through the Scrum process, sometimes the fix was hardware related and went through the Configuration Management process, other times it was more operational and the fix took the form of training or corrections to procedural documentation.

The point is we’ve become agile across the entire IT spectrum. Whether it’s development via Scrum, the velocity with which we now operate our ITIL processes, or the integrated break/fix operational support processes, we are performing all of these with an agile mindset and discipline. We have small teams, working on priorities, and completing what needs to be completed now.

Scrum set the flywheel in motion and caused the rest of the IT process life cycle to respond.  ITIL’s processes still form the solid core of service support and we’ve improved the processes’ capability to handle intense work velocity. The organization adapted by developing unprecedented speed in the ability to deliver production fixes and to solve root cause problems with agility.

What I think we are witnessing is a manifestation of Agile Business Service Management: a holistic agile methodology running across the IT process spectrum that’s delivering eye-popping change and tremendous results.

The Hole in the Soul and the Legitimacy of Capitalism


In a January 13, 2010 post entitled The Hole in the Soul of Business, Gary Hamel offers the following perspective on success, happiness and business:

I believe that long-lasting success, both personal and corporate, stems from an allegiance to the sublime and the majestic… Viktor Frankl, the Austrian neurologist, held a similar view, which he expressed forcefully in “Man’s Search for Meaning:” “For success, like happiness, cannot be pursued; it must ensue, and it only does so as the unintended consequence of one’s personal dedication to a cause greater than oneself . . ..”
 
Which brings me back to my worry. Given all this, why is the language of business so sterile, so uninspiring and so relentlessly banal? Is it because business is the province of engineers and economists rather than artists and theologians? Is it because the emphasis on rationality and pragmatism squashes idealism? I’m not sure. But I know this—customers, investors, taxpayers and policymakers believe there’s a hole in the soul of business. The only way for managers to change this fact, and regain the moral high ground, is to embrace what Socrates called the good, the just and the beautiful.

So, dear reader, a couple of questions for you: Why do you believe the language of beauty, love, justice and service is so notably absent in the corporate realm? And what would you do to remedy that fact?

Hamel’s call for action is echoed in the analysis of the double bubble at the turn of the century by Carlota Perez:

The current generation of political and business leaders has to face the task of reconstituting finance and bringing the world out of recession. It is crucial that they widen their lens and include in their focus a much greater and loftier task: bringing about the structural shift within nations and in the world economy. Civil society through its many new organisations and communications networks is likely to have a much greater role to play in the outcome on this occasion. Creating favourable conditions for a sustainable global knowledge society is a task waiting to be realized. When – or if – it is done we should no longer measure growth and prosperity by stock market indices but by real GDP, employment and well being, and by the rate of global growth and reduction of poverty (and violence) across and within countries.

As if these words were not rousing enough, Perez adds one final piercing observation:

 The legitimacy of capitalism rests upon its capacity to turn individual quest for profit into collective benefit.

IMHO Hamel and Perez identified the very same phenomenon (the “hole in the soul”). The only difference is the level at which they discuss the phenomenon: Hamel makes his observation at the business/corporate level; Perez at the socio-economic level.

Between Agile and ITIL – Part II


The July 2009 post Between Agile and ITIL introduced the application of Agile principles to system management with the following words:

You do not need to be an expert in Value Stream Mapping to appreciate the power of speeding up deployment to match the pace of Agile development. By aligning development with deployment, you streamline “production” with “consumption.” The rationale for so doing is aptly captured in the first bullet of the Declaration of Interdependence: “We increase return on investment by making continuous flow of value our focus.”

Yesterday’s press release about the acquisition of Phurnace by BMC validates the projection given in that post. Colleague and friend Michael Cote puts his finger on the heart of the acquisition in his post on People Over Process:

The interesting part is also that this is automation – I’m assuming – at the application layer, whereas most automation talk in past and present is at the infrastructure layer. Of course, the thought leaders in this area – folks like Reductive Labs (Puppet), OpsCode (Chef), and in a more general sense cloud management outfits – are doing a helpful job of blurring the distinction between traditional IT layers like application and infrastructure with their dev/ops angled automation. Check out this white paper done by Reductive Labs and dto solutions on the topic for a nice toe-dip. And, I’d expect to see more application layer automation from VMWare/SpringSource. Older automation portfolios like BMC’s Blade Logic line need to keep a close eye on these developments, hopefully taking in the proven parts of that work.

One can, of course, automate IT tasks without embracing Agile. The fundamental question to be answered is whether one considers ITIL as an “empirical” process control model or as a “defined” process control model (or possibly a hybrid).

Prosperity without Growth


Readers of this blog might recall the ‘secret sauce’ proposed in The Mindset for talking about Agile with executives:

Success, however, depends on a certain kind of mindset of the executive you are talking to.  This mindset is nicely described in H. Thomas Johnson‘s article Manage a Living System, Not a Ledger:

…a business organization cannot improve its long-run financial results by working to improve its financial results. But the only way to ensure satisfactory and stable long-term financial results is to work on improving the system from which those results emerge.

In a perceptive CQI  article on the recently reported problems at Toyota,  Johnson offers the following analysis:

Toyota avoided this fate until the last decade because it did not regard results as outcomes that a business achieves by requiring managers to drive people to meet financial targets. It saw that results emerge from a process in which people carefully nurture a web of relationships. These relationships, strikingly enough, emulate the behaviour in natural living systems.

The reversal of Toyota’s fortunes in the past decade suggests that many of its top managers lost the habit of thought that had previously shaped the company’s policies and actions. They lost the habit of thought that caused the company, perhaps unconsciously, to act like a living system. Toyota adopted the finance-oriented mechanistic thinking that had spawned the inferior management practices and the poor performance shown by most of its competitors after the 1970s. And because it abandoned living-system thinking for mechanistic thinking, Toyota began to embrace a virtual world of finance, not a concrete world of humans in cooperative relationships.

Johnson concludes his analysis with a broad warning:

Efforts of companies to reduce that waste by “going green” are not likely to be any more effective than efforts to improve performance by “going lean”. In neither case do these efforts change the thinking that produces excess growth. The efforts might reduce the rate of growth for a time, but they will never reverse it as long as companies adhere to the conventional wisdom from the virtual world of finance that says prosperity is not possible without growth. [Highlights by IG]

The hazards of the virtual world of finance have been conclusively demonstrated during the macro-economic crisis of 2008-2009. One must wonder what it would take to learn the applicable lessons at the micro level of individual companies.