The Agile Executive

Making Agile Work


Devops: It is Not About ITIL, It is About Proficiency

with 2 comments

As you would expect, the Information Technology Infrastructure Library (ITIL) topic was brought up in the devops day held last Friday in a LinkedIn facility in Mountain View, CA. We, of course, had the expected spectrum of opinions about ITIL in the context of devops – from “ITIL will never work for a true continuous development shop” to “well, you can make ITIL work under such circumstances.” Needless to say, a noticeable level of passion accompanied these two statements…

IMHO the heart of the issue is not ITIL per se but system management proficiency. If your system management proficiency is high, you are likely to be able to effectively respond to 10, 20 or 50 deploys per day. Conversely, if your system management proficiency is low, ops is not likely to be able to cope with high velocity in dev. The critical piece is alignment of velocities between dev and ops, not the method used to manage IT systems and services.  Whether you use ITIL, COBIT or your own home-grown set of best practices is irrelevant. Achieving alignment of velocities between dev and ops is a matter of proficiency in system management.

How to Initiate a DevOps Project

with 4 comments


Source: 17th/21st Lancers c. 1922-1929 “THE FIGHTING SPIRIT!”

Agile consultants on a development project often start by helping the team construct a backlog. The task is sufficiently concrete to get all stakeholders (product management, project management, development, test, any others) on a collaborative track through the creation of a key artifact. The backlog establishes a baseline for the tasks to be carried out in the project.

For a DevOps project, start by establishing the technical debt of the software to be released to operations. By so doing you build the foundations for collaboration between development and operations through shared data. In the DevOps context, the technical debt data form the basis for the creation and grooming of  a unified backlog which includes various user stories from operations.

Apply the same approach when you are fortunate to be able to include folks from operations in the Agile team from the very beginning. You start with zero technical debt, but you track it on an ongoing basis and include the corresponding “fix-it” stories in the backlog as you accrue the debt. Running technical debt analytics on the source code every two weeks is a good practice to follow.
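Here is a minimal sketch of what such a biweekly technical debt check could look like, assuming a static-analysis tool has already produced findings with remediation estimates. The data format, the $75/hour cost rate, and the helper names are illustrative assumptions, not a prescribed implementation.

```python
# A minimal sketch of a biweekly technical debt check. The finding format,
# cost rate, and thresholds are assumptions for illustration only.

from dataclasses import dataclass

HOURLY_RATE = 75.0  # assumed blended cost of an engineering hour


@dataclass
class Finding:
    rule: str                # e.g. "duplicated-block", "missing-unit-test"
    file: str
    remediation_hours: float


def technical_debt_dollars(findings: list[Finding]) -> float:
    """Total estimated cost to remediate all outstanding findings."""
    return sum(f.remediation_hours for f in findings) * HOURLY_RATE


def debt_per_line(findings: list[Finding], lines_of_code: int) -> float:
    return technical_debt_dollars(findings) / max(lines_of_code, 1)


def fix_it_stories(findings: list[Finding], top_n: int = 5) -> list[str]:
    """Turn the costliest findings into candidate backlog stories for grooming."""
    worst = sorted(findings, key=lambda f: f.remediation_hours, reverse=True)[:top_n]
    return [f"Fix-it: {f.rule} in {f.file} (~{f.remediation_hours:.1f}h)" for f in worst]


if __name__ == "__main__":
    findings = [
        Finding("duplicated-block", "billing/invoice.py", 6.0),
        Finding("high-complexity", "ops/deploy.py", 4.5),
        Finding("missing-unit-test", "api/orders.py", 2.0),
    ]
    loc = 40_000
    print(f"Technical debt: ${technical_debt_dollars(findings):,.0f} "
          f"(${debt_per_line(findings, loc):.2f} per line of code)")
    for story in fix_it_stories(findings):
        print(story)
```

Feeding the resulting fix-it candidates into backlog grooming keeps the accrued debt visible to both development and operations.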

As the head of development, you might not be comfortable sharing technical debt data. This being the case, you are not ready for DevOps.

Fresh Perspectives on Technical Debt

leave a comment »

Update, October 15: The issue has been posted on the Cutter website (Cutter IT Journal subscription privileges required).

Cutter is just about ready to post the October issue of the IT Journal for which I am the guest editor. Print subscribers should receive it by the last week of the month. Jim Highsmith and I will be reflecting on it in our forthcoming seminar on technical debt in the Cutter Summit.

This issue sheds light on three noteworthy aspects of technical debt techniques:

  1. Their pragmatic use as an integral part of Governance, Risk and Compliance (GRC).
  2. Extending the techniques to shed light on various nuances of technical debt that have eluded us so far.
  3. Applying the techniques in new domains such as devops.

Here is the Table of Contents for this exciting issue:

  • Opening Statement, by Israel Gat (p. 3)
  • Modernizing the DeLorean System: Comparing Actual and Predicted Results of a Technical Debt Reduction Project, by John Heintz (p. 7)
  • The Economics of Technical Debt, by Stephen Chin, Erik Huddleston, Walter Bodwell, and Israel Gat (p. 11)
  • Technical Debt: Challenging the Metaphor, by David Rooney (p. 16)
  • Manage Project Portfolios More Effectively by Including Software Debt in the Decision Process, by Brent Barton and Chris Sterling (p. 19)
  • The Risks of Acceptance Test Debt, by Ken Pugh (p. 25)
  • Transformation Patterns for Curing the Human Causes of Technical Debt, by Jonathon Michael Golden (p. 30)
  • Infrastructure Debt: Revisiting the Foundation, by Andrew Shafer (p. 36)

Action Item: Apply the techniques recommended in this issue to govern your software assets in an effective manner.

____________________________________________________________________________________________________

Overwhelmed by a “mountain” of technical debt? Let me know if you would like assistance in devising and carrying out plans to reduce the debt in a biggest-bang-for-the-buck manner. Click Services for details.

____________________________________________________________________________________________________

The Gat/Highsmith Joint Seminar on Technical Debt and Software Governance

leave a comment »

Jim and I have finalized the content and the format for our forthcoming Cutter Summit seminar. The seminar is structured around a case study which includes four exercises. We expect the case study/exercises will take close to two-thirds of the allotted time (the morning of October 27). In the remaining third we will provide the theory and practices to be used in the seminar exercises and (hopefully) in many future technical debt engagements that workshop participants will oversee.

The seminar does not require deep technical knowledge. It targets participants who possess a conceptual grasp of software development, software governance and IT operations/ITIL. If you feel like reading a little about technical debt prior to the Summit, the various posts on technical debt in this blog will be more than sufficient.

We plan to go with the following agenda (still subject to some minor tweaking):

Agenda for the October 27, 9:30AM to 1:00PM Technical Debt Seminar

  • Setting the Stage: Why Technical Debt is a Strategic Issue
  • Part I: What is Technical Debt?
  • Part II: Case Study – NotMyCompany, Inc.
    • Exercise #1 – Modernizing NotMyCompany’s Legacy Code
  • Part III: The Nature of Technical Debt
  • Part IV: Unified Governance
    • Exercise #2 – The acquisition of SocialAreUs by NotMyCompany
  • Part V: Process Control Models
    • Exercise #3 – How Often Should NotMyCompany Stop the Line?
  • Part VI (time permitting): Using Technical Debt in Devops
    • Exercise #4 – The Agile Versus ITIL Debate at NotMyCompany

By the end of the seminar you will know how to effectively apply technical debt techniques as an integral part of software governance that is anchored in business realities and imperatives.

Written by israelgat

September 30, 2010 at 3:20 pm

Why Spend the Afternoon as well on Technical Debt?

with 2 comments

Source: http://www.flickr.com/photos/pinksherbet/233228813/

Yesterday’s post Why Spend a Whole Morning on Technical Debt? listed eight characteristics of the technical debt metric that will be discussed during the morning of October 27, when Jim Highsmith and I deliver our joint Cutter Summit seminar. This post adds to the previous one by suggesting a related topic for the afternoon.

No, I am not trying to “hijack” the Summit agenda by messing with the afternoon sessions run by my colleagues Claude Baudoin and Mitchell Ummel. I am simply pointing out a corollary to the morning seminar that might be on your mind in the afternoon. Needless to say, thinking about it in the afternoon of the 28th instead of the afternoon of the 27th is quite appropriate…

Yesterday’s post concluded with a “what it all means” statement, as follows:

Technical debt is a meaningful metric at any level of your organization and for any department in it. Moreover, it is applicable to any business process that is not yet taking software quality into account.

If you accept this premise, you can use the technical debt metric to construct boundary objects between various departments in your company/organization. The metric could serve as the heart of boundary objects between dev and IT ops, between dev and customer support, between dev and a company to which some development tasks are outsourced, etc. The point is the enablement of working agreements between multiple stakeholders through the technical debt metric. For example, dev and IT ops might mutually agree that the technical debt in the code to be deployed to the production environment will be less than $3 per line of code. Or, dev and customer support might agree that enhanced refactoring will commence if the code decays over time to more than $4 per line of code.
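As an illustration, working agreements of this kind could be encoded as simple checks shared by both parties. The sketch below is only a thought experiment: the thresholds mirror the $3 and $4 per-line examples above, while the structure and names are assumptions.

```python
# A minimal sketch of technical-debt-based working agreements (boundary objects)
# between departments. Thresholds follow the examples in the post; everything
# else is illustrative.

from dataclasses import dataclass


@dataclass
class WorkingAgreement:
    parties: tuple[str, str]
    description: str
    threshold_per_line: float  # dollars of technical debt per line of code

    def breached(self, debt_per_line: float) -> bool:
        return debt_per_line > self.threshold_per_line


AGREEMENTS = [
    WorkingAgreement(("dev", "IT ops"),
                     "code deployed to production stays under the agreed debt level",
                     3.0),
    WorkingAgreement(("dev", "customer support"),
                     "enhanced refactoring commences once the code decays past this level",
                     4.0),
]


def review(debt_per_line: float) -> None:
    """Report the status of each agreement against the latest measurement."""
    for a in AGREEMENTS:
        status = "BREACHED" if a.breached(debt_per_line) else "ok"
        print(f"{a.parties[0]} / {a.parties[1]}: {a.description} "
              f"(limit ${a.threshold_per_line:.2f}/line) -> {status}")


if __name__ == "__main__":
    review(debt_per_line=3.40)  # e.g. the figure from the latest analysis run
```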

You can align various departments by using the technical debt metric. This alignment is particularly important when the operational balance between departments has been disrupted. For example, your developers might be coding faster than your ITIL change managers can process the change requests.

A lot more on the use of the technical debt metric to mitigate cross-organizational dysfunctions, including some Outmodel aspects, will be covered in our seminar in Cambridge, MA on the morning of the 27th. We look forward to discussing this intriguing subject with you there!

Israel

Outline of the Technical Debt Seminar at the Cutter Summit

leave a comment »



Pictured above are speakers of the forthcoming Cutter Summit. Among the seventeen of us we will cover a broad spectrum of IT topics such as Agile, Enterprise Architecture, Business Strategy, Cloud Computing, Collaboration, Governance and Security. Inter-disciplinary seminars, panels and case studies will weave all those threads together to give participants a clear view of the unfolding transformation in IT and of the new way(s) companies are starting to utilize IT. Click here for details.

As Jim Highsmith and I continue to develop our joint seminar on technical debt for the summit, I would like to give readers of this blog a sense of where we are and ask for feedback. Right now we are considering the following building blocks for the seminar:

  • The Nature of Technical Debt
  • Technical Debt Metrics
  • Monetizing Technical Debt
  • Constructing Roadmaps for Paying Back Technical Debt
  • Risk Assessment and Mitigation
  • A Simple Software Governance Framework
  • Schedule in the Simple Governance Framework
  • Enlightened Governance
  • Baking in Quality One Build at a Time
  • How Often Should the Project Team Regroup?
  • Multi-Level Governance
  • Extending  Technical Debt Techniques to Devops
  • Use of Technical Debt Techniques in Agile Portfolio Management
  • The Start Afresh Option
  • Technical Debt as an Integral Part of a Value Delivery Culture

In the course of going through a subset of these building blocks, we will cover the latest and greatest from the October issue of the Cutter IT Journal on technical debt, present two case studies, and conduct a few group exercises.

Through the Prism of IT Transformation for Tomorrow’s Enterprise Datacenters: Interview with Annie Shum

with one comment

As indicated in our recent post “Extending the Scope of The Agile Executive”, Cote and I have reached the conclusion that The Agile Executive needs to cover structural changes in order to give a forward-looking view to its readers. We start the coverage of structural changes that are relevant to Agile with an interview with Annie Shum, VP of Advanced Technology, Amdocs Corp.

We cover a broad panorama in this interview with Annie. Here are some items that may be of special interest to the reader who focuses on Agile methods, processes and governance in a broad sense – from programming to IT operations and anything in between:

  • Unleashing disruptive transformations
  • Supply and demand – the two sides of the IT “coin”
  • Open source software in general and OpenStack in particular
  • The impact of social networking and other Web 2.0 tools
  • Three billion downloads and counting…
  • Finding the “right” balance between hierarchical command-control and bottom-up empowerment
  • “Self-service” IT service delivery/deployment
  • Forthcoming changes in IT system administration and the rise of DevOps
  • How to gain freedom from a variety of low-level operational tasks and controls of physical infrastructure
  • Provisioning and over provisioning
  • Many others…

Annie answers all questions with data, insights and passion. No surprises there…

Israel: Nancy Foy immortalized the monolithic International Business Machines Corporation in her classic “The Sun Never Sets on IBM.” Much has changed, of course, since the book was published in the 70’s. For quite a few years IBM has been deconstructing its business design, its organizational structure and both internal and external processes. By some accounts, prior to Gerstner IBM had even been contemplating reforming itself as a bunch of independent companies. The contrast to IBM’s announcement a couple of weeks ago about putting both software and hardware under one hand is noteworthy. What do you make of it, Annie? Is this a new development? Or is it a blast from the past?

Annie: Interesting question but I would be remiss if I failed to point out that I don’t have a crystal ball or the expertise to predict reliably whether this will be an isolated case or a trend-setter.  Although the arguably radical IBM organizational restructuring in management is newsworthy, I am not especially interested in looking at it purely from the perspective of vendor management structure because it is merely a means to an end.  What intrigues me is the rationale behind this key announcement.  In particular, I am interested in envisioning the more profound and potentially game-changing, if not disruptive, transformation that IBM hopes to unleash by adopting this bold organizational restructuring with likely (significant) risks.

To better understand this new undertaking, I think it would be instructive to analyze it from the supply side as well as the demand side.  So let’s break up the narrative: first by looking at the supply side, namely the IT service providers/system vendors, followed by the second half of the narrative, the customers/consumers.

Israel: I am intrigued by your supply side/demand side approach. Please elaborate.

Annie: To understand the supply side, consider the three major IT vendor announcements made during the week of July 19, 2010.  Not as three disparate events. Instead, by putting them in context and connecting the dots among them, we can uncover some very interesting insights into emerging trends of the IT industry in general and actionable guidelines for tomorrow’s enterprise datacenters in particular. 

Let’s begin with the May 2010 report from Saugatuck Research titled, “Gorillas In the Cloud: Applying Saugatuck’s “Master Brand” Model to Cloud IT” whereby “Master Brands” refer to those vendors (and service providers) that dominate and influence IT marketplaces, technologies and/or user accounts. This May report sets the stage for the latest Saugatuck research alert titled, “One-Stop Shopping – Major Vendors Acquire Assets for the Cloud”. This research alert describes how increasing numbers of major vendors are striving to become the “sole source for offerings up and down the IT EcoStack™ targeting the Cloud.”

As if on cue, IBM released two major announcements just this past week. First, on July 20, 2010, InformationWeek reported that IBM plans[i] to combine hardware and software to spur the company’s efforts to deliver bundled, plug-and-play systems. According to Sam Palmisano, the core strategy pivots on producing tightly bundled computer systems that “feature chips, middleware, and business software designed from the ground up to support Cloud Computing and other new-wave IT architectures.”

To some long-standing industry observers, this strategy may appear to be “back to the future” and IBM is simply returning to its roots after a prolonged hiatus from its original business model.  There is, however, an important historical footnote. Almost five decades ago, due to concerns of monopoly antitrust abuses stemming from the bundling of hardware and software in the IBM mainframe systems, the US government took legal action leading to IBM’s acceptance of the 1956 Consent Decree.

Today, unlike the past, IBM no longer dominates the computer systems market. In fact, there is a growing trend towards bundled systems, mainly by the “Master Brands”, to “mask” complexity for customers as they embark on implementing complex IT endeavors including key programs such as datacenter consolidation, server/storage virtualization, predictive analytics, SOA/BPM, Cloud Computing (public, private or hybrid), and Green IT. For example, Oracle acquired Sun Microsystems in 2009 for $7.4 billion to support what InformationWeek described as Larry Ellison’s “applications-to-disk” strategy, while HP and Microsoft earlier this year unveiled a multi-million dollar initiative under which they will jointly engineer servers and software.

It is likely that the timeline of the July 19 IBM announcement was influenced (perhaps even pressured) by its rivals taking a similar approach to address evolving enterprise datacenters. To expedite this strategy of delivering bundled “plug-and-play” systems, IBM first announced sweeping organizational restructuring to foster internal collaboration and harness synergies across products and LOBs. Clearly, the biggest change is the management restructuring that consolidates key hardware and software divisions under the watch of a single executive, Steve Mills, IBM’s longtime software chief.

Next, just three days later on the heels of this organizational makeover, IBM made another major announcement on July 22, 2010 amidst much fanfare and hype. Presenting the vision of a new “Dimension in Computing” designed to control multi-platform datacenter operational costs and significantly reduce complexity, IBM announced a new hybrid “system of systems” platform that unifies IT for efficient service delivery and large-scale datacenter simplification. Dubbed a “datacenter in a box” or a “cloud in a box[ii]”, it integrates the new super powerful and energy-efficient mainframe zEnterprise 196, running z/OS, and the zEnterprise BladeCenter Extension zBX, running Linux and AIX. By extending the System Z’s qualities of service (spanning security, scalability, availability, efficiency and virtualization) to enable Cloud readiness and optimized service delivery for enterprises, IBM likely is promoting its strength in building private Clouds for large enterprises. See the following two slides from the IBM July 22 announcement.


Israel: So it looks like the IT industry is heading towards more “power” consolidation of mega vendors or, as you referenced earlier, “Master Brands”. Is this a fait accompli? If so, is it a matter of channeling demand toward one-stop-shopping irrespective of integration realities underneath? Isn’t there a danger to this trend?

Annie: Despite these high profile announcements by the major vendors, it is far from fait accompli. And yes, your comments are only too real especially for those who have lived through the era of monopolies and antitrust concerns. Frankly, many people believe that such a trend may be a clear threat in the presently emerging era.  While I don’t want to downplay the risk and potential damage of antitrust abuses, I believe there are some factors at work here to counteract, or at least limit, unchecked monopolies in the IT industry.

In this Internet age with the rise of “Consumerization of IT”, catalyzed by the nearly ubiquitous access to social networking and other Web 2.0 tools, IT has permeated almost every market sector in our society. The set of functions and services supported and enabled by IT has become exceedingly vast, diverse and complex such that no single business model or supplier is in a position to dominate, let alone destroy all others.  The era when a handful of proprietary stalwart vendors dominated the IT industry is all but over. Just this past decade, we have witnessed the meteoric rise of Google, Facebook and more recently, Twitter. A growing formidable force, namely the open source software and its bottom-up self-organizing community, powers as well as empowers most if not all of the Web 2.0 companies. At this point in our discussion, it is apt to segue to the third vendor announcement during the week of July 19, 2010.

On July 19, Cloud service provider RackSpace, together with NASA, announced sponsorship of the OpenStack project, an open source IaaS Cloud platform. The announcement included a diverse group of computer system providers from across the technology industry – CITRIX, DELL, NTT DATA, RIGHTSCALE and others – committed to driving a deployable, totally open cloud solution. According to their mission statement, OpenStack is designed to foster the emergence of technology standards and Cloud interoperability. One of its primary objectives is to help enterprises avoid vendor lock-in.

Israel: This appears to be a very timely announcement given that “vendor lock-in” is one of the top concerns confronting enterprises as they evaluate and plan for the transition to Cloud Computing. Having said that, are we not back to “square zero” – striking a balance between openness and “one-stop shopping” tight integration?

Annie: Yes indeed. Although some industry observers describe the issue as “vendor lock-in”, others see it as a broader issue, describing it as the “challenge/difficulty of bringing back in-house” or the “lack of interoperability standards for seamless portability”. For example, in the 2009 Cloud Computing survey conducted by IDC, over 80% of those surveyed rated this issue, under both labels, as very important. Incidentally, I should point out that “vendor lock-in” is neither a new nor a unique issue with Cloud Computing. On the contrary, it is a long-standing “problem” going all the way back to the early days of mainframe computing and culminating with the government versus IBM antitrust lawsuit in the ‘50s, as we discussed earlier.

Interestingly, there are many forms and variants of vendor lock-in and they are not all equal. For example, many industry observers have been unhappy with the proprietary development and delivery model that Apple imposed on the iPod/iPhone/iPad. Although the risk of “vendor lock-in” may be real, any negative impact on the ever-growing large and loyal Apple customer base seems minimal. Just think about the runaway success of the App Store. It is heavily “curated” by Apple. Yet since its opening on July 10, 2008, more than one hundred thousand apps have become available in the App Store, with over two billion application downloads as of November 2009 and three billion by January 2010. Steve Jobs hailed this as a landmark event: “Three billion applications downloaded in less than 18 months – this is like nothing we’ve ever seen before.”

Sorry we digressed. So let’s resume our discussion of the recent major announcements.  In a nutshell, the OpenStack announcement attempts to address the issue directly by allowing any organization to create and offer Cloud Computing capabilities using open source software freely available under the Apache 2.0 license running on standard hardware.

Now this gets interesting: a tale of two diametrically opposite strategies. On one hand, we have IBM announcing the high performance zEnterprise 196 as a hybrid integrated multi-architecture “datacenter/Cloud in a box”. The goal is to mask complexity and maximize efficiency in both infrastructure (management/admin cost savings of up to 70%) and energy consumption (up to 82% reduction in energy usage) with a bundled technology stack integrating multiple platforms, infrastructure and management (spanning service, platform and hardware). A principal concern of this proprietary single vendor approach is the risk of “vendor lock-in”.

On the other hand, OpenStack is “DIY”, based on an open source development platform. The goal of OpenStack is the following: “Anyone can run it, build on it, or submit changes back to the project. We strongly believe that an open development model is the only way to foster badly-needed cloud standards, remove the fear of proprietary lock-in for cloud customers, and create a large ecosystem that spans Cloud providers.” The cons/challenges of this approach are probably similar to those of conventional “DIY” open source projects.

I should clarify that this dichotomy may be seen as an entire spectrum. As noted here, IBM, VMware, etc. on one hand and RackSpace, Eucalyptus, etc. on the other exemplify the two end-points bookending the dichotomy spectrum. Along the spectrum, there are a growing number of intermediate options/offerings (with a rising number of variations) by a wide variety of IT Cloud service vendors: stalwart vendors including Amazon, Microsoft, Google and Salesforce.com, as well as young companies and startups such as RackSpace, RightScale, Boomi, Canonical, Cloudkick, Opscode and others.

Israel: Is this shaping up to be a battle between two diametrically opposite strategies? And if so, which one will come out on top? Or is it a draw?

Annie: To me, a similar dichotomy has existed previously in the IT industry. For example, think Apple versus Google. Consider the modus operandi of the Apple core business model (“closed or at least closely curated” to optimize user experience and quality) versus that of Google (“open standards/APIs” to maximize opportunities for 3rd party development participation).

Insofar as whether bundled systems or “Cloud in a box” versus open source “DIY” will be the ultimate winner, I have to defer to other industry observers with more experience, such as you. Perhaps in a future Q&A meet-up I would be interested to hear your views on how the competition may eventually be settled. However, while we all await the uncertain outcome, IT practitioners should be mindful that the dichotomy spectrum would have profound implications not only on the supply side but also on the demand side. In particular, because the offerings from the dichotomy spectrum will be rapidly evolving, the fluidity will very likely confound and confuse users/consumers as they attempt to balance a convoluted set of different tradeoffs. Many enterprise IT practitioners will be under pressure to make difficult and ambiguous choices, picking one or more evolving offerings over other evolving offerings to build the foundation of tomorrow’s enterprise datacenters in the Cloud era.

Israel: Good timing. So far in our Q&A today you have focused on the first half of the narrative – namely, the supply side. Now let’s continue to part 2 of your narrative, namely, the demand side.

Annie:  Earlier, I discussed the supply side by connecting the dots among three key announcements during the week of July 19. Now similarly for the demand side, I will suggest a few more dots that I believe should be connected. Specifically, I suggest connecting the following trends:

  • The growing complexities and inefficiencies of on-premises enterprise datacenters;
  • The inevitable rise of alternative delivery and deployment models for IT services; and
  • The advent of Cloud Computing:  a long-standing vision whose time may finally arrive.

Several months ago, I published a guest post on your blog site entitled “The Urgency of Now.” You might recall that I began the post with some sobering and perhaps even alarming statistics about the gross inefficiency of traditional on-premises enterprise datacenters. Here again is the Enterprise Datacenter Index at-a-glance:

In summary, enterprise IT faces a “crisis of staggering complexity” and IT infrastructure is reaching a “breaking point” marked by such salient factors/trends[iii] as the following:

  • 1.5 X: Information explosion driving over fifty percent yearly growth in storage shipments;
  • 85% idle: Over-provisioned waste, primarily in distributed computing environments – typical computing resources (capacity) remain idle on average more than eighty percent of the time;
  • $40 billion, or 3.5% of sales: The retail industry’s annual loss due to (supply) value chain inefficiencies;
  • 60-70% of IT spending on maintenance/overhead: The overall IT spending profile shows that the lion’s share of IT expenses goes towards overhead and maintenance – roughly seventy cents of every dollar is spent on maintaining IT infrastructures at the expense of adding new capabilities.

Now consider the following scenario. Suppose enterprise IT could choose an alternative set of “self-service” IT service delivery/deployment models, orthogonal to traditional hierarchical command-and-control, CapEx-based datacenters. Instead of owning and tightly controlling its own private internal datacenter and purchasing capital resources up front, an organization would “rent” pooled computing resources on demand, hosted in the provider’s multi-tenant environment. The Internet would serve as the global infrastructure “grid” and all services would be delivered through Web APIs. In lieu of having a dedicated IT staff administering IT operations, users could avoid lengthy red-tape delays and directly provision and manage computing capacity as “self-service IT”. In addition, instead of formal contracts and protracted delays in hardware procurement, an organization would pay for access at any time to “unlimited” computing capacity simply with a credit card.

Because there would not be formal contracts imposing preset time commitments, both entry and exit would be friction-free. In this way, an organization could accelerate time-to-value/market and help to catalyze experimentation and innovative endeavor. Furthermore, CIOs of enterprise IT could avoid or mitigate the lose-lose dilemma because they would not be restricted to choosing either a policy that leads to “waste due to over-provisioning” using peak usage estimates for capacity planning or a policy that can incur “risk due to under-provisioning” using non-peak estimates. Ideally, IT staff would “plan capacity based on typical usage” while confident that it could “scale dynamically at peak times” to maintain performance and SLAs. Simply put, the primary objectives for today’s organizations are not just about increasing speed and efficiency for back office automation. Rather, they also are about increasing speed and flexibility to adapt to changes by yielding judicious control to providers for on-demand utility computing services off-premises.
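To make the over-provisioning versus under-provisioning trade-off concrete, here is a rough back-of-the-envelope sketch. Every number in it (server counts, hourly costs, peak hours) is invented for illustration and is not drawn from the interview.

```python
# Illustrative arithmetic only: comparing "provision for peak" against
# "plan for typical usage and burst on demand". All figures are assumed.

PEAK_SERVERS = 100           # capacity needed to survive peak load
TYPICAL_SERVERS = 15         # capacity actually busy most of the time (~85% idle under peak provisioning)
SERVER_COST_PER_HOUR = 0.50  # owned-hardware cost amortized per server-hour
CLOUD_COST_PER_HOUR = 0.68   # assumed on-demand premium per "rented" server-hour
HOURS_PER_MONTH = 730
PEAK_HOURS_PER_MONTH = 40    # hours per month the workload actually needs peak capacity

# Policy 1: own enough capacity for peak usage and pay for it around the clock.
over_provisioned = PEAK_SERVERS * SERVER_COST_PER_HOUR * HOURS_PER_MONTH

# Policy 2: plan for typical usage and scale out on demand only during peaks.
elastic = (TYPICAL_SERVERS * CLOUD_COST_PER_HOUR * HOURS_PER_MONTH
           + (PEAK_SERVERS - TYPICAL_SERVERS) * CLOUD_COST_PER_HOUR * PEAK_HOURS_PER_MONTH)

print(f"Provision for peak (owned):   ${over_provisioned:,.0f}/month")
print(f"Typical + on-demand bursts:   ${elastic:,.0f}/month")
print(f"Idle capacity under policy 1: {(1 - TYPICAL_SERVERS / PEAK_SERVERS):.0%} of provisioned servers, most of the month")
```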

Conceptually, this scenario is an overall vision of Cloud Computing. With the advent of Cloud Computing, the vision of “Computing as a Utility” is beginning to take shape. Since the early days of time-sharing computing, that vision has taken a quantum leap towards reality. One of the earliest references to Utility Computing occurred in 1961 at the MIT Centennial. On that occasion, John McCarthy presented his vision of computing organized as a public utility. Just as the telephone system had developed into a major industry, Professor McCarthy envisioned that “Computing as a Utility” could one day become the basis of a new and important public industry.

Rooted in the long-standing vision and hope for “Computing as a Utility” that began more than half a century ago, the genesis of Cloud Computing goes back a long way. To a growing number of industry observers, it is an old idea whose time may have finally arrived when, in 2006, Amazon began offering Cloud infrastructure services to the public as a utility. Despite initial skepticism, it was a watershed event in the quest of Utility Computing and helped to usher in the first wave of industrial-strength commercial Cloud Computing offerings.

Israel: To wrap up our discussion today, can you leave us with a few thoughts about some of the implications of Cloud Computing as enterprises begin their transition to the Cloud?

Annie: Eric Schmidt, Google’s Chairman and Chief Executive has stated that Cloud computing will be “the defining technological shift of our Generation”. However, the media and vendor-spun hype (at times referred to as “cloud-washing”) around this topic has created an unprecedented level of confusion. Today, unabated sound and fury surrounding the Cloud Computing buzz continues and indeed, increases. Nevertheless, it is all but certain that there will be no “big or easy switch” for enterprise IT to transition overnight from running applications on premises to the Cloud. Because the shift is not an “all-or-nothing” or a “one size fits all” endeavor, stakeholders in enterprises should take a judicious measured approach to balance different tradeoffs.

Sustaining the transition of enterprise IT to the Cloud will require not only technological advances but also new business models, new forms of IT organizational management structure and perhaps even new IT roles. One of the “inconvenient” truths about embracing new user-empowerment technology trends and business models is the slippery slope of finding the “right” balance between hierarchical command-control and bottom-up empowerment. The harm (ineffectiveness and counter-productivity) of too much top-down control can be matched or even surpassed by the dangers of too little control. User empowerment without reasonable constraints can lead to anarchy and chaos. A new form of organizational governance is clearly required to avoid these problems. Striking a balance between planned orderliness and new emergent forces has been a challenging dynamic since the dawn of civilization.

Many of the principles that have been refined over the millennia will have direct applicability for governing tomorrow’s world of “self-service” computing in the Cloud. Clearly, there will be direct implications for how security and governance related policies are scrutinized, shaped and changed. However, an organization should not overlook the human aspects and the cultural impact on IT system administration personnel. For example, resistance to sweeping changes driven by a fear of losing control, and the stress over the prospect of losing employment, can be among the more profound ramifications that often stay under the management radar.

Cloud Computing likely will change the status quo of IT system administration and, perhaps in the future, could obviate the need for some traditional IT system skills. Cloud Computing, however, is also opening new opportunities for the technical IT community and enterprise IT personnel. There is a growing consensus that, as Cloud Computing evolves, the need for more business-minded IT staff will accelerate. Specifically, there likely will be an urgent need for people “with broader business skills who can manage multiple supplier relationships.” Freed from a variety of low-level operational tasks and controls of physical infrastructure via Cloud Computing, enterprise IT has the opportunity to promote system administration staff to higher-level decision makers serving as IT service facilitators and SLA contract managers. In the near future, many traditional hierarchical command-control system operators may pursue a wider array of IT professional opportunities spanning enterprise architecture; capacity planning; budget planning; performance assurance; and data, security and governance gatekeeping.

Israel: This really resonates with what I see happening in many of my consulting engagements. Successful companies waste an immense amount of capital, energy and management attention on migrating from yesterday’s datacenter to today’s or tomorrow’s datacenter. When exposed to the pains of such migrations, I am always reminded of Peter Drucker’s quip “Companies make shoes!” It is beyond me why companies that make shoes, cars, drugs or financial instruments would want to be prisoners of their own success, hopping from one data center to a bigger data center every few years.

Annie: Thanks for Peter Drucker’s quip. I am going to borrow it for my future use.

Israel: Annie, I can’t thank you enough for sharing your insights with us. You really connect the dots!

Endnotes:

[i] Based on the assumption that IT infrastructure performance can be greatly enhanced when each element is designed and brought to market as a component of a tightly integrated, optimized system.

[ii] With this slogan, IBM is promoting the hybrid zEnterprise 196 integrating multiple architectures and OS in a “box” as the one stop shopping ready-made private Cloud for enterprises.

[iii] Information source from IBM, The Open Group Conference, July 22, 2009.

Extending the Scope of The Agile Executive

leave a comment »

For the past 18 months Michael Cote and I have focused The Agile Executive on software methods, processes and governance. Occasional posts on cloud computing and devops have been supplementary in nature. Structural changes in the industry have generally been left to be covered by other blogs (e.g. Cote’s Redmonk blog).

We have recently reached the conclusion that The Agile Executive needs to cover structural changes in order to give a forward-looking view to its readers. Two reasons drove us to this conclusion:

  • The rise of software testing as a service. The importance of this trend was summarized in Israel’s recent Cutter blog post “Changing Playing Fields“:

Consider companies like BrowserMob (acquired earlier this month by NeuStar), Feedback Army,  Mob4Hire,  uTest (partnered with SOASTA a few months ago), XBOSoft and others. These companies combine web and cloud economics with the effectiveness and efficiency of crowdsourcing. By so doing, they change the playing fields of software delivery…

  • The rise of devops. The line between dev and ops, or at least between dev and web ops, is becoming fuzzier and fuzzier.

As monolithic software development and delivery processes get deconstructed, the structural changes affect methods, processes and governance alike. Hence, discussion of Agile topics in this blog will not be complete without devoting a certain amount of “real estate” to these two changes (software testing as a service and devops) and others that are no doubt forthcoming. For example, it is a small step from testing as a service to development as a service in the true sense of the word – through crowdsourcing, not through outsourcing.

I asked a few friends to help me cover forthcoming structural changes that are relevant to Agile. Their thoughts will be captured through either guest posts or interviews. In these posts/interviews we will explore topics for their own sake. We will connect the dots back to Agile by referencing these posts/interviews in the various posts devoted to Agile. Needless to say, Agile posts will continue to constitute the vast majority of posts in this blog.

We will start next week with a guest post by Peter McGarahan and an interview with Annie Shum. Stay tuned…

Technical Debt Meets Continuous Deployment

with 11 comments

As you would expect at a conference entitled Velocity, and at a follow-on devops day, speeding things up was an overarching theme. In the context of devops, the theme primarily manifested itself in lively discussions about the number of deploys per day. Comments such as the following reply to my post Ops Driven Dev were typical:

Conceptually, I move the whole business application configuration into the source code…

The theme that was missing for me in many of the presentations and discussions on the subject was striking a balance between velocity and quality. The classical trade-off in process control is between production rate and product quality (and safety, but that aspect is beyond the scope of this post). IMHO this trade-off applies to software just as it applies to mechanical or chemical processes.

The heart of the “deploy early and often” strategy hailed by advocates of continuous deployment is moving from one known deployment state to another known deployment state. You don’t let the deployment evolve from one state to another before it has stabilized to a robust state. The power of this incremental deployment is in dealing with single-piece flow (or as small a number of pieces as possible) rather than dealing with the effects of multiple-piece flow. When the deployment increments are small enough, rollback, root cause analysis and recovery are relatively straightforward if a deployment turns sour. It is a similar concept to Agile development, extending continuous integration to continuous deployment.
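A minimal sketch of the known-state-to-known-state idea is shown below: deploy one small increment, verify it, and fall back to the last known-good state on failure. The deploy and smoke-test scripts are hypothetical stand-ins for whatever tooling a given shop actually uses.

```python
# A minimal sketch of incremental deployment from known state to known state.
# ./deploy.sh and ./smoke_test.sh are assumed placeholders, not real tools.

import subprocess


def run(cmd: list[str]) -> bool:
    """Run a shell command, returning True on success."""
    try:
        return subprocess.run(cmd, check=False).returncode == 0
    except OSError:
        return False


def deploy_increment(revision: str) -> bool:
    # Stand-in for the real deployment step (pushing one small changeset).
    return run(["./deploy.sh", revision])


def healthy() -> bool:
    # Stand-in for post-deploy verification (smoke tests, monitoring checks).
    return run(["./smoke_test.sh"])


def rollback(revision: str) -> None:
    # Return to the last state known to be stable.
    run(["./deploy.sh", revision])


def deploy_in_small_steps(known_good: str, increments: list[str]) -> str:
    """Advance one increment at a time; stop and roll back on the first failure."""
    for revision in increments:
        if deploy_increment(revision) and healthy():
            known_good = revision      # the new known deployment state
        else:
            rollback(known_good)       # single-piece flow makes this straightforward
            break
    return known_good


if __name__ == "__main__":
    last_stable = deploy_in_small_steps("rev-104", ["rev-105", "rev-106", "rev-107"])
    print(f"Current known-good deployment state: {last_stable}")
```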

While I am wholeheartedly behind this devops strategy, I believe it needs to be reinforced through rigorous quality criteria the code must satisfy prior to deployment. The most straightforward way of doing so is to embed technical debt criteria in the release/deploy process. For example (a sketch of such a gate follows the list below):

  • The code will not be deployed unless the overall technical debt per line of code is lower than $2.
  • To qualify for deployment, code duplication levels must be kept under 8%.
  • Code whose Cyclomatic complexity per Java class is higher than 15 will not be accepted for deployment.
  • 50% unit test coverage is the minimal level required for deployment.
  • Many others…
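Here is a minimal sketch of such a pre-deployment gate built on the criteria above. The report fields and the gate function are assumptions; in practice the numbers would come from whatever static-analysis and coverage tools the team already runs.

```python
# A minimal sketch of a pre-deployment quality gate based on the criteria listed
# above. The QualityReport structure and metric sources are assumed.

from dataclasses import dataclass


@dataclass
class QualityReport:
    debt_per_line: float           # dollars of technical debt per line of code
    duplication_pct: float         # percentage of duplicated code
    max_class_complexity: int      # highest Cyclomatic complexity per Java class
    unit_test_coverage_pct: float  # coverage from the unit test run


def deployment_gate(r: QualityReport) -> list[str]:
    """Return the list of violated criteria; an empty list means 'clear to deploy'."""
    violations = []
    if r.debt_per_line >= 2.0:
        violations.append(f"technical debt ${r.debt_per_line:.2f}/line (must be < $2)")
    if r.duplication_pct > 8.0:
        violations.append(f"code duplication {r.duplication_pct:.1f}% (must be <= 8%)")
    if r.max_class_complexity > 15:
        violations.append(f"Cyclomatic complexity {r.max_class_complexity} (must be <= 15)")
    if r.unit_test_coverage_pct < 50.0:
        violations.append(f"unit test coverage {r.unit_test_coverage_pct:.0f}% (must be >= 50%)")
    return violations


if __name__ == "__main__":
    report = QualityReport(debt_per_line=1.75, duplication_pct=6.2,
                           max_class_complexity=22, unit_test_coverage_pct=64.0)
    problems = deployment_gate(report)
    if problems:
        print("Deployment blocked:")
        for p in problems:
            print(f"  - {p}")
    else:
        print("All technical debt criteria satisfied; clear to deploy.")
```

Wiring a check like this into the release pipeline turns the criteria from guidelines into an enforced quality gate.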

I have no doubt whatsoever that code which does not satisfy these criteria might be successfully deployed in a short-term manner. The problem, however, is the cumulative effect over the long haul of successive deployments of code increments of inadequate quality. As Figure 1 demonstrates, a Java file with Cyclomatic complexity of 38 has a 50% probability of being error-prone. If you do not stop it prior to deployment through technical debt criteria, it is likely to affect your customers and play havoc with your deployment quite a few times in the future. The fact that it did not do so during the first hour of deployment does not guarantee that such a file will be “well-behaved” in the future.


Figure 1: Error-proneness as a Function of Cyclomatic Complexity (Source: http://www.enerjy.com/blog/?p=198)

To attain satisfactory long-term quality and stability, you need both the right process and the right code. Continuous deployment is the “right process” if you have developed the deployment infrastructure to support it. The “right code” in this context is code whose technical debt levels are quantified and governed prior to deployment.

Velocity 2010 – Where Have All The Business Executives Gone?

leave a comment »


By now I have “touched” and been touched by dozens and dozens of participants in the O’Reilly Velocity conference. I did not meet any business executive (other than various CEOs/CMOs who pitched their companies from the podium).

No doubt, the O’Reilly Velocity conferences (this is the third one) are geek events. Having said that, IMHO these conferences are extremely important to the business executive. If you don’t attend, you miss out on four value propositions:

  • Getting a sense of what the future in web operations holds.
  • Grasping what forthcoming advances in web operations mean to your business design. See for example my post Ops Driven Dev from earlier today.
  • Understanding the needs of a growing demographic sector that your business might not be able to access today.
  • Getting to know the kind of developers and sysadmins that probably work in the trenches of your company.

You owe it to yourself to consider attending Velocity 2011 if terms like “sysadmin 2.0”, “darkmode” and “devops” do not resonate with you. I can’t give you a “your money back” guarantee if you are not satisfied with the conference, but I will gladly pick up the bar tab when we meet there.

Written by israelgat

June 24, 2010 at 3:02 pm