The Agile Executive

Making Agile Work

Posts Tagged ‘On-premises’

Through the Prism of IT Transformation for Tomorrow’s Enterprise Datacenters: Interview with Annie Shum


As indicated in our recent post “Extending the Scope of the Agile Executive”, Cote and I have reached the conclusion that The Agile Executive needs to cover structural changes in order to give its readers a forward-looking view. We start our coverage of structural changes relevant to Agile with an interview with Annie Shum, VP of Advanced Technology, Amdocs Corp.

We cover a broad panorama in this interview with Annie. Here are some items that may be of special interest to the reader who focuses on Agile methods, processes and governance in a broad sense – from programming to IT operations and anything in between:

  • Unleashing disruptive transformations
  • Supply and demand – the two sides of the IT “coin”
  • Open source software in general and OpenStack in particular
  • The impact of social networking and other Web 2.0 tools
  • Three billion downloads and counting…
  • Finding the “right” balance between hierarchical command-control and bottom-up empowerment
  • “Self-service” IT service delivery/deployment
  • Forthcoming changes in IT system administration and the rise of DevOps
  • How to gain freedom from a variety of low-level operational tasks and controls of physical infrastructure
  • Provisioning and over provisioning
  • Many others…

Annie answers all questions with data, insights and passion. No surprises there…

Israel: Nancy Foy immortalized the monolithic International Business Machines Corporation in her classic “The Sun Never Sets on IBM.” Much has changed, of course, since the book was published in the ’70s. For quite a few years IBM has been deconstructing its business design, its organizational structure and both its internal and external processes. By some accounts, prior to Gerstner, IBM had even contemplated breaking itself up into a set of independent companies. The contrast with IBM’s announcement a couple of weeks ago about putting both software and hardware under one hand is noteworthy. What do you make of it, Annie? Is this a new development? Or is it a blast from the past?

Annie: Interesting question, but I would be remiss if I failed to point out that I don’t have a crystal ball or the expertise to predict reliably whether this will be an isolated case or a trend-setter. Although IBM’s arguably radical management restructuring is newsworthy, I am not especially interested in looking at it purely from the perspective of vendor management structure, because that is merely a means to an end. What intrigues me is the rationale behind this key announcement. In particular, I am interested in envisioning the more profound and potentially game-changing, if not disruptive, transformation that IBM hopes to unleash by adopting this bold, and likely risky, organizational restructuring.

To better understand this new undertaking, I think it would be instructive to analyze it from the supply side as well as the demand side. So let’s break the narrative into two parts: first the supply side, namely the IT service providers and system vendors, and then the demand side, namely the customers and consumers.

Israel: I am intrigued by your supply side/demand side approach. Please elaborate.

Annie: To understand the supply side, consider the three major IT vendor announcements made during the week of July 19, 2010, not as three disparate events but in context. By connecting the dots among them, we can uncover some very interesting insights into emerging trends in the IT industry in general and actionable guidelines for tomorrow’s enterprise datacenters in particular.

Let’s begin with the May 2010 report from Saugatuck Research titled “Gorillas In the Cloud: Applying Saugatuck’s ‘Master Brand’ Model to Cloud IT”, whereby “Master Brands” refers to those vendors (and service providers) that dominate and influence IT marketplaces, technologies and/or user accounts. That May report sets the stage for the latest Saugatuck research alert, “One-Stop Shopping – Major Vendors Acquire Assets for the Cloud”, which describes how increasing numbers of major vendors are striving to become the “sole source for offerings up and down the IT EcoStack™ targeting the Cloud.”

As if on cue, IBM released two major announcements just this past week. First, on July 20, 2010, InformationWeek reported that IBM plans[i] to combine hardware and software to spur the company’s efforts to deliver bundled, plug-and-play systems. According to Sam Palmisano, the core strategy pivots on producing tightly bundled computer systems that “feature chips, middleware, and business software designed from the ground up to support Cloud Computing and other new-wave IT architectures.”

To some long-standing industry observers, this strategy may appear to be “back to the future”: IBM simply returning to its roots after a prolonged hiatus from its original business model. There is, however, an important historical footnote. More than five decades ago, antitrust concerns over the bundling of hardware and software in IBM mainframe systems led the US government to take legal action, culminating in IBM’s acceptance of the 1956 Consent Decree.

Today, unlike in the past, IBM no longer dominates the computer systems market. In fact, there is a growing trend towards bundled systems, mainly from the “Master Brands”, to “mask” complexity for customers as they embark on complex IT endeavors such as datacenter consolidation, server/storage virtualization, predictive analytics, SOA/BPM, Cloud Computing (public, private or hybrid), and Green IT. For example, Oracle acquired Sun Microsystems in 2009 for $7.4 billion to support what InformationWeek described as Larry Ellison’s “applications-to-disk” strategy, while HP and Microsoft earlier this year unveiled a multi-million dollar initiative under which they will jointly engineer servers and software.

It is likely that the timing of the July 19 IBM announcement was influenced (perhaps even pressured) by rivals taking a similar approach to evolving enterprise datacenters. To expedite its strategy of delivering bundled “plug-and-play” systems, IBM first announced a sweeping organizational restructuring to foster internal collaboration and harness synergies across products and LOBs. Clearly, the biggest change is the management restructuring that consolidates key hardware and software divisions under the watch of a single executive, Steve Mills, IBM’s longtime software chief.

Next, just three days later and on the heels of this organizational makeover, IBM made another major announcement on July 22, 2010 amidst much fanfare and hype. Presenting the vision of a new “Dimension in Computing” designed to control multi-platform datacenter operational costs and significantly reduce complexity, IBM announced a new hybrid “system of systems” platform that unifies IT for efficient service delivery and large-scale datacenter simplification. Dubbed a “datacenter in a box” or a “cloud in a box[ii]”, it integrates the new, powerful and energy-efficient zEnterprise 196 mainframe running z/OS with the zEnterprise BladeCenter Extension (zBX) running Linux and AIX. By extending System z’s qualities of service (spanning security, scalability, availability, efficiency and virtualization) to enable Cloud readiness and optimized service delivery for enterprises, IBM is likely promoting its strength in building private Clouds for large enterprises. See the following two slides from the IBM July 22 announcement.


Israel: So it looks like the IT industry is heading towards more “power” consolidation among mega vendors or, as you referenced earlier, “Master Brands”. Is this a fait accompli? If so, is it a matter of channeling demand toward one-stop shopping irrespective of the integration realities underneath? Isn’t there a danger to this trend?

Annie: Despite these high-profile announcements by the major vendors, it is far from a fait accompli. And yes, your concerns are only too real, especially for those who have lived through the era of monopolies and antitrust actions. Frankly, many people believe that such a trend may be a clear threat in the presently emerging era. While I don’t want to downplay the risk and potential damage of antitrust abuses, I believe there are factors at work here to counteract, or at least limit, unchecked monopolies in the IT industry.

In this Internet age, with the rise of the “Consumerization of IT” catalyzed by nearly ubiquitous access to social networking and other Web 2.0 tools, IT has permeated almost every market sector in our society. The set of functions and services supported and enabled by IT has become so vast, diverse and complex that no single business model or supplier is in a position to dominate, let alone displace, all others. The era when a handful of proprietary stalwart vendors dominated the IT industry is all but over. Just this past decade, we have witnessed the meteoric rise of Google, Facebook and, more recently, Twitter. A formidable and growing force, namely open source software and its bottom-up, self-organizing community, powers as well as empowers most if not all of the Web 2.0 companies. At this point in our discussion, it is apt to segue to the third vendor announcement during the week of July 19, 2010.

On July 19, Cloud service provider Rackspace, together with NASA, announced the sponsorship of OpenStack, an open source IaaS Cloud platform. Included in the announcement is a diverse group of technology providers, among them Citrix, Dell, NTT DATA and RightScale, committed to driving a deployable, totally open Cloud solution. According to its mission statement, OpenStack is designed to foster the emergence of technology standards and Cloud interoperability. One of its primary objectives is to help enterprises avoid vendor lock-in.

Israel: This appears to be a very timely announcement given that “vendor lock-in” is one of the top concerns confronting enterprises as they evaluate and plan for the transition to Cloud Computing. Having said that, are we not back to “square zero” – striking a balance between openness and “one-stop shopping” tight integration?

Annie: Yes indeed. Although some industry observers describe the issue as “vendor lock-in”, others see it more broadly as the “challenge/difficulty of bringing back in-house” or the “lack of interoperability standards for seamless portability”. For example, in the 2009 Cloud Computing survey conducted by IDC, over 80% of those surveyed rated this issue, under either label, as very important. Incidentally, I should point out that “vendor lock-in” is neither a new issue nor one unique to Cloud Computing. On the contrary, it is a long-standing “problem” going all the way back to the early days of mainframe computing and culminating in the government-versus-IBM antitrust action of the ’50s that we discussed earlier.

Interestingly, there are many forms and variants of vendor lock-in, and they are not all equal. For example, many industry observers have been unhappy with the proprietary development and delivery model that Apple imposed on the iPod/iPhone/iPad. Although the risk of “vendor lock-in” may be real, any negative impact on Apple’s large, loyal and ever-growing customer base seems minimal. Just think about the runaway success of the App Store. It is heavily “curated” by Apple. Yet since its opening on July 10, 2008, the App Store has grown to more than one hundred thousand available apps, passed two billion application downloads as of November 2009, and reached three billion downloads by January 2010. Steve Jobs hailed this as a landmark event: “Three billion applications downloaded in less than 18 months – this is like nothing we’ve ever seen before.”

Sorry we digressed. So let’s resume our discussion of the recent major announcements.  In a nutshell, the OpenStack announcement attempts to address the issue directly by allowing any organization to create and offer Cloud Computing capabilities using open source software freely available under the Apache 2.0 license running on standard hardware.

Now this gets interesting: a tale of two diametrically opposed strategies. On one hand, we have IBM announcing the high-performance zEnterprise 196 as a hybrid, integrated, multi-architecture “datacenter/Cloud in a box”. The goal is to mask complexity and maximize efficiency, both in infrastructure (management and administration cost savings of up to 70%) and in energy consumption (up to 82% reduction in energy usage), with a bundled technology stack that integrates multiple platforms, infrastructure and management (spanning service, platform and hardware). A principal concern with this proprietary, single-vendor approach is the risk of “vendor lock-in”.

On the other hand, OpenStack is “DIY”, based on an open source development platform. The goal of OpenStack is the following: “Anyone can run it, build on it, or submit changes back to the project. We strongly believe that an open development model is the only way to foster badly-needed cloud standards, remove the fear of proprietary lock-in for cloud customers, and create a large ecosystem that spans Cloud providers.” The cons and challenges of this approach are probably similar to those of conventional “DIY” open source projects.

I should clarify that this dichotomy is better seen as a spectrum. IBM, VMware and the like on one hand, and Rackspace, Eucalyptus and the like on the other, exemplify the two end-points bookending that spectrum. Along it, there is a growing number of intermediate offerings (with a rising number of variations) from a wide variety of Cloud service vendors: stalwarts including Amazon, Microsoft, Google and Salesforce.com, as well as young companies and startups such as Rackspace, RightScale, Boomi, Canonical, Cloudkick and Opscode.

Israel: Is this shaping up to be a battle between two diametrically opposite strategies? And if so, which one will come out on top? Or is it a draw?

Annie: To me, a similar dichotomy has existed before in the IT industry. For example, think Apple versus Google. Consider the modus operandi of Apple’s core business model (“closed, or at least closely curated,” to optimize user experience and quality) versus that of Google (“open standards/APIs” to maximize opportunities for third-party development participation).

As to whether bundled systems (“Cloud in a box”) or open source “DIY” will be the ultimate winner, I have to defer to industry observers with more experience, such as you. Perhaps in a future Q&A I would be interested to hear your views on how the competition may eventually be settled. However, while we all await the uncertain outcome, IT practitioners should be mindful that this spectrum will have profound implications not only on the supply side but also on the demand side. In particular, because the offerings along the spectrum will be rapidly evolving, the fluidity will very likely confound and confuse users and consumers as they attempt to balance a convoluted set of tradeoffs. Many enterprise IT practitioners will be under pressure to make difficult and ambiguous choices, picking some evolving offerings over others as the foundation of tomorrow’s enterprise datacenters in the Cloud era.

Israel: Good timing. So far in our Q&A today you have focused on the first half of the narrative, namely the supply side. Now let’s continue to the second half, namely the demand side.

Annie:  Earlier, I discussed the supply side by connecting the dots among three key announcements during the week of July 19. Now similarly for the demand side, I will suggest a few more dots that I believe should be connected. Specifically, I suggest connecting the following trends:

  • The growing complexities and inefficiencies of on-premises enterprise datacenters;
  • The inevitable rise of alternative delivery and deployment models for IT services; and
  • The advent of Cloud Computing:  a long-standing vision whose time may finally arrive.

Several months ago, I published a guest post on your blog entitled “The Urgency of Now.” You might recall that I began the post with some sobering, perhaps even alarming, statistics about the gross inefficiency of traditional on-premises enterprise datacenters. Here again is the Enterprise Datacenter Index at-a-glance:

In summary, enterprise IT faces a “crisis of staggering complexity” and IT infrastructure is reaching a “breaking point” marked by such salient factors/trends[iii] as the following:

  • 1.5x: Information explosion driving over fifty percent yearly growth in storage shipments;
  • 85% idle: Over-provisioned waste, primarily in distributed computing environments, where typical computing capacity sits idle more than eighty percent of the time on average;
  • $40 billion, or 3.5% of sales: The retail industry’s annual loss due to supply (value) chain inefficiencies;
  • 60-70% of IT spending on maintenance/overhead: The lion’s share of IT expenses goes towards overhead and maintenance, with roughly seventy cents of every dollar spent on maintaining existing IT infrastructure at the expense of adding new capabilities.

Now consider the following scenario. Suppose enterprise IT could choose an alternative set of “self-service” IT service delivery/deployment models, orthogonal to the traditional hierarchical, command-and-control, CapEx-based datacenter. Instead of owning and tightly controlling its own private internal datacenter and purchasing capital resources up front, an organization would “rent” pooled computing resources on demand, hosted in the provider’s multi-tenant environment. The Internet would serve as the global infrastructure “grid” and all services would be delivered through Web APIs. In lieu of a dedicated IT staff administering IT operations, users could bypass lengthy red tape and directly, immediately provision and manage computing capacity as “self-service IT”. In addition, instead of formal contracts and protracted delays in hardware procurement, an organization would pay for access to “unlimited” computing capacity at any time, simply with a credit card.

Because there would be no formal contracts imposing preset time commitments, both entry and exit would be friction-free. In this way, an organization could accelerate time-to-value and time-to-market and help catalyze experimentation and innovation. Furthermore, enterprise CIOs could avoid or mitigate a lose-lose dilemma: they would no longer be restricted to choosing between a policy that leads to “waste due to over-provisioning” (capacity planning based on peak usage estimates) and a policy that incurs “risk due to under-provisioning” (planning based on non-peak estimates). Ideally, IT staff would “plan capacity based on typical usage” while remaining confident that they could “scale dynamically at peak times” to maintain performance and SLAs. Simply put, the primary objectives for today’s organizations are not just about increasing the speed and efficiency of back-office automation. They are also about increasing the speed and flexibility to adapt to change by yielding judicious control to providers of on-demand, off-premises utility computing services.
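To make the over- versus under-provisioning tradeoff concrete, here is a minimal sketch in Python. The demand curve and unit prices are purely hypothetical illustrations (not vendor pricing); the point is only to contrast a fleet sized for peak demand with a usage-based, pay-as-you-go bill.

```python
# Minimal sketch: a fleet provisioned for peak demand vs. paying only for what is used.
# All numbers below are hypothetical illustrations, not vendor pricing.

hourly_demand = [40, 35, 30, 45, 60, 120, 300, 90]   # servers needed in each period
hours_per_period = 3                                  # each entry covers a 3-hour slice

on_prem_cost_per_server_hour = 0.50   # assumed amortized CapEx + operations
cloud_cost_per_server_hour = 0.80     # assumed on-demand premium per server-hour

# Policy 1: own enough servers to cover the peak at all times (over-provisioning).
peak = max(hourly_demand)
fixed_cost = peak * len(hourly_demand) * hours_per_period * on_prem_cost_per_server_hour

# Policy 2: rent exactly what each period needs (the "self-service" utility model).
elastic_cost = sum(d * hours_per_period * cloud_cost_per_server_hour for d in hourly_demand)

utilization = sum(hourly_demand) / (peak * len(hourly_demand))

print(f"Peak-provisioned fleet: {peak} servers, cost ${fixed_cost:,.0f}, "
      f"average utilization {utilization:.0%}")
print(f"Usage-based (pay-as-you-go) cost: ${elastic_cost:,.0f}")
```

With a spiky demand curve like this one, the peak-provisioned fleet sits mostly idle, so the usage-based bill can undercut it even at a higher unit price; with a flat demand curve the comparison can easily reverse, which is why this remains a tradeoff rather than a foregone conclusion.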

Conceptually, this scenario is the overall vision of Cloud Computing. With the advent of Cloud Computing, the vision of “Computing as a Utility” is beginning to take shape. That vision, which dates back to the early days of time-sharing, has now taken a quantum leap towards reality. One of the earliest references to Utility Computing occurred in 1961 at the MIT Centennial. On that occasion, John McCarthy presented his vision of computing organized as a public utility. Just as the telephone system had developed into a major industry, Professor McCarthy envisioned that “Computing as a Utility” could one day become the basis of a new and important public industry.

Rooted in the long-standing vision and hope for “Computing as a Utility” that began more than half a century ago, the genesis of Cloud Computing goes back a long way. To a growing number of industry observers, it is an old idea whose time may have finally arrived when, in 2006, Amazon began offering Cloud infrastructure services to the public as a utility. Despite initial skepticism, it was a watershed event in the quest of Utility Computing and helped to usher in the first wave of industrial-strength commercial Cloud Computing offerings.

Israel: To wrap up our discussion today, can you leave us with a few thoughts about some of the implications of Cloud Computing as enterprises begin their transition to the Cloud?

Annie: Eric Schmidt, Google’s Chairman and Chief Executive, has stated that Cloud Computing will be “the defining technological shift of our generation”. However, the media and vendor-spun hype (at times referred to as “cloud-washing”) around this topic has created an unprecedented level of confusion. Today, the unabated sound and fury surrounding the Cloud Computing buzz continues and, indeed, increases. Nevertheless, it is all but certain that there will be no “big or easy switch” for enterprise IT to transition overnight from running applications on premises to the Cloud. Because the shift is not an “all-or-nothing” or a “one size fits all” endeavor, stakeholders in enterprises should take a judicious, measured approach to balance the different tradeoffs.

Sustaining the transition of enterprise IT to the Cloud will require not only technological advances but also new business models, new forms of IT organizational and management structure and perhaps even new IT roles. One of the “inconvenient” truths about embracing new user-empowerment technology trends and business models is the slippery slope of finding the “right” balance between hierarchical command-control and bottom-up empowerment. The harm (ineffectiveness and counter-productivity) of too much top-down control can be matched or even surpassed by the dangers of too little control. User empowerment without reasonable constraints can lead to anarchy and chaos. A new form of organizational governance is clearly required to avoid these problems. Striking a balance between planned orderliness and new emergent forces has been a challenging dynamic since the dawn of civilization.

Many of the principles that have been refined over the millennia will have direct applicability to governing tomorrow’s world of “self-service” computing in the Cloud. Clearly, there will be direct implications for the scrutiny, shaping and changing of security- and governance-related policies. However, an organization should not overlook the human aspects and the cultural impact on IT system administration personnel. For example, resistance to sweeping changes driven by a fear of losing control, and the stress over the prospect of losing employment, can be among the more profound ramifications, and they often stay under the management radar.

Cloud Computing likely will change the status quo of IT system administration and, in the future, could obviate the need for some traditional IT system skills. Cloud Computing, however, is also opening new opportunities for the technical IT community and enterprise IT personnel. There is a growing consensus that, as Cloud Computing evolves, the need for more business-minded IT staff will accelerate. Specifically, there likely will be an urgent need for people “with broader business skills who can manage multiple supplier relationships.” Freed by Cloud Computing from a variety of low-level operational tasks and controls of physical infrastructure, enterprise IT has the opportunity to promote system administration staff into higher-level decision makers acting as IT service facilitators and SLA contract managers. In the near future, many traditional command-and-control system operators may pursue a wider array of IT professional opportunities, spanning the roles of enterprise architect, capacity planner, budget planner, performance assurance specialist, and data, security and governance gatekeeper.

Israel: This really resonates with what I see happening in many of my consulting engagements. Successful companies waste an immense amount of capital, energy and management attention on migrating from yesterday’s datacenter to today’s or tomorrow’s datacenter. When exposed to the pains of such migrations, I am always reminded of Peter Drucker’s quip “Companies make shoes!” It is beyond me why companies that make shoes, cars, drugs or financial instruments would want to be prisoners of their own success, hopping from one datacenter to a bigger datacenter every few years.

Annie: Thanks for Peter Drucker’s quip. I am going to borrow it for my future use.

Israel: Annie, I can’t thank you enough for sharing your insights with us. You really connect the dots!

Endnotes:

[i] Based on the assumption that IT infrastructure performance can be greatly enhanced when each element is designed and brought to market as a component of a tightly integrated, optimized system.

[ii] With this slogan, IBM is promoting the hybrid zEnterprise 196, integrating multiple architectures and operating systems in a “box”, as a one-stop-shopping, ready-made private Cloud for enterprises.

[iii] Information source from IBM, The Open Group Conference, July 22, 2009.

Harnessing Economies of Scale in Cloud Computing to Realize a Greener Computing Option


Economies of Scale have been much discussed in The Agile Executive since the recent OpsCamp in Austin, TX. The significant savings on system administration costs in very large datacenters have been called out as a major advantage of Internet-scale Clouds. Unlike various short-lived advantages, the benefits to the Cloud operator, and to the Cloud user when the savings are passed on, are sustainable.

In this guest post, colleague and friend Annie Shum analyzes the various sources of waste in operations in traditional data centers. Like an Agilist with Lean inclinations who confronts an inefficient Waterfall process, Annie explains how economies of scale apply to the various kinds of waste that are prevalent in today’s small and medium data centers. Furthermore, she connects the dots that lead toward a Green IT option.

Here is Annie:

Harnessing Economies of Scale in Cloud Computing to Realize a Greener Computing Option

Scale Matters: “Over time, however, competitive advantage within categories shifts inexorably toward volume operations architecture.” – Geoffrey Moore, “Dealing with Darwin”

It is a truism that today’s datacenters are systemically inefficient. This is not intended as an indictment of all conventional datacenters. Nor does it imply that today’s datacenters cannot be made more efficient (incrementally) through right sizing and other initiatives, notably consolidation by deploying virtualization technologies and governance by enforcing energy conservation/recycling policies. There are a myriad of inefficiencies, however, that are prevalent in datacenters today.

Many industry observers lament the “staggering complexity” that permeates on-premises datacenters. Over time, most, if not all, enterprise IT datacenters have become amalgamations of disparate heterogeneous resources. Generally, they can be described as incohesive, perhaps even haphazard, accumulations. The datacenter components and configurations often reflect the intersection of organizational politics (LOB reporting structures leading to highly customized, organization-specific asset acquisitions and configurations), business needs of the moment (shifting corporate strategies and changing business imperatives to gain a competitive edge or meet regulatory compliance) and technology limitations (the commercial tools available in the marketplace). It should come as no surprise that human interactions and errors are considered a major contributor to the inefficiencies of datacenters: IBM reported that human errors account for seventy percent of datacenter problems.

The challenge of maximizing energy efficiency begins fundamentally with the historical capital-intensive ownership model for computing assets, under which each organization operates its own datacenter and provides “24×7 availability” to its own users. Enterprise IT staff have been required to support unpredictable future growth, accommodate situational demands and unscheduled but deadline-critical events, meet performance levels within SLAs and comply with regulatory and auditing requirements. Hence, datacenters generally are over-configured and over-provisioned. In addition to highly skewed under-utilization of distributed platform servers, ninety percent of corporate datacenters have excess cooling capacity. Worst of all, according to IBM, about seventy-two percent of cooling bypasses the computing equipment entirely. Further compounding these problems for the typical enterprise datacenter is a lack of transparency and an inability to control energy consumption properly, owing to inadequate and often inaccurate instrumentation for quantifying energy consumption and energy loss.

The economics of Cloud Computing can offer a compelling option for more efficient IT: by lowering power consumption for individual organizations and by improving the efficiency of a large number of discrete datacenters. Although the electricity consumption of Cloud Computing is projected to be one to two percent of today’s global electricity use, Cloud service providers can still cultivate sustainable Green IT effectively at lower costs by leveraging state-of-the-art, highly energy-efficient massive datacenters, proximity to power generation (thereby reducing transmission costs) and, above all, enormous economies of scale. To better understand how Cloud Computing can offer greener computing and how it will help moderate power consumption by datacenters and rein in runaway costs, a good starting place is James Hamilton’s September 2008 study on “Internet-Scale Service Efficiency”, as summarized in the table below.

Resource         Cost in Medium DC       Cost in Very Large DC    Ratio
Network          $95 / Mbps / month      $13 / Mbps / month       7.1x
Storage          $2.20 / GB / month      $0.40 / GB / month       5.7x
Administration   ≈140 servers/admin      >1000 servers/admin      7.1x

Table 1: Internet-Scale Service Efficiency [Source: James Hamilton]

This study concludes that hosted services by Cloud providers with super-large datacenters (at least tens of thousands of servers) can achieve economies of scale of five to seven times over smaller, medium-sized deployments (thousands of servers). The significant cost savings are driven primarily by scale. Other key factors include location (low-cost real estate and electricity rates, abundant water supply and readily available fiber-optic connectivity), proximity to electricity and power generators, load diversity, and virtualization technologies.
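As a back-of-the-envelope illustration of what those ratios mean in dollars, the sketch below applies the Table 1 figures to an assumed workload; the footprint (bandwidth, storage, server count) and the loaded administrator cost are hypothetical stand-ins, not figures from the study.

```python
# Back-of-the-envelope comparison using the Table 1 figures above.
# The workload footprint and administrator cost are assumed examples.

workload = {"mbps": 500, "gb": 200_000, "servers": 1_400}   # hypothetical footprint

medium_dc = {"network_per_mbps": 95.0, "storage_per_gb": 2.20, "servers_per_admin": 140}
very_large_dc = {"network_per_mbps": 13.0, "storage_per_gb": 0.40, "servers_per_admin": 1000}

admin_monthly_cost = 10_000   # assumed fully loaded cost per administrator, per month

def monthly_cost(dc):
    network = workload["mbps"] * dc["network_per_mbps"]
    storage = workload["gb"] * dc["storage_per_gb"]
    admins = workload["servers"] / dc["servers_per_admin"]
    administration = admins * admin_monthly_cost
    return network + storage + administration

medium, very_large = monthly_cost(medium_dc), monthly_cost(very_large_dc)
print(f"Medium DC:     ${medium:,.0f}/month")
print(f"Very large DC: ${very_large:,.0f}/month  (~{medium / very_large:.1f}x cheaper)")
```

Under these assumptions the blended advantage works out to roughly 5.8x, squarely inside the five-to-seven-fold range the study cites; different workload mixes shift the blend toward either the 5.7x storage ratio or the 7.1x network and administration ratios.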

Will this mark the beginning of the end for traditional on-premises datacenters? Can enterprise IT continue to justify new business cases for expanding today’s non-renewable-energy-powered datacenters? According to the McKinsey article, the cost to launch a large enterprise datacenter has risen sharply from $150M to over $500M over the past five years. Facility operating costs are also increasing at about twenty percent per year. How long will the status quo last for enterprise IT, considering the recent moves of Cloud service providers? Major players such as Google and Microsoft, as well as the U.S. government itself, have invested in or are planning ultra-energy-efficient, mega-sized datacenters (also known as “container hotels”) built around massive commoditized containerization and proximity to both power sources and less expensive power rates. Bottom line: will the tide turn if the economics (radical cost savings) due to enormous economies of scale become too significant to ignore?
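For a sense of how quickly that twenty-percent annual growth compounds, here is a small sketch; the growth rate comes from the figure quoted above, while the starting operating budget is an assumed example.

```python
# Compounding of facility operating costs at ~20% per year.
# Growth rate is the figure quoted in the text; the starting budget is assumed.
import math

annual_growth = 0.20
starting_opex = 10_000_000   # hypothetical annual facility operating cost, in dollars

for year in range(6):
    projected = starting_opex * (1 + annual_growth) ** year
    print(f"Year {year}: ${projected:,.0f}")

doubling_time = math.log(2) / math.log(1 + annual_growth)
print(f"At this rate, operating costs double roughly every {doubling_time:.1f} years")
```

At that pace the operating bill doubles in under four years, which is the arithmetic behind the question of how long the status quo can last.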

Despite the potential for significant cost savings, it is premature to declare the demise of traditional IT or the end of enterprise datacenters. After all, the rationale for today’s enterprise IT extends well beyond simplistic bottom-line economics – at least for now. To most industry observers, enterprise datacenters are unlikely to disappear, although the traditional roles of enterprise IT will be changing. A likely scenario may involve redistributing IT personnel from low-level system operational tasks to higher-level functions involving governance, energy management, security and business processes. Such change will not only become more apparent but will likely be precipitated by the rise of hybrid Clouds and the growing interconnection linking SOA, BPM and social computing. Another likely scenario is the rise of mega datacenters, or “container hotels”, for Cloud Utility Computing providers. Although the global economic outlook will undoubtedly play a key role in shaping the development plans and timelines of these mega datacenters, they are here to stay. Case in point: Intel estimates that by 2012 about a quarter of the server chips it sells will ship to such mega-datacenters.


Cloud Computing Forecasts: “Cloudy” Future for Enterprise IT


In a comment on The Urgency of Now, Marcel Den Hartog discusses technology assimilation in the face of hype:

But if people are already reluctant to run the things they have, on another platform they already have, on an operating system they are already familiar with (Linux on zSeries), how can you expect them to even look at cloud computing seriously? Every technological advancement requires people to adapt and change. Human nature is that we don’t like that, so it often requires a disaster to change our behavior. Or carefully planned steps to prove and convince people. However, nothing makes IT people more cautious than a hype. And that is how cloud is perceived. When the press, the analysts and the industry start writing about cloud as part of the IT solution, people will want to change. Now that it’s presented as the silver bullet to all IT problems, people are cautious to say the least.

Here is Annie Shum‘s thoughtful reply to Marcel’s comment:

Today, the Cloud era has only just begun. Despite lingering doubts, growing concerns and widespread confusion (especially separating media and vendor-spun hype from reality), the IT industry generally views Cloud Computing as more appealing than traditional ASP/hosting or outsourcing/off-shoring. Cloud Computing enables technology-centric startups and nimble entrepreneurs to punch above their weight class. By turning up-front CapEx into a more scalable and variable cost structure based on an on-demand, pay-as-you-go model, Cloud Computing can provide a temporary, level playing field. Similarly, many budget-constrained and cash-strapped organizations also look to Cloud Computing for immediate (friction-free) access to “unlimited” computing resources. To wit: Cloud Computing may be considered a utility-based alternative to an on-premises datacenter, allowing an organization (notably a cash-strapped startup) to “Think like a ‘big guy’. Pay like a ‘little guy’.”

Forward-thinking organizations should not lose sight of the vast potential of Cloud Computing that extends well beyond short-term economics. At its core, Cloud Computing is about enabling business agility and connectivity by abstracting computing infrastructure via a new set of flexible service delivery/deployment models. Harvard Business School Professor Andrew McAfee painted a “Cloudy” future for Corporate IT in his August 21, 2009 blog and cited a perceptive 1983 paper by Warren D. Devine, Jr. in the Journal of Economic History called “From Shafts to Wires: Historical Perspective on Electrification”.[1] There are three key take-away messages that resonate with the current Cloud Computing paradigm shift. First, the real impact of the new technology was not apparent right away. Second, the transition to full utilization of the new technology will be long, but inevitable. Third, there will be detractors and skeptics about the new technology throughout the transition. Interestingly, the telephone is another groundbreaking disruptive technology that might have faced similar skepticism in the beginning. Legend has it that a Western Union internal memo dated 1876 downplayed the viability of the telephone: “This ‘telephone’ has too many shortcomings to be seriously considered as a means of communications. The device is inherently of no value to us.”

The dominance of Cloud Computing as a computing platform, however, is far from a fait accompli. Nor will the transition ever be complete, “one size fits all”, or a “big and overnight switch”. The shape of computing is constantly changing, but the change is always a blended and gradual transition, analogous to a modern city. While the cityscape continues to change, a complete “rip-and-replace” overhaul is rarely feasible or cost-effective. Instead, city planners generally preserve legacy structures, although some of them are retrofitted with standards-based interfaces that connect them to the shared infrastructure of the city. For example, the Paris city planners retrofitted Notre Dame with facilities such as electricity, water and plumbing. Similarly, despite the passage of the last three computing paradigm shifts – first mainframe, next Client/Server and PCs, and then Web N-tier – they all co-exist and can be expected to continue into the future. Consider the following: a major share of mission-critical business applications runs today on mainframe servers, and through application modernization, legacy applications – Cobol applications, for example – can now operate in a Web 2.0 environment as well as be deployed in the Cloud via the Amazon EC2 platform.

Cloud Computing holds great appeal for a wide swath of organizations spanning startups, SMBs, ISVs, enterprise IT and government agencies. The most commonly cited benefits range from the promise of avoiding CapEx and lowering TCO to on-demand elasticity, immediacy and ease of deployment, time to value, location independence and catalyzing innovation. However, there is no magic in the Cloud and it is certainly not a panacea for all IT woes. Some applications are not “Cloud-friendly”. While deploying applications in the Cloud can enable business agility incrementally, such deployment will not fundamentally change the characteristics of the applications to be highly scalable, flexible and automatically responsive to new business requirements. Realistically, one must recognize that many of the challenging problems – security, data integration and service interoperability in particular – will persist and live on regardless of the computing delivery medium: Cloud, hosted or on-premises.

[1] “The author combed through the contemporaneous business and technology press to learn what ‘experts’ were saying as manufacturing switched over from steam to electrical power, a process that took about 50 years to complete.” – Andrew McAfee, September 21, 2009.

I will go one step further and add quality to Annie’s list of challenging problems. A crappy on-premises application will continue to be crappy in the cloud. An audit of the technical debt should be conducted before “clouding” an application. See Technical Debt on Your Balance Sheet for a recommendation on quantifying the results of the quality audit.

The Urgency of Now – Guest Post by Annie Shum


Failure to learn, failure to anticipate and failure to adapt are the three generic causes of military disasters. Each of these three failures is bad enough. In combination, they can be catastrophic. Germany swiftly defeated and conquered France in 1940 due to the utter failure of the French army to grasp the nature of future war, to anticipate the probable action of the German forces and to react adequately to the German initiative once it unfolded through the Ardennes. The patterns leading to the catastrophe suffered by the French are similar in some ways to the eco-meltdowns described by Jared Diamond in Collapse: How Societies Choose to Fail or Succeed.

In this guest post, colleague and friend Annie Shum poses disturbing questions with respect to our willingness and ability as IT professionals to learn, anticipate and adapt to the imperatives of Cloud Computing. Between shockingly low (15%) server capacity utilization on the one hand, and dramatic changes in the needs of the business on the other, companies that continue to use industrial-era IT models are at peril. Annie weaves these and other related threads together, and makes a resounding call to action to re-think IT.

It is remarkable that Annie’s analysis of the root causes of a possible meltdown in IT identifies worrisome patterns similar to those the Agile movement has pointed out with respect to arcane methods of software development. The very same core problems that afflict software development manifest themselves in the IT paradigm as well as in the corresponding business design. Painful and wasteful as this repeated manifestation is, it actually creates the opportunity to manage software, IT and the business in unison. To do so, we need to embrace a data-driven version of the economics of IT, to grasp the true nature of Cloud Computing without the hype that currently surrounds it, and to adapt software development, IT operations and business design accordingly. As the title of this post states, we need to start carrying out these three tasks now.

Here is Annie:

The Urgency of Now: The Edge of Chaos and A “Strategic Inflection Point” for IT

“It was the worst of times. It may be the best of times.” – IBM

Consider the following table. It contains a list of statistics pertaining to the enterprise datacenter index compiled by Peter Mell and Tim Grance of NIST. Overall, the statistics are sobering, perhaps even alarming, and do not bode well for the long-term sustainability of traditional on-premises datacenters. Prudent IT organizations – whether big or small, stalwart or startup – should consider this a wake-up call. In particular, of the almost twelve million servers in US datacenters today, typical server capacity utilization is only around fifteen percent. Although not explicitly shown in this table, the average utilization of mainframe z/OS servers is typically over eighty percent. However, mainframe z/OS servers are only a minor component of the overall average server utilization.

Statistics              Enterprise Datacenter Index
11,800,000              Servers in US datacenters
15%                     Typical server capacity utilization
$800,000,000,000/year   Purchasing & maintaining enterprise software
80%                     Software costs spent on maintenance: the “80-20” ratio
100x                    Power consumption per sq ft compared to an office building
4x                      Increase in server power consumption, 2001 to 2006
2x                      Increase in number of servers, 2001 to 2006
$21,300,000             Datacenter construction cost, 9,000 sq ft
$1,000,000/year         Annual cost to power the datacenter
1.5%                    Portion of national power generation
50%                     Potential power reduction from green technologies
2%                      Portion of global carbon emissions

Over the years, organizations have accepted such skewed levels of server inefficiency and escalating IT infrastructure maintenance costs as the norm. Even as organizations continue to express concerns, many seem tacitly resigned to the status quo, akin to what Bob Evans of InformationWeek described as “insurmountable laws of physics.” Looking ahead, however, the status quo may no longer be a viable option for most organizations. Soaring electricity and power costs, compounded by the recent global financial meltdown and the near collapse of the financial system that triggered a prolonged (and, for now, apparently indefinite) credit crunch, have made these unparalleled, strident and chaotic times for businesses. Pressured by business decision-makers who are under a heightened level of anxiety, enterprise IT now confronts a transformative dilemma: whether to preserve the status quo or to re-think IT.

On one hand, the current global recessionary down cycle is a particularly powerful (albeit fear-rooted) and instinctive deterrent to challenging the status quo. For risk-averse organizations, it is understandable why the status quo, fundamental flaws notwithstanding, may trump disruptive change during these challenging times. On the other hand, forward-thinking decision-makers may make the bold but disruptive (radical) choice to view the status quo as the fundamental problem: acknowledging the growing “urgency of now” and resolving to overcome and correct the entrenched shortcomings of enterprise IT.

“You never want a serious crisis to go to waste.” That quote (or one of its many variations) has been attributed to economists and politicians alike. The same could be said for IT. Indeed, a growing number of IT industry observers believe the profound impact of the on-going economic crisis could offer a rare window of opportunity for organizations to rethink traditional capital-intensive, command-control, on-premises IT operations and invest in new and more flexible self-service IT delivery/deployment models. Think of this defining moment as what Andy Grove, co-founder of Intel, described as a “strategic inflection point”. He was referring to the point in the life of a business when its fundamentals are about to change, and “that change can mean an opportunity to rise to new heights.” Nonetheless, these will be hard decisions because the options are stark: either counter-intuitively invest in a down cycle by focusing on a more sustainable but disruptive trajectory, or hunker down and risk an irreversibly shrinking business.

As one considers how to address the challenges of today’s enterprise IT, perhaps the following two observations should be taken into account. First, despite the quantum leap in technology advancements, the basic design and delivery models of existing IT applications/services are generally variations of traditionally insular, back-office automation business tools. Second, the organizational structure and business models of most companies are deeply rooted in models of yesteryear, in many instances dating back to the Industrial Revolution. In theory, adhering to the traditional organizational model of top-down command-control can maximize predictability, efficiency and order. Heretofore, this has been the modus operandi for most organizations, which Umair Haque succinctly characterized as “industrial-era companies that make industrial-era stuff — and play by industrial-era rules.” In today’s exponential times, however, the velocity of change and the rapidly growing need to interconnect with other organizations and automate value chains inevitably lead to an increase in uncertainty and disorder. Strategically, forward-thinking organizations should consider seeking alternative models to address the interdependent and shifting new world order.

In their book “Presence – Human Purpose and the Field of the Future”, authors Peter Senge, Otto Scharmer, Joseph Jaworski and Betty Sue Flowers observe that many of the practices of the Industrial Age appear to be largely unaffected by the changing reality of today’s society and continue to expand in today’s business organizations. They conclude with this advice: “As long as our thinking is governed by industrial ‘machine age’ metaphors such as control, predictability, and faster is better, we will continue to re-create organizations as we have had – for the last 100 years – despite their increasing disharmony with the world and the science of the 21st century.” Likewise, the traditional top-down, command-control modus operandi of enterprise IT today does not adequately reflect, and hence is likely unable to accommodate fully, the transformational shift of business from silo organizations to “all things digital all the time”, hyper-interconnected and hyper-interdependent ecosystems.