The Agile Executive

Making Agile Work


Through the Prism of IT Transformation for Tomorrow’s Enterprise Datacenters: Interview with Annie Shum


As indicated in our recent post “Extending the Scope of the Agile Executive”, Cote and I have reached the conclusion that The Agile Executive needs to cover structural changes in order to give its readers a forward-looking view. We start the coverage of structural changes relevant to Agile with an interview with Annie Shum, VP of Advanced Technology, Amdocs Corp.

We cover a broad panorama in this interview with Annie. Here are some items that may be of special interest to the reader who focuses on Agile methods, processes and governance in a broad sense – from programming to IT operations and anything in between:

  • Unleashing disruptive transformations
  • Supply and demand – the two sides of the IT “coin”
  • Open source software in general and OpenStack in particular
  • The impact of social networking and other Web 2.0 tools
  • Three billion downloads and counting…
  • Finding the “right” balance between hierarchical command-control and bottom-up empowerment
  • “Self-service” IT service delivery/deployment
  • Forthcoming changes in IT system administration and the rise of DevOps
  • How to gain freedom from a variety of low-level operational tasks and controls of physical infrastructure
  • Provisioning and over provisioning
  • Many others…

Annie answers all questions with data, insights and passion. No surprises there…

Israel: Nancy Foy immortalized the monolithic International Business Machines Corporation in her classic “The Sun Never Sets on IBM.” Much has changed, of course, since the book was published in the 1970s. For quite a few years IBM has been deconstructing its business design, its organizational structure and its internal and external processes. By some accounts, prior to Gerstner, IBM had even contemplated reconstituting itself as a set of independent companies. The contrast with IBM’s announcement a couple of weeks ago about putting both software and hardware under one executive is noteworthy. What do you make of it, Annie? Is this a new development, or a blast from the past?

Annie: Interesting question, but I would be remiss if I failed to point out that I don’t have a crystal ball or the expertise to predict reliably whether this will be an isolated case or a trend-setter. Although IBM’s arguably radical management restructuring is newsworthy, I am not especially interested in looking at it purely from the perspective of vendor management structure, because that structure is merely a means to an end. What intrigues me is the rationale behind this key announcement. In particular, I am interested in envisioning the more profound and potentially game-changing, if not disruptive, transformation that IBM hopes to unleash by adopting this bold, and likely risky, organizational restructuring.

To better understand this new undertaking, I think it would be instructive to analyze it from the supply side as well as the demand side. So let’s break the narrative in two: first the supply side, namely the IT service providers and system vendors, and then the demand side, the customers and consumers.

Israel: I am intrigued by your supply side/demand side approach. Please elaborate.

Annie: To understand the supply side, consider the three major IT vendor announcements made during the week of July 19, 2010, not as three disparate events but in context. By connecting the dots among them, we can uncover some very interesting insights into emerging trends in the IT industry in general and actionable guidelines for tomorrow’s enterprise datacenters in particular.

Let’s begin with the May 2010 report from Saugatuck Research titled “Gorillas In the Cloud: Applying Saugatuck’s ‘Master Brand’ Model to Cloud IT”, in which “Master Brands” refers to those vendors (and service providers) that dominate and influence IT marketplaces, technologies and/or user accounts. The May report sets the stage for the latest Saugatuck research alert, “One-Stop Shopping – Major Vendors Acquire Assets for the Cloud”, which describes how increasing numbers of major vendors are striving to become the “sole source for offerings up and down the IT EcoStack™ targeting the Cloud.”

As if on cue, IBM released two major announcements just this past week. First, on July 20, 2010, InformationWeek reported that IBM plans[i] to combine hardware and software to spur the company’s efforts to deliver bundled, plug-and-play systems. According to Sam Palmisano, the core strategy pivots on producing tightly bundled computer systems that “feature chips, middleware, and business software designed from the ground up to support Cloud Computing and other new-wave IT architectures.”

To some long-standing industry observers, this strategy may appear to be “back to the future”: IBM simply returning to its roots after a prolonged hiatus from its original business model. There is, however, an important historical footnote. More than five decades ago, due to antitrust concerns over monopoly abuses stemming from the bundling of hardware and software in IBM mainframe systems, the US government took legal action that led to IBM’s acceptance of the 1956 Consent Decree.

Today, unlike the past, IBM no longer dominates the computer systems market. In fact, there is a growing trend towards bundled systems, mainly by the “Master Brands”, to “mask” complexity for customers as they embark on implementing complex IT endeavors including key programs such as datacenter consolidation, server/storage virtualization, predictive analytics, SOA/BPM, Cloud Computing (public, private or hybrid), and Green IT. For example, Oracle acquired Sun Microsystems in 2009 for $7.4 billion to support what InformationWeek described as Larry Ellison’s “applications-to-disk” strategy, while HP and Microsoft earlier this year unveiled a multi-million dollar initiative under which they will jointly engineer servers and software.

It is likely that the timing of the July 19 IBM announcement was influenced (perhaps even pressured) by rivals taking a similar approach to address evolving enterprise datacenters. To expedite this strategy of delivering bundled “plug-and-play” systems, IBM first announced a sweeping organizational restructuring to foster internal collaboration and harness synergies across products and lines of business. Clearly, the biggest change is the consolidation of key hardware and software divisions under the watch of a single executive, Steve Mills, IBM’s longtime software chief.

Next, just three days later on the heels of this organizational makeover, IBM made another major announcement on July 22, 2010, amidst much fanfare and hype. Presenting the vision of a new “Dimension in Computing” designed to control multi-platform datacenter operational costs and significantly reduce complexity, IBM announced a new hybrid “system of systems” platform that unifies IT for efficient service delivery and large-scale datacenter simplification. Dubbed a “datacenter in a box” or a “cloud in a box[ii]”, it integrates the new powerful and energy-efficient zEnterprise 196 mainframe running z/OS with the zEnterprise BladeCenter Extension (zBX) running Linux and AIX. By extending System z’s qualities of service (spanning security, scalability, availability, efficiency and virtualization) to enable Cloud readiness and optimized service delivery for enterprises, IBM is likely promoting its strength in building private Clouds for large enterprises.


Israel: So it looks like the IT industry is heading towards more “power” consolidation among mega vendors or, as you called them earlier, “Master Brands”. Is this a fait accompli? If so, is it a matter of channeling demand toward one-stop shopping irrespective of the integration realities underneath? Isn’t there a danger to this trend?

Annie: Despite these high-profile announcements by the major vendors, it is far from a fait accompli. And yes, your concerns are only too real, especially for those who have lived through the era of monopolies and antitrust battles. Frankly, many people believe such a trend could be a clear threat in the emerging era. While I don’t want to downplay the risk and potential damage of antitrust abuses, I believe there are factors at work here that counteract, or at least limit, unchecked monopolies in the IT industry.

In this Internet age, with the rise of the “Consumerization of IT” catalyzed by nearly ubiquitous access to social networking and other Web 2.0 tools, IT has permeated almost every market sector in our society. The set of functions and services supported and enabled by IT has become so vast, diverse and complex that no single business model or supplier is in a position to dominate, let alone destroy, all others. The era when a handful of proprietary stalwart vendors dominated the IT industry is all but over. Just this past decade, we have witnessed the meteoric rise of Google, Facebook and, more recently, Twitter. A formidable and growing force, namely open source software and its bottom-up, self-organizing community, powers as well as empowers most if not all of the Web 2.0 companies. At this point in our discussion, it is apt to segue to the third vendor announcement during the week of July 19, 2010.

On July 19, Cloud service provider Rackspace, together with NASA, announced the sponsorship of OpenStack, an open source IaaS Cloud platform. Included in the announcement is a diverse group of computer system providers from across the technology industry, such as Citrix, Dell, NTT DATA and RightScale, who will help drive a deployable, totally open cloud solution. According to its mission statement, OpenStack is designed to foster the emergence of technology standards and Cloud interoperability. One of its primary objectives is to help enterprises avoid vendor lock-in.

Israel: This appears to be a very timely announcement given that “vendor lock-in” is one of the top concerns confronting enterprises as they evaluate and plan for the transition to Cloud Computing. Having said that, are we not back to “square zero” – striking a balance between openness and “one-stop shopping” tight integration?

Annie: Yes indeed. Although some industry observers describe the issue as “vendor lock-in”, others see it more broadly, describing it as the “challenge/difficulty of bringing back in-house” or the “lack of interoperability standards for seamless portability”. For example, in the 2009 Cloud Computing survey conducted by IDC, over 80% of those surveyed rated this issue, under both labels, as very important. Incidentally, I should point out that “vendor lock-in” is neither new nor unique to Cloud Computing. On the contrary, it is a long-standing “problem” going all the way back to the early days of mainframe computing, culminating in the government-versus-IBM antitrust action of the 1950s, as we discussed earlier.

Interestingly, there are many forms and variants of vendor lock-in, and they are not all equal. For example, many industry observers have been unhappy with the proprietary development and delivery model that Apple imposed on the iPod/iPhone/iPad. Although the risk of “vendor lock-in” may be real, any negative impact on the ever-growing, large and loyal Apple customer base seems minimal. Just think about the runaway success of the App Store. It is heavily “curated” by Apple, yet since its opening on July 10, 2008, it has grown to more than one hundred thousand available apps, with over two billion application downloads as of November 2009 and three billion by January 2010. Steve Jobs hailed this as a landmark event: “Three billion applications downloaded in less than 18 months – this is like nothing we’ve ever seen before.”

Sorry we digressed. So let’s resume our discussion of the recent major announcements.  In a nutshell, the OpenStack announcement attempts to address the issue directly by allowing any organization to create and offer Cloud Computing capabilities using open source software freely available under the Apache 2.0 license running on standard hardware.

Now this gets interesting: a tale of two diametrically opposite strategies. On one hand, we have IBM announcing the high-performance zEnterprise 196 as a hybrid, integrated, multi-architecture “datacenter/Cloud in a box”. The goal is to mask complexity and maximize efficiency in infrastructure (management/admin cost savings of up to 70%) and energy consumption (up to 82% reduction in energy usage) with a bundled technology stack integrating multiple platforms, infrastructure and management (spanning service, platform and hardware). A principal concern with this proprietary, single-vendor approach is the risk of “vendor lock-in”.

On the other hand, OpenStack is “DIY”, based on an open source development platform. The goal of OpenStack is the following: “Anyone can run it, build on it, or submit changes back to the project. We strongly believe that an open development model is the only way to foster badly-needed cloud standards, remove the fear of proprietary lock-in for cloud customers, and create a large ecosystem that spans Cloud providers.” The cons/challenges of this approach are probably similar to those of conventional “DIY” open source projects.

I should clarify that this dichotomy is better seen as a spectrum. IBM, VMware and the like on one hand, and Rackspace, Eucalyptus and the like on the other, exemplify the two endpoints bookending it. Along the spectrum there is a growing number of intermediate options/offerings (with a rising number of variations) from a wide variety of IT Cloud service vendors: stalwart vendors including Amazon, Microsoft, Google and Salesforce.com, as well as young companies and startups such as Rackspace, RightScale, Boomi, Canonical, Cloudkick and Opscode.

Israel: Is this shaping up to be a battle between two diametrically opposite strategies? And if so, which one will come out on top? Or is it a draw?

Annie: To me, a similar dichotomy has existed in the IT industry before. For example, think Apple versus Google. Consider the modus operandi of Apple’s core business model (“closed, or at least closely curated”, to optimize user experience and quality) versus that of Google (“open standards/APIs” to maximize opportunities for third-party development participation).

As to whether bundled systems (“Cloud in a box”) or open source “DIY” will be the ultimate winner, I have to defer to industry observers with more experience, such as you. Perhaps in a future Q&A I can hear your views on how the competition may eventually be settled. However, while we all await the uncertain outcome, IT practitioners should be mindful that this spectrum will have profound implications not only on the supply side but also on the demand side. In particular, because the offerings along the spectrum will be rapidly evolving, the fluidity will very likely confound and confuse users/consumers as they attempt to balance a convoluted set of tradeoffs. Many enterprise IT practitioners will be under pressure to make difficult and ambiguous choices, picking some evolving offerings over others as the foundation of tomorrow’s enterprise datacenters in the Cloud era.

Israel: Good timing. So far in our Q&A today you have focused on the first half of the narrative, namely the supply side. Now let’s continue to the second half of your narrative, the demand side.

Annie:  Earlier, I discussed the supply side by connecting the dots among three key announcements during the week of July 19. Now similarly for the demand side, I will suggest a few more dots that I believe should be connected. Specifically, I suggest connecting the following trends:

  • The growing complexities and inefficiencies of on-premises enterprise datacenters;
  • The inevitable rise of alternative delivery and deployment models for IT services; and
  • The advent of Cloud Computing:  a long-standing vision whose time may finally arrive.

Several months ago, I published a guest post on your blog entitled “The Urgency of Now.” You might recall that I began the post with some sobering and perhaps even alarming statistics about the gross inefficiency of traditional on-premises enterprise datacenters. Here again is the Enterprise Datacenter Index at-a-glance:

In summary, enterprise IT faces a “crisis of staggering complexity” and IT infrastructure is reaching a “breaking point” marked by such salient factors/trends[iii] as the following:

  • 1.5X: the information explosion is driving over fifty percent yearly growth in storage shipments;
  • 85% idle: over-provisioning, primarily in distributed computing environments, means typical computing capacity sits idle, on average, more than eighty percent of the time;
  • $40 billion, or 3.5% of sales: the retail industry’s annual loss due to supply (value) chain inefficiencies;
  • 60-70% of IT spending on maintenance/overhead: the lion’s share of IT expenses goes towards overhead and maintenance, with roughly seventy cents of every dollar spent on maintaining existing IT infrastructure at the expense of adding new capabilities.

Now consider the following scenario. Suppose enterprise IT could choose an alternative set of “self-service” IT service delivery/deployment models, orthogonal to traditional hierarchical, command-and-control, CapEx-based datacenters. Instead of owning and tightly controlling its own private internal datacenter and purchasing capital resources up front, an organization would “rent”, on demand, pooled computing resources hosted in the provider’s multi-tenant environment. The Internet would serve as the global infrastructure “grid”, and all services would be delivered through Web APIs. In lieu of a dedicated IT staff administering IT operations, users could bypass lengthy red tape and directly, immediately provision and manage computing capacity as “self-service IT”. In addition, instead of formal contracts and protracted delays in hardware procurement, an organization would pay for access to “unlimited” computing capacity at any time, simply with a credit card.
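
To make the “self-service via Web APIs” idea concrete, here is a minimal sketch in Python. Everything about the provider is hypothetical – the endpoint, the request fields and the token are illustrative placeholders rather than any real vendor’s API – but it shows the shape of the model: capacity is requested and released programmatically, with no procurement cycle in sight.

```python
# Minimal sketch of "self-service IT" against a hypothetical provider Web API.
# The endpoint, fields and token below are illustrative placeholders only.
import requests

API = "https://cloud.example.com/v1"                 # hypothetical provider endpoint
HEADERS = {"Authorization": "Bearer <token-obtained-with-a-credit-card>"}

def provision(instance_type, count):
    """Request compute capacity on demand; returns the new instance IDs."""
    resp = requests.post(
        f"{API}/instances",
        json={"type": instance_type, "count": count},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return [inst["id"] for inst in resp.json()["instances"]]

def release(instance_ids):
    """Return the capacity the moment it is no longer needed: friction-free exit."""
    for iid in instance_ids:
        requests.delete(f"{API}/instances/{iid}", headers=HEADERS, timeout=30).raise_for_status()

if __name__ == "__main__":
    ids = provision("m.medium", count=4)   # minutes of API calls, not a hardware procurement cycle
    # ... run the workload ...
    release(ids)                           # pay only for what was actually consumed
```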

Because there would be no formal contracts imposing preset time commitments, both entry and exit would be friction-free. In this way, an organization could accelerate time-to-value and time-to-market and help catalyze experimentation and innovation. Furthermore, enterprise CIOs could avoid or mitigate a lose-lose dilemma: they would no longer be restricted to choosing between a policy that leads to “waste due to over-provisioning” (capacity planning based on peak usage estimates) and one that incurs “risk due to under-provisioning” (planning based on non-peak estimates). Ideally, IT staff would “plan capacity based on typical usage” while confident that they could “scale dynamically at peak times” to maintain performance and SLAs. Simply put, the primary objectives for today’s organizations are not just about increasing speed and efficiency for back-office automation. They are also about increasing speed and flexibility to adapt to change by yielding judicious control to providers of on-demand, off-premises utility computing services.
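
A back-of-the-envelope sketch can make that over- versus under-provisioning dilemma tangible. The hourly demand curve and unit cost below are invented for illustration; the point is the shape of the comparison, not the numbers.

```python
# Illustrative only: static capacity sized for the worst hour versus an elastic
# policy that keeps a baseline near typical usage and rents extra capacity at peaks.
HOURS = 24
demand = [40] * 8 + [120] * 2 + [60] * 10 + [200] * 2 + [40] * 2   # servers needed per hour (hypothetical)
COST_PER_SERVER_HOUR = 0.50                                         # hypothetical unit cost

# Policy 1: over-provision -- own enough capacity for peak load, all day, every day.
static_capacity = max(demand)
static_cost = static_capacity * HOURS * COST_PER_SERVER_HOUR
static_idle = sum(static_capacity - d for d in demand) / (static_capacity * HOURS)

# Policy 2: plan for typical usage and scale dynamically at peak times.
baseline = sorted(demand)[len(demand) // 2]        # median ("typical") load
elastic_cost = sum(max(d, baseline) for d in demand) * COST_PER_SERVER_HOUR

print(f"static:  ${static_cost:,.2f} per day, {static_idle:.0%} of capacity idle")
print(f"elastic: ${elastic_cost:,.2f} per day, every peak hour still covered")
```

Under these made-up numbers the peak-sized estate sits roughly two-thirds idle while costing more than twice as much, whereas the elastic policy still meets every peak; the specific figures matter far less than the structure of the dilemma described above.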

Conceptually, this scenario is the overall vision of Cloud Computing. With the advent of Cloud Computing, the vision of “Computing as a Utility” is beginning to take shape; a vision dating back to the early days of time-sharing has taken a quantum leap towards reality. One of the earliest references to Utility Computing occurred in 1961 at the MIT Centennial, where John McCarthy presented his vision of computing organized as a public utility. Just as the telephone system had developed into a major industry, Professor McCarthy envisioned that “Computing as a Utility” could one day become the basis of a new and important public industry.

Rooted in the long-standing vision of and hope for “Computing as a Utility” that began more than half a century ago, the genesis of Cloud Computing goes back a long way. To a growing number of industry observers, it is an old idea whose time may have finally arrived when, in 2006, Amazon began offering Cloud infrastructure services to the public as a utility. Despite initial skepticism, that was a watershed event in the quest for Utility Computing and helped usher in the first wave of industrial-strength commercial Cloud Computing offerings.

Israel: To wrap up our discussion today, can you leave us with a few thoughts about some of the implications of Cloud Computing as enterprises begin their transition to the Cloud?

Annie: Eric Schmidt, Google’s Chairman and Chief Executive, has stated that Cloud Computing will be “the defining technological shift of our generation”. However, the media and vendor-spun hype (at times referred to as “cloud-washing”) around this topic has created an unprecedented level of confusion. Today, the unabated sound and fury surrounding the Cloud Computing buzz continues and, indeed, increases. Nevertheless, it is all but certain that there will be no “big or easy switch” allowing enterprise IT to transition overnight from running applications on premises to the Cloud. Because the shift is not an “all-or-nothing” or “one-size-fits-all” endeavor, enterprise stakeholders should take a judicious, measured approach to balancing the different tradeoffs.

Sustaining the transition of enterprise IT to the Cloud will require not only technological advances but also new business models, new forms of IT organizational and management structure, and perhaps even new IT roles. One of the “inconvenient” truths about embracing new user-empowerment technology trends and business models is the slippery slope of finding the “right” balance between hierarchical command-control and bottom-up empowerment. The harm (ineffectiveness and counter-productivity) of too much top-down control can be matched or even surpassed by the dangers of too little control; user empowerment without reasonable constraints can lead to anarchy and chaos. A new form of organizational governance is clearly required to avoid these problems. Striking a balance between planned orderliness and emergent forces has been a challenging dynamic since the dawn of civilization.

Many of the principles that have been refined over the millennia will apply directly to governing tomorrow’s world of “self-service” computing in the Cloud. Clearly, security and governance policies will come under new scrutiny and will be reshaped. However, an organization should not overlook the human aspects and the cultural impact on IT system administration personnel. For example, resistance to sweeping change, driven by a fear of losing control and stress over the prospect of losing employment, can be one of the more profound ramifications, and one that often flies under management’s radar.

Cloud Computing will likely change the status quo of IT system administration and, in the future, could obviate the need for some traditional IT system skills. Cloud Computing, however, is also opening new opportunities for the technical IT community and enterprise IT personnel. There is a growing consensus that, as Cloud Computing evolves, the need for more business-minded IT staff will accelerate. Specifically, there will likely be an urgent need for people “with broader business skills who can manage multiple supplier relationships.” Freed by Cloud Computing from a variety of low-level operational tasks and from control of physical infrastructure, enterprise IT has the opportunity to promote system administration staff to higher-level decision-making roles as IT service facilitators and SLA contract managers. In the near future, many traditional hierarchical command-control system operators may pursue a wider array of IT professional opportunities spanning enterprise architecture; capacity planning; budget planning; performance assurance; and data, security and governance gatekeeping.

Israel: This really resonates with what I see happening in many of my consulting engagements. Successful companies waste an immense amount of capital, energy and management attention on migrating from yesterday’s datacenter to today’s or tomorrow’s datacenter. When exposed to the pains of such migrations, I am always reminded of Peter Drucker’s quip “Companies make shoes!” It is beyond me why companies that make shoes, cars, drugs or financial instruments would want to be prisoners of their own success, hopping from one datacenter to a bigger datacenter every few years.

Annie: Thanks for Peter Drucker’s quip. I am going to borrow it for my future use.

Israel: Annie, I can’t thank you enough for sharing your insights with us. You really connect the dots!

Endnotes:

[i] Based on the assumption that IT infrastructure performance can be greatly enhanced when each element is designed and brought to market as a component of a tightly integrated, optimized system.

[ii] With this slogan, IBM is promoting the hybrid zEnterprise 196, integrating multiple architectures and operating systems in a “box”, as the one-stop-shopping, ready-made private Cloud for enterprises.

[iii] Information source from IBM, The Open Group Conference, July 22, 2009.

Cloud Computing Forecasts: “Cloudy” Future for Enterprise IT


In a comment on The Urgency of Now, Marcel Den Hartog discusses technology assimilation in the face of hype:

But if people are already reluctant to run the things they have, on another platform they already have, on an operating system they are already familiar with (Linux on zSeries), how can you expect them to even look at cloud computing seriously? Every technological advancement requires people to adapt and change. Human nature is that we don’t like that, so it often requires a disaster to change our behavior. Or carefully planned steps to prove and convince people. However, nothing makes IT people more cautious than a hype. And that is how cloud is perceived. When the press, the analysts and the industry start writing about cloud as part of the IT solution, people will want to change. Now that it’s presented as the silver bullet to all IT problems, people are cautious to say the least.

Here is Annie Shum‘s thoughtful reply to Marcel’s comment:

Today, the Cloud era has only just begun. Despite lingering doubts, growing concerns and widespread confusion (especially around separating media- and vendor-spun hype from reality), the IT industry generally views Cloud Computing as more appealing than traditional ASP/hosting or outsourcing/off-shoring. Cloud Computing enables technology-centric startups and nimble entrepreneurs to punch above their weight class. By turning up-front CapEx into a more scalable and variable cost structure based on an on-demand, pay-as-you-go model, Cloud Computing can provide a temporary, level playing field. Similarly, many budget-constrained and cash-strapped organizations also look to Cloud Computing for immediate (friction-free) access to “unlimited” computing resources. To wit: Cloud Computing may be considered a utility-based alternative to an on-premises datacenter that allows an organization (notably cash-strapped startups) to “Think like a ‘big guy’. Pay like a ‘little guy’.”
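
As a purely illustrative piece of arithmetic (all figures below are hypothetical, not drawn from the post), the CapEx-to-OpEx shift behind “Think like a ‘big guy’. Pay like a ‘little guy’” can be sketched in a few lines:

```python
# Hypothetical figures only: buying servers up front (CapEx) versus renting
# the same capacity on demand, pay-as-you-go (OpEx), in a startup's first year.
SERVER_PRICE = 5_000           # up-front purchase price per server
SERVERS_FOR_PEAK = 20          # fleet sized for anticipated peak load
ON_DEMAND_RATE = 0.40          # per server-hour, pay-as-you-go
SERVER_HOURS_USED = 6_000      # server-hours actually consumed over the year

capex = SERVER_PRICE * SERVERS_FOR_PEAK        # committed before the product earns a cent
opex = ON_DEMAND_RATE * SERVER_HOURS_USED      # accrues only as capacity is consumed

print(f"own the hardware:    ${capex:,.0f} up front")
print(f"rent from the cloud: ${opex:,.0f} spread across the year")
```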

Forward-thinking organizations should not lose sight of the vast potential of Cloud Computing that extends well beyond short-term economics. At its core, Cloud Computing is about enabling business agility and connectivity by abstracting computing infrastructure via a new set of flexible service delivery/deployment models. Harvard Business School Professor Andrew McAfee painted a “Cloudy” future for Corporate IT in his August 21, 2009 blog post and cited a perceptive 1983 paper by Warren D. Devine, Jr. in the Journal of Economic History called “From Shafts to Wires: Historical Perspective on Electrification”.[1] There are three key take-away messages that resonate with the current Cloud Computing paradigm shift. First: the real impact of the new technology was not apparent right away. Second: the transition to full utilization of the new technology will be long, but inevitable. Third: there will be detractors and skeptics about the new technology throughout the transition. Interestingly, the telephone is another groundbreaking, disruptive technology that may have faced similar skepticism in the beginning. Legend has it that a Western Union internal memo dated 1876 downplayed the viability of the telephone: “This ‘telephone’ has too many shortcomings to be seriously considered as a means of communications. The device is inherently of no value to us.”

The dominance of Cloud Computing as a computing platform, however, is far from a fait accompli. Nor will it ever be complete, “one-size-fits-all” or a “big and overnight switch”. The shape of computing is constantly changing, but the transition is always blended and gradual, analogous to a modern city. While the cityscape continues to change, a complete “rip-and-replace” overhaul is rarely feasible or cost-effective. Instead, city planners generally preserve legacy structures, although some of them are retrofitted with standards-based interfaces that enable them to connect to the shared infrastructure of the city. For example, the Paris city planners retrofitted Notre Dame with facilities such as electricity, water and plumbing. Similarly, despite the passage of the last three computing paradigm shifts – first mainframe, next Client/Server and PCs, and then Web N-tier – they all co-exist and can be expected to continue into the future. Consider the following: a major share of mission-critical business applications runs today on mainframe servers, and through application modernization, legacy applications – notably COBOL applications, for example – can now operate in a Web 2.0 environment as well as be deployed in the Cloud via the Amazon EC2 platform.

Cloud Computing holds great appeal for a wide swath of organizations spanning startups, SMBs, ISVs, enterprise IT and government agencies. The most commonly cited benefits range from the promise of avoiding CapEx and lowering TCO to on-demand elasticity, immediacy and ease of deployment, time to value, location independence and catalyzing innovation. However, there is no magic in the Cloud, and it is certainly not a panacea for all IT woes. Some applications are not “Cloud-friendly”. While deploying applications in the Cloud can enable business agility incrementally, such deployment will not fundamentally change the characteristics of the applications to make them highly scalable, flexible and automatically responsive to new business requirements. Realistically, one must recognize that many of the challenging problems – security, data integration and service interoperability in particular – will persist and live on regardless of the computing delivery medium: Cloud, hosted or on-premises.

[1] “The author combed through the contemporaneous business and technology press to learn what ‘experts’ were saying as manufacturing switched over from steam to electrical power, a process that took about 50 years to complete.” – Andrew McAfee, September 21, 2009.

I will go one step further and add quality to Annie’s list of challenging problems. A crappy on-premises application will continue to be crappy in the cloud. An audit of the technical debt should be conducted before “clouding” an application. See Technical Debt on Your Balance Sheet for a recommendation on quantifying the results of the quality audit.

The Changing Nature of Innovation: Part I — New Forms of Experimentation


Colleague Christian Sarkar drew my attention to two recent Harvard Business Review (HBR) articles that shed light on the way(s) innovation is being approached nowadays. To the best of my knowledge, neither of the two articles was written by an author associated with the Agile movement. Both, if you ask me, would have resonated big time with the authors of the Agile Manifesto.

The February 2009 HBR article How to Design Smart Business Experiments focuses on data-driven decisions as distinct from decisions taken based on “intuition”:

Every day, managers in your organization take steps to implement new ideas without having any real evidence to back them up. They fiddle with offerings, try out distribution approaches, and alter how work gets done, usually acting on little more than gut feel or seeming common sense—”I’ll bet this” or “I think that.” Even more disturbing, some wrap their decisions in the language of science, creating an illusion of evidence. Their so-called experiments aren’t worthy of the name, because they lack investigative rigor. It’s likely that the resulting guesses will be wrong and, worst of all, that very little will have been learned in the process.

It doesn’t have to be this way. Thanks to new, broadly available software and given some straightforward investments to build capabilities, managers can now base consequential decisions on scientifically valid experiments. Of course, the scientific method is not new, nor is its application in business. The R&D centers of firms ranging from biscuit bakers to drug makers have always relied on it, as have direct-mail marketers tracking response rates to different permutations of their pitches. To apply it outside such settings, however, has until recently been a major undertaking. Any foray into the randomized testing of management ideas—that is, the random assignment of subjects to test and control groups—meant employing or engaging a PhD in statistics or perhaps a “design of experiments” expert (sometimes seen in advanced TQM programs). Now, a quantitatively trained MBA can oversee the process, assisted by software that will help determine what kind of samples are necessary, which sites to use for testing and controls, and whether any changes resulting from experiments are statistically significant.

On the heels of this essay on how one could attain and utilize experimentally validated data, the October 2009 HBR article How GE is Disrupting Itself discusses what is already happening in the form of Reverse Innovation:

  • The model that GE and other industrial manufacturers have followed for decades – developing high-end products at home and adapting them for other markets around the world – won’t suffice as growth slows in rich nations.
  • To tap opportunities in emerging markets and pioneer value segments in wealthy countries, companies must learn reverse innovation: developing products in countries like China and India and then distributing them globally.
  • While multinationals need both approaches, there are deep conflicts between the two. But those conflicts can be overcome.
  • If GE doesn’t master reverse innovation, the emerging giants could destroy the company.

It does not really matter whether you are a “shoestring and a prayer” start-up spending $500 on A/B testing through Web 2.0 technology or a Fortune 500 company investing $1B in the development and introduction of a new car in rural India in order to “pioneer value segments in wealthy countries.” Either way, your experimentation is affordable in the context of the end result you have in mind.

Fast forward to Agile methods. The chunking of work into two-week segments makes experimentation affordable – you cancel an unsuccessful iteration as needed and move on to the next one. Furthermore, you can make the go/no-go decision on an iteration based on statistically significant “real time” user response. This closed-loop operational nimbleness and affordability, in conjunction with a mindset that treats the “failure” of an iteration as a valuable lesson to learn from, facilitates experimentation. Innovation simply follows.
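
A minimal sketch of the kind of “statistically significant ‘real time’ user response” such a go/no-go decision could rest on is a two-proportion z-test comparing conversion in the current build (A) with the iteration’s variant (B). The sample counts below are hypothetical.

```python
# Two-proportion z-test for an A/B experiment; the sample numbers are hypothetical.
from math import sqrt
from statistics import NormalDist

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Return (uplift, two-sided p-value) for variant B versus control A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

uplift, p = ab_significance(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"uplift {uplift:+.2%}, p-value {p:.3f}")   # keep the iteration only if the evidence holds up
```

If the p-value clears a pre-agreed threshold (0.05 is the conventional choice), the iteration ships; otherwise it is cancelled and the two weeks become a lesson rather than a loss.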