The Agile Executive

Making Agile Work

Posts Tagged ‘Cloud Computing’

Agile Enterprise Forum 2011

Charles Handy, Chris Potts, Don Reinertsen, John Seddon and I are the featured speakers at the Agile Enterprise Forum 2011. The Forum will be held on March 10, 2011 at Chandos House at the Royal Society of Medicine, London. Attendance is limited to 30 CIOs.

The theme for the forum is Agility for Complex Organizations. The overarching message is nicely captured in the following summary by James Yoxall:

There are two strands of interest for a CIO: strategy and delivery. The Agile/Lean message can be summarised as “merging” the two, so that delivery can start before strategy is complete, and delivery informs strategy through feedback loops. This leads to a faster/earlier delivery and a better end result.

My own workshop – Agile Governance: Tying Delivery to Value – builds on this message by describing a specific strategic initiative that is not achievable without the use of advanced delivery techniques. Here is the abstract for my workshop:

This workshop will explore mechanisms for unlocking the full potential of existing software through the combination of Agile/Lean methods with technical debt techniques. These mechanisms apply to complex organisations that rely on in-house development teams as well as to third party delivery partners. Israel’s approach emphasizes the need to continuously monitor and mitigate the decay of software that, more often than not, has been developed over many years. Most importantly, it shows how well-governed software can become the enabler for unleashing the synergistic power of cloud, mobile and social.

You can think of the workshop as linking past, present and future. The “sins” of the past require technical debt reduction initiatives today. These initiatives utilize the classical Agile/Lean techniques of continuous measurement and tight feedback loops. Without such initiatives, the value of existing software cannot be unlocked in the future. In particular, competing in the hyper-segmented markets that cloud, mobile and social generate will be next to impossible for legacy software that has not been modernized.

The Supply Side of the Consumerization of Enterprise Software

Source: http://www.flickr.com/photos/bertboerland/2944895894/

In my recent post about the consumerization of enterprise software I discussed two factors that are likely to accelerate the pace toward such consumerization:

  1. Any department/business unit that can get a service in entirety from an outside source is likely to do so without worrying about enterprise software and/or data center considerations. This is already happening in Marketing. As other functions start doing so, more and more links in the value chain of enterprise software will be “consumerized.” In other words, these services will be carried out without the involvement of the IT department.
  2. Once the switch-over costs from legacy code to state-of-the-art code are less than the steady state costs (to maintain and update legacy code), the “consumerization” of enterprise software is going to happen with ferocious urgency.

In this post I would like to add a third factor – the buying pattern. My contention is that the buying pattern for micro-apps will spread to enterprise applications. Potential demand for buying in this way is huge. Supply of enterprise software as micro-apps is not quite there yet, but it would take only one smart vendor to start transforming the traditional pattern by which enterprise software is chunked, offered and sold.

Think about your recent experience downloading an application to your smart mobile phone. You did not go through a six-month evaluation period; you did not do a comprehensive competitive analysis; you did not check how well the seller does customer support in Sumatra. You simply paid something like $7.99 and downloaded the application. You are more than happy if it fulfills your needs in a reasonable manner. If it does not, you simply buy another application with the functionality you desire. Maybe you are a little more cautious now and ask a friend or send an inquiry to your Twitter followers before you pick the new application. Whatever you might choose to do, the fundamental facts are: A) you can afford to lose $7.99; and, B) your time is more precious than the sunk cost of the application. You simply move on.

This buying pattern is not something that you are going to forget when you step into your office in the morning. It makes perfect sense to you and it would be good for your company. You would rather concentrate on your business than on the tricky language of clause number 734 in the contract that your department’s attorney prepared for licensing yet another piece of enterprise software.

The ‘$7.99 experience’ you and zillions of other folks like you have had over the past week or the past month makes enterprise software vendors extremely vulnerable. The “high-touch; high-margin; high-commitment” [1] business design is not sustainable once the purchase model changes. The expensive machinery of professional services, system engineering and customer support is not affordable in the face of competition that constructs modular chunks of enterprise software and sells them at a price the customer can afford to write off (if they do not perform to satisfaction). Maybe the ceiling in the enterprise to ‘forget about this application and move on’ is no higher than $1,000 (instead of ‘no higher than $7.99’ for the private citizen), but a smart vendor can still make a lot of money selling at a thousand dollars a pop to the enterprise.

The growing gap between “this lovely application on my iPhone” and the “headache of licensing traditional enterprise software” is an immense incentive for up-and-coming software vendors to use the ‘$7.99 experience’ as the heart of a new business design. This new business design can be simply summarized as “low-touch; low-margin; low-commitment” [2]. And, yes, it is very disruptive to the incumbents…

My hunch is that the IT Service Management (ITSM) industry will be the first to crumble. The premise of “service delivery” sounds a little hollow in a cloud computing world characterized by “everything as a service” [3]. Would a buyer really be willing to pay for “service for the service” from a vendor who does not actually provide the underlying service?! It sounds like paying a Fidelity or a Vanguard investment manager to manage a portfolio of their own mutual funds for you…

All it takes for this shift to start – in ITSM or in another part of enterprise software – is one successful vendor.

Footnotes:

[1] I am indebted to Annie Shum for this phrase.

[2] Ibid.

[3] I am indebted to Russ Daniels for this phrase.

How to Break the Vicious Cycle of Technical Debt

The dire consequences of the pressure to quickly deliver more functions and features to the market have been described in detail in various posts in this blog (see, for example, Toxic Code). Relentless pressure forces the development team to take on technical debt. The very same pressure stands in the way of paying back the debt in a timely manner. The accrued technical debt reduces the velocity of the development team. Reduced development velocity leads to increased pressure to deliver, which leads to taking on additional technical debt, which… It is a vicious cycle that is extremely difficult to break.

Figure 1: The Vicious Cycle of Technical Debt
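
To make these dynamics concrete, here is a toy simulation of the cycle. It is not from the original post, and every number in it is invented for illustration: velocity erodes as debt accrues, the gap between demand and capacity widens, and the widening gap drives the team to take on yet more debt.

```python
# Toy model of the technical debt vicious cycle (illustrative numbers only).
# Velocity drops as debt accrues; schedule pressure grows as velocity drops;
# pressure, in turn, drives the team to take on more debt.

def simulate(iterations=8, velocity=10.0, debt=0.0):
    for i in range(1, iterations + 1):
        pressure = max(0.0, 10.0 - velocity)   # gap between demand and capacity
        debt += 0.5 + 0.2 * pressure           # pressure drives new debt
        velocity = 10.0 / (1.0 + 0.1 * debt)   # debt drags velocity down
        print(f"iteration {i}: debt={debt:5.1f}, velocity={velocity:4.1f}")

simulate()
```

Running the loop shows debt compounding while velocity decays, which is exactly the spiral Figure 1 depicts.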

The post Using Credit Limits to Constrain “Development on Margin” proposed a way of coping with the vicious cycle of technical debt – placing a limit on the amount of technical debt a development team is allowed to accrue. Such a limit addresses the demand side of the software development process. Once a team reaches the pre-determined technical debt limit (such as $3 per line of code) it cannot continue piling on new functions and features. It must attend to reducing the technical debt.
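
As a minimal sketch of how such a credit limit might be enforced, assuming a hypothetical static-analysis report that prices remediation in dollars (the function and figures below are illustrative, not the API of any real tool), the gate boils down to a single comparison:

```python
# Minimal sketch of a technical debt "credit limit" gate. Assumes some
# static-analysis tool has estimated the total remediation cost in dollars.

DEBT_LIMIT_PER_LOC = 3.0  # dollars per line of code, as in the post

def may_add_features(remediation_cost_usd: float, lines_of_code: int) -> bool:
    """True if the team may keep piling on functions and features,
    False if it must stop and pay back technical debt first."""
    return remediation_cost_usd / lines_of_code <= DEBT_LIMIT_PER_LOC

# Example: $540,000 of estimated remediation on a 150,000-LOC code base
# comes to $3.60 per line -- over the limit, so feature work is suspended.
print(may_add_features(540_000, 150_000))  # False
```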

A complementary measure can be applied to the supply side of the software development process. For example, one can dynamically augment the team by drawing upon on-demand testing. uTest’s recent announcement about securing Series C financing explains the rationale for the on-demand paradigm:

“The whole ‘appification’ of software platforms, whether it’s for social platforms like Facebook or mobile platforms like the iPhone or Android or Palm, or even just Web apps, creates a dramatically more complex user-testing matrix for software publishers, which could mean media companies, retailers, enterprise software companies,” says Wienbar. “Anybody who has to interact with consumers needs a service to help with that testing. You can’t cover that whole matrix with your in-house test team.”

Likewise, on-demand development can augment the development team whenever the capacity of the in-house team is insufficient to satisfy demand. IMHO it is only a matter of time until marketplaces for on-demand development evolve. All the necessary ‘ingredients’ – Agile, Cloud, Mobile and Social – are readily available. It is merely a matter of putting them together to offer on-demand development as a commercial service.

Whether you do on-demand testing, on-demand development or both, you will soon be able to address the supply side of software development in a flexible and cost-effective manner. Between curtailing demand through technical debt limits and expanding supply through on-demand testing/development, you will be better able to cope with the relentless pressure to deliver more, and faster, than the capacity of your team allows.
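
For illustration only, the two levers can be combined into a back-of-the-envelope planning rule. The thresholds, point counts and helper below are all hypothetical:

```python
# Hedged sketch: a debt ceiling constrains the demand side, while
# on-demand testing/development expands the supply side.

def plan_iteration(debt_per_loc: float, demand_points: int,
                   in_house_capacity: int, debt_limit: float = 3.0) -> str:
    """Return a crude plan for the next iteration."""
    if debt_per_loc > debt_limit:
        return "over the credit limit: pay back technical debt first"
    shortfall = demand_points - in_house_capacity
    if shortfall > 0:
        return f"augment the team with on-demand capacity for {shortfall} points"
    return "in-house capacity suffices for the demand"

print(plan_iteration(debt_per_loc=2.4, demand_points=60, in_house_capacity=45))
```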

Consumerization of Enterprise Software

Source: http://www.flickr.com/photos/ross/3055802287/

Figure 1: Consumerization of IT

The devastation in traditional Publishing needs precious little elaboration. Just think about a brand like BusinessWeek selling for a meager cash offer in the $2 million to $5 million range, McGraw-Hill getting into interactive textbooks through Inkling, or Flipboard delivering “… your personalized social magazine” to your iPad. This devastation might not have gotten the attention that the plight of the ‘big three’ automobile manufacturers got, but in its own way it is as shocking as a visit to the abandoned properties of Detroit.

As most of my clients do enterprise software, many of my discussions with them are about the consumerization of IT. From a day-to-day perspective this consumerization is primarily about six aspects:

  • Use of less expensive/consumer-focused components as infrastructure
  • ‘Pay as you go’ pricing (through Cloud pricing mechanisms/policies)
  • Use of web application interfaces to monitor IT infrastructure
  • Use of mobile and consumer based devices for accessing IT alerts and interfacing with systems
  • Use of the fast growing number of mobile applications to enhance productivity
  • Application of enterprise social networks and social software in the data center

From a strategic perspective, IT consumerization IMHO is all about the transformation toward “everything as a service” [1]. The virtuous cycle driven by Cloud, Mobile and Social manifests itself at three levels:

  • It obviously affects the IT folks with whom I discuss the subject. Immense changes are already taking place in many IT departments.
  • It affects their company. For example, the company might need to change the business design in order to optimize its supply chain.
  • It affects the clients of their company. Their definition of value changes these days faster than the time it takes the CIO I speak with to say “value.”

Figure 2: The Virtuous Cycle of Cloud, Mobile and Social

Sometimes I get a push-back from my clients on this topic. The push-back is usually rooted in the immense complexity (and fragility) of the enterprise software systems that have been built over the past ten, twenty or thirty years. The folks who push back on me point out that consumerization of IT will not scale big time until enterprise software gets “consumerized” or at least modernized.

I agree with this good counter-point but only up to a point. I believe two factors are likely to accelerate the pace toward “consumerization” of enterprise software:

  1. Any department/business unit that can get a service in entirety from an outside source is likely to do so without worrying about enterprise software and/or data center considerations. This is already happening in Marketing. As other functions start doing so, more and more links in the value chain of enterprise software will be “consumerized.” In other words, these services will be carried out without the involvement of the IT department.
  2. Once the switch-over costs from legacy code to state-of-the-art code are less than the steady state costs (to maintain and update legacy code), the “consumerization” of enterprise software is going to happen with ferocious urgency (see the sketch below).
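
A quick break-even calculation illustrates the second factor. All the figures below are invented; a real assessment would substitute the organization’s own numbers:

```python
# Hypothetical break-even test for switching from legacy to modern code.
# Legacy steady-state costs recur every year; the switch-over is a one-time
# cost followed by (presumably lower) annual upkeep of the modern code.

def years_to_break_even(switch_over: float, legacy_annual: float,
                        modern_annual: float):
    """Years until cumulative legacy spend exceeds switch-over plus
    cumulative modern spend; None if the switch never pays off."""
    saving_per_year = legacy_annual - modern_annual
    if saving_per_year <= 0:
        return None
    return switch_over / saving_per_year

# E.g., a $2M modernization against $1.5M/yr legacy upkeep vs $0.5M/yr after:
print(years_to_break_even(2_000_000, 1_500_000, 500_000))  # 2.0 years
```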

If you are in enterprise software you need to start modernizing your applications today. The reason is the imperative need to mitigate risk prior to reaching the end-point, almost irrespective of how far down the road the end-point might be. See Llewellyn Falco’s excellent video clip Rewriting Vs Refactoring for a crisp articulation of the risk involved in rewriting and why starting to refactor now is the best way to mitigate that risk.

Footnotes:

[1] The phrase “Everything as a Service” has been coined by Russ Daniels.

Outline of the Technical Debt Seminar at the Cutter Summit


Pictured above are speakers of the forthcoming Cutter Summit. Among the seventeen of us we will cover a broad spectrum of IT topics such as Agile, Enterprise Architecture, Business Strategy, Cloud Computing, Collaboration, Governance and Security. Inter-disciplinary seminars, panels and case studies will weave all those threads together to give participants a clear view of the unfolding transformation in IT and of the new way(s) companies are starting to utilize IT. Click here for details.

As Jim Highsmith and I continue to develop our joint seminar on technical debt for the summit, I would like to give readers of this blog a sense of where we are and ask for feedback. Right now we are considering the following building blocks for the seminar:

  • The Nature of Technical Debt
  • Technical Debt Metrics
  • Monetizing Technical Debt
  • Constructing Roadmaps for Paying Back Technical Debt
  • Risk Assessment and Mitigation
  • A Simple Software Governance Framework
  • Schedule in the Simple Governance Framework
  • Enlightened Governance
  • Baking in Quality One Build at a Time
  • How Often Should the Project Team Regroup?
  • Multi-Level Governance
  • Extending Technical Debt Techniques to Devops
  • Use of Technical Debt Techniques in Agile Portfolio Management
  • The Start Afresh Option
  • Technical Debt as an Integral Part of a Value Delivery Culture

In the course of going through a subset of these building blocks, we will cover the latest and greatest from the October issue of the Cutter IT Journal on technical debt, present two case studies, and conduct a few group exercises.

As If Another Proof Point Was Needed

Annie Shum’s interview earlier this week gave readers of this blog a multi-dimensional view of imminent changes in IT. If you needed independent validation, it came yesterday through EMC’s Chuck Hollis’ words at national solution provider GreenPages Technology Solutions’ 14th annual summit:

Vice President Global Marketing CTO Chuck Hollis Monday said the changes resulting from the storage giant’s own no-holds barred journey to the private cloud led to a decline in IT employee job satisfaction…

Hollis said the internal IT satisfaction drop came in the second phase of the EMC cloud revolution focused squarely on mission critical applications. That second phase — which EMC is in the midst of now — has sparked major changes in IT jobs as the company has replaced IT management, security staff and backend IT staff.

“During this phase, this is where org (organizational) chart issues started to come in,” Hollis said. “People’s jobs started to change. Younger people in the organization were being promoted over older people.”

As if another proof point to add to Annie’s rigorous data was needed…

Extending the Scope of The Agile Executive

For the past 18 months Michael Cote and I have focused The Agile Executive on software methods, processes and governance. Occasional posts on cloud computing and devops have been supplementary in nature. Structural changes in the industry have generally been left to be covered by other blogs (e.g. Cote’s Redmonk blog).

We have recently reached the conclusion that The Agile Executive needs to cover structural changes in order to give a forward-looking view to its readers. Two reasons drove us to this conclusion:

  • The rise of software testing as a service. The importance of this trend was summarized in Israel’s recent Cutter blog post “Changing Playing Fields”:

Consider companies like BrowserMob (acquired earlier this month by NeuStar), Feedback Army, Mob4Hire, uTest (partnered with SOASTA a few months ago), XBOSoft and others. These companies combine web and cloud economics with the effectiveness and efficiency of crowdsourcing. By so doing, they change the playing fields of software delivery…

  • The rise of devops. The line between dev and ops, or at least between dev and web ops, is becoming fuzzier and fuzzier.

As monolithic software development and delivery processes get deconstructed, the structural changes affect methods, processes and governance alike. Hence, discussion of Agile topics in this blog will not be complete without devoting a certain amount of “real estate” to these two changes (software testing as a service and devops) and others that are no doubt forthcoming. For example, it is a small step from testing as a service to development as a service in the true sense of the word – through crowdsourcing, not through outsourcing.

I asked a few friends to help me cover forthcoming structural changes that are relevant to Agile. Their thoughts will be captured through either guest posts or interviews. In these posts/interviews we will explore topics for their own sake. We will connect the dots back to Agile by referencing these posts/interviews in the various posts devoted to Agile. Needless to say, Agile posts will continue to constitute the vast majority of posts in this blog.

We will start next week with a guest post by Peter McGarahan and an interview with Annie Shum. Stay tuned…

Harnessing Economies of Scale in Cloud Computing to Realize a Greener Computing Option

Economies of Scale have been much discussed in The Agile Executive since the recent OpsCamp in Austin, TX. The significant savings on system administration costs in very large data centers have been called out as a major advantage of Internet-scale Clouds. Unlike various short-lived advantages, the benefits to the Cloud operator, and to the Cloud user when the savings are passed on to him/her, are sustainable.

In this guest post, colleague and friend Annie Shum analyzes the various sources of waste in operations in traditional data centers. Like an Agilist with Lean inclinations who confronts an inefficient Waterfall process, Annie explains how economies of scale apply to the various kinds of waste that are prevalent in today’s small and medium data centers. Furthermore, she connects the dots that lead toward a Green IT option.

Here is Annie:

Harnessing Economies of Scale in Cloud Computing to Realize a Greener Computing Option

Scale Matters: “Over time, however, competitive advantage within categories shifts inexorably toward volume operations architecture.” – Geoffrey Moore, “Dealing with Darwin”

It is a truism that today’s datacenters are systemically inefficient. This is not intended as an indictment of all conventional datacenters. Nor does it imply that today’s datacenters cannot be made more efficient (incrementally) through right sizing and other initiatives, notably consolidation by deploying virtualization technologies and governance by enforcing energy conservation/recycling policies. There are a myriad of inefficiencies, however, that are prevalent in datacenters today.

Many industry observers lament the “staggering complexity” that permeates on-premises datacenters. Over time, most, if not all, enterprise IT datacenters have become amalgamations of disparate heterogeneous resources. Generally, they can be described as incohesive, perhaps even haphazard, accumulations. The datacenter components and configurations often reflect the intersections of organizational politics (LOB reporting structures leading to highly customized/organizational asset acquisitions and configurations), business needs of the moment (shifting corporate strategies and changing business imperatives to gain competitive edge or meet regulatory compliances) and technology limitations (commercial tools available in the marketplace). It should come as no surprise that human interactions and errors are considered a major contributor to the inefficiencies of datacenters: IBM reported that human errors account for seventy percent of the datacenter problems.

The challenge of maximizing energy efficiency begins fundamentally with the historical capital-intensive ownership model for computing assets, under which each organization operates its own datacenter and provides “24×7 availability” to its own users. The enterprise IT staff has been required to support unpredictable future growth, accommodate situational demands and unscheduled but deadline-critical events, meet performance levels within SLAs and comply with regulatory and auditing requirements. Hence, datacenters generally are over-configured and over-provisioned. In addition to highly skewed under-utilization of distributed platform servers, ninety percent of corporate datacenters have excess cooling capacity. Worst of all, according to IBM, about seventy-two percent of cooling bypassed the computing equipment entirely. Further compounding these problems for a typical enterprise datacenter is the lack of transparency and the inability to control energy consumption properly, due to inadequate and often inaccurate instrumentation for quantifying energy consumption and energy loss.

The economics of Cloud Computing can offer a compelling option for more efficient IT: by lowering power consumption for individual organizations and by improving the efficiency of a large number of discrete datacenters. Although the electricity consumption of Cloud Computing is projected to be one to two percent of today’s global electricity use, Cloud service providers can still cultivate sustainable Green IT effectively at lower costs by leveraging state-of-the-art, super energy efficient massive datacenters, proximity to power generation (thereby reducing transmission costs) and, above all, enormous economies of scale. To better understand how Cloud Computing can offer greener computing and how it will help moderate power consumption by datacenters and rein in runaway costs, a good starting place is James Hamilton’s September 2008 study on “Internet-Scale Service Efficiency,” as summarized in the table below.

Resource         Cost in Medium DC        Cost in Very Large DC    Ratio
Network          $95 / Mbps / month       $13 / Mbps / month       7.1x
Storage          $2.20 / GB / month       $0.40 / GB / month       5.7x
Administration   ≈140 servers/admin       >1000 servers/admin      7.1x

Table 1: Internet-Scale Service Efficiency [Source: James Hamilton]

This study concludes that hosted services by Cloud providers with super large datacenters (at least tens of thousands of servers) can achieve enormous economies of scale of five to seven times over smaller scale (thousands of servers) medium deployments. The significant cost savings are driven primarily by scale. Other key factors include location (low cost real estate and electricity rates, abundant water supply and readily available fiber-optic connectivity), proximity to electricity and power generators, load diversity, and virtualization technologies.
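
To get a feel for what these unit costs mean in absolute terms, here is a small calculation over a hypothetical workload using the Network and Storage rows of Table 1. The workload sizes are invented; only the unit costs come from Hamilton’s study:

```python
# Monthly cost gap between a medium and a very large datacenter,
# using the unit costs from Table 1 (workload figures are hypothetical).

NETWORK = {"medium": 95.0, "very_large": 13.0}   # $ per Mbps per month
STORAGE = {"medium": 2.20, "very_large": 0.40}   # $ per GB per month

def monthly_cost(dc: str, mbps: float, gb: float) -> float:
    return NETWORK[dc] * mbps + STORAGE[dc] * gb

# Hypothetical workload: 1,000 Mbps of bandwidth and 500,000 GB of storage.
medium = monthly_cost("medium", 1_000, 500_000)
large = monthly_cost("very_large", 1_000, 500_000)
print(f"medium DC: ${medium:,.0f}/month, very large DC: ${large:,.0f}/month, "
      f"{medium / large:.1f}x gap")
```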

Will this mark the beginning of the end for traditional on-premises datacenters? Can enterprise IT continue to justify new business cases for expanding today’s non-renewable energy powered datacenters? According to a McKinsey article, the costs to launch a large enterprise datacenter have risen sharply from $150M to over $500M over the past five years. The facility operating costs are also increasing at about twenty percent per year; at that rate, operating costs double roughly every four years. How long will the status quo last for enterprise IT considering the recent trend of Cloud service providers? Major players such as Google, Microsoft as well as the U.S. government itself have invested in or are planning ultra energy-efficient mega-size datacenters (also known as “container hotels”) with massive commoditized containerization and proximity both to power sources and less expensive power rates. Bottom line: will the tide turn if the economics (radical cost savings) due to enormous economies of scale become too significant to ignore?

Despite the potential for significant cost savings, it is premature to declare the demise of traditional IT or the end of enterprise datacenters. After all, the rationale for today’s enterprise IT extends well beyond simplistic bottom-line economics – at least for now. To most industry observers, enterprise datacenters are unlikely to disappear although the traditional roles of enterprise IT will be changing. A likely scenario may involve redistributing IT personnel from low-level system operational tasks to higher-level functions involving governance, energy management, security and business processes. Such change will not only become more apparent but will likely be precipitated by the rise of hybrid Clouds and the growing interconnection linking SOA, BPM and social computing. Another likely scenario is the rise of the mega datacenters or “container hotels” for Cloud Utility Computing providers. Although the global economic outlook will undoubtedly play a key role in shaping the development plans/timelines of the mega datacenters, they are here to stay. Case in point: by 2012, Intel estimates that it will design and ship about a quarter of the server chips it sells to such mega datacenters.


OpsCamp Through an Internet-scale Lens

OpsCamp Austin 2010

Like Agile Roots in Salt Lake City in June 2009, OpsCamp in Austin last week demonstrated how powerful grass roots conferences can be. We might not have had big names on the roster, but we sure had a productive dialog on the tricky issues lurking at the cusp between software development and IT operations in Cloud environments.

The conference has been amply covered by Michael Cote, John Willis, Mark Hinkle, and Damon Edwards (to name a few). This post restricts itself to commenting on one fundamental aspect of the cloud which IMHO does not get the attention it deserves. It might be implied in various discourses on the subject, but I believe it needs to be called out as a fundamental assumption for just about anything and everything one might consider doing with respect to the cloud. I am referring to economies of scale.

As pointed out in a forthcoming book on Cloud Computing by colleague and friend Annie Shum, the cloud phenomenon is fundamentally driven by substantial economies of scale in very large data centers. The operational costs of running such data centers are close to an order of magnitude lower than those prevailing in small and mid-sized data centers. User benefits are primarily derived from these compelling economies of scale.

I will be asking Annie to write a detailed guest post on the subject for readers of The Agile Executive. Until her post is published here, I would recommend we primarily consider the Cloud as a phenomenon that only becomes meaningful at scale. In particular, Private Clouds are not likely to yield Internet-scale efficiencies. Folks who regard their company’s conventional data center as a private cloud might be missing out on the ‘secret sauce’ of cloud computing.

The various agile system administration schemes discussed at the Austin OpsCamp are essential to attaining the requisite economies of scale in cloud services. Watch out for follow-on OpsCamps in other cities for developments to come in this all-important space.

Agile Infrastructure

Ten years ago I probably would not have seen any connection between global warming and server design. Today, power considerations prevail in the packaging of servers, particularly those slated for use in large and very large data centers. The dots have been connected to characterize servers in terms of their eco-footprint.

In his Agile Austin presentation a couple of days ago, Cote delivered a strong case for connecting the dots of Agile software development with those of Cloud Computing. Software development and IT operations become largely inseparable in cloud environments. In many of these environments, customer feedback is given in real time and needs to be responded to in an ultra-fast manner. Companies that develop fast closed-loop feedback and response systems are likely to have a major competitive advantage. They can make development and investment decisions based on actual user analytics, feature analytics and aggregate analytics instead of speculating about what might prove valuable.
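
As an illustration of the kind of closed-loop signal involved, consider ranking features by users reached per dollar of ongoing investment. The feature names and figures below are fabricated; a real system would pull them from a telemetry store:

```python
# Sketch of steering investment by actual feature analytics rather than
# speculation (all data here is made up for illustration).

features = {
    "search":    {"weekly_users": 42_000, "dev_cost_per_week": 8_000},
    "reports":   {"weekly_users":  1_200, "dev_cost_per_week": 9_500},
    "dashboard": {"weekly_users": 18_500, "dev_cost_per_week": 4_000},
}

# Rank features by weekly users reached per dollar of ongoing investment.
ranked = sorted(features.items(),
                key=lambda kv: kv[1]["weekly_users"] / kv[1]["dev_cost_per_week"],
                reverse=True)

for name, stats in ranked:
    ratio = stats["weekly_users"] / stats["dev_cost_per_week"]
    print(f"{name}: {ratio:.2f} users per dollar per week")
```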

While the connection between Agile and Cloud might not be broadly recognized yet, the subject IMHO is of paramount importance. In recognition of this importance, Michael Cote, John Allspaw, Andrew Shafer and I plan to dig into it in a podcast next week. Stay tuned…