The Agile Executive

Making Agile Work

Archive for the ‘Trends’ Category

Consumerization of Enterprise Software


Source: http://www.flickr.com/photos/ross/3055802287/

Figure 1: Consumerization of IT

The devastation in traditional publishing needs precious little mention. Just think about a brand like BusinessWeek selling for a meager cash offer in the $2 million to $5 million range, McGraw Hill getting into interactive textbooks through Inkling, or Flipboard delivering “… your personalized social magazine” to your iPad. This devastation might not have gotten the attention that the plight of the ‘big three’ automobile manufacturers got, but in its own way it is as shocking as a visit to the abandoned properties of Detroit.

As most of my clients do enterprise software, many of my discussions with them are about the consumerization of IT. From a day-to-day perspective this consumerization is primarily about six aspects:

  • Use of less expensive/consumer-focused components as infrastructure
  • ‘Pay as you go’ pricing (through Cloud pricing mechanisms/policies)
  • Use of web application interfaces to monitor IT infrastructure
  • Use of mobile and consumer-based devices for accessing IT alerts and interfacing with systems
  • Use of the fast-growing number of mobile applications to enhance productivity
  • Application of enterprise social networks and social software in the data center

From a strategic perspective, IT consumerization IMHO is all about the transformation toward “everything as a service” [1]. The virtuous cycle driven by Cloud, Mobile and Social manifests itself at three levels:

  • It obviously affects the IT folks with whom I discuss the subject. Immense changes are already taking place in many IT departments.
  • It affects their company. For example, the company might need to change the business design in order to optimize its supply chain.
  • It affects the clients of their company. Their definition of value these days changes faster than it takes the CIO I speak with to say “value.”

© Copyright 2010 Israel Gat

Figure 2: The Virtuous Cycle of Cloud, Mobile and Social

Sometimes I get push-back from my clients on this topic. The push-back is usually rooted in the immense complexity (and fragility) of the enterprise software systems that have been built over the past ten, twenty or thirty years. The folks who push back on me point out that consumerization of IT will not scale big time until enterprise software gets “consumerized” or at least modernized.

I agree with this good counterpoint, but only up to a point. I believe two factors are likely to accelerate the pace toward “consumerization” of enterprise software:

  1. Any department/business unit that can get a service in its entirety from an outside source is likely to do so without worrying about enterprise software and/or data center considerations. This is already happening in Marketing. As other functions start doing so, more and more links in the value chain of enterprise software will be “consumerized.” In other words, these services will be carried out without the involvement of the IT department.
  2. Once the switch-over costs from legacy code to state-of-the-art code are less than the steady state costs (to maintain and update legacy code), the “consumerization” of enterprise software is going to happen with ferocious urgency.
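To make the second factor tangible, here is a minimal back-of-the-envelope sketch (in Python) of the comparison it describes. All figures – the switch-over cost, the annual maintenance cost, the horizon and the discount rate – are hypothetical placeholders rather than data from any real project:

    # Compare a one-time switch-over (modernization) cost with the discounted
    # cost of continuing to maintain the legacy code. Hypothetical figures only.
    def npv(annual_cost, years, discount_rate):
        """Net present value of a recurring annual cost."""
        return sum(annual_cost / (1 + discount_rate) ** y for y in range(1, years + 1))

    switch_over_cost = 4_000_000   # one-time cost to move to state-of-the-art code
    steady_state_cost = 1_200_000  # annual cost to maintain and update the legacy code
    horizon_years = 5
    discount_rate = 0.08

    cost_of_staying = npv(steady_state_cost, horizon_years, discount_rate)
    if switch_over_cost < cost_of_staying:
        print(f"Modernize: ${switch_over_cost:,.0f} < ${cost_of_staying:,.0f} of continued maintenance")
    else:
        print(f"Status quo is still cheaper over {horizon_years} years")

The point is not the specific numbers but the crossover: once the one-time cost drops below the discounted cost of staying put, the urgency kicks in.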

If you are in enterprise software you need to start modernizing your applications today. The reason is the imperative need to mitigate risk before reaching the end-point, almost irrespective of how far down the road the end-point might be. See Llewellyn Falco’s excellent video clip Rewriting Vs Refactoring for a crisp articulation of the risk involved in rewriting and why starting to refactor now is the best way to mitigate that risk.

Footnotes:

[1] The phrase “Everything as a Service” was coined by Russ Daniels.


‘Super-Fresh’ Code


Source: http://www.flickr.com/photos/21560098@N06/3636921327/

Misty Belardo published a great post/video clip on Social Media in which she describes the effect of the ‘Super-Fresh’ web on brands:

…  millions of people are creating content for the social web… the next 3 billion consumers will access the Internet from a mobile device. Imagine what that means for bad customer experiences! The ‘super-fresh’ web will force brands to engage with its customers…

I would contend we are also going to experience ‘Super-Fresh’ code before too long. Such code is likely to emerge from the convergence of two overarching trends:

  1. Continuous Integration –> Continuous Deployment –> ‘Super-Fresh’ Code. Sophisticated companies are already translating velocity in dev to competitive advantage through Continuous Deployment. ‘Super-Fresh’ code is a natural next step.
  2. Open-sourcing –> Crowd-sourcing –> Expert-sourcing. Marketplaces for knowledge work expertise are becoming both effective and efficient. For example, uTest indicates “… 25,000+ testers in more than 160 countries.” A marketplace for mobile application developers could probably be organized along fairly similar lines.

No doubt, complex software systems of various kinds will continue to be produced through more conventional processes for many years to come. However, ‘Super-Fresh’ code will establish itself as a new category. Code in this category will owe its robustness (and creativity!) to millions of people creating software and fixing it within extremely short time frames, not to process rigor.

In case you are still wondering about the premise, I would like to point out two corroborative facts:

  1. It is a small step from content to code.
  2. Various mobile applications are already being developed and tested in a manner different from the way web applications have been built.

A fascinating link exists between ‘Super-Fresh’ web and ‘Super-Fresh’ code. The dynamics (“Imagine what that means for bad customer experiences!”) Misty discusses in her blog post are a major driver for the evolution of knowledge work marketplaces and for the production of ‘Super-Fresh’ code.

Forrester on Managing Technical Debt


Forrester Research analysts Dave West and Tom Grant just published their report on Agile 2010. Here is the section in their report on managing technical debt:

Managing technical debt

Dave: The Agile community has faced a lot of hard questions about how a methodology that breaks development into short iterations can maintain a long-term view on issues like maintainability. Does Agile unintentionally increase the risk of technical debt? Israel Gat is leading some breakthrough thinking in the financial measures and ramifications of technical debt. This topic deserves the attention it’s beginning to receive, in part because of its ramifications for backlog management and architecture planning. Application development professionals should:

  • Start capturing debt. Even if it is just by encouraging developers to note issues in code comments as they write the code, or by putting in place more formal peer review processes where debt is captured, it is important to document debt as it accumulates.
  • Start measuring debt. Once debt is captured, placing a value/cost on it enables objective discussions. It also enables reporting that gives the organization transparency into its growing debt. I believe this approach would enable application and product end-of-life decisions to be made earlier and with more accuracy.
  • Adopt standard architectures and open-source models. The more people who look at a piece of code, the more likely debt will be reduced. The simple truth is that many people using the same software makes it simpler and less prone to debt.

Tom: Since the role I serve, the product manager in technology companies, sits on the fault line between business and technology, I’m really interested in where Israel Gat and others take this discussion. The era of piling up functionality in the hopes that customers will be impressed with the size of the pile is clearly ending. What will replace it is still undetermined.

I will be responding to Tom’s good question in various posts along the way. For now I would just like to mention the tremendous importance of automated technical debt assessment. Typical velocity of formal code inspection is 100-200 lines of code per hour. Useful and important as formal code inspection is, there is only so much that can be inspected through our eyes, expertise and brains. The tools we use nowadays to do code analysis apply to code bases of any size. Consequently, the assessment of quality (or lack thereof) shifts from the local to the global. It is no longer a matter of an arcane code metric in an esoteric Java class that precious few folks ever hear of. Rather, it is a matter of overall quality in the portfolios of projects/products a company possesses. As mentioned in an earlier post, companies that capitalize software will sooner or later need to report technical debt as a line item on their balance sheet. It will simply be listed as a liability.

From a governance perspective, technical debt techniques give us the opportunity to carry out consistent governance of the software process based on a single source of truth. The single source of truth is, of course, the code itself. The very same truth is reflected at every level in the organization. For the developer in the trenches the truth manifests itself as a blocking violation in a specific line of code. For the CFO it is the need to “pay back” $500K in the very same project. Different as the two views are, they are absolutely consistent. They merely differ in the level of aggregation.
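As a minimal illustration of this “single source of truth at different levels of aggregation,” the sketch below (in Python) rolls individual findings up into the dollar figure a CFO would see. The violations, effort estimates and remediation rate are hypothetical placeholders, not the output or API of any particular analysis tool:

    # Roll per-violation findings, as a static analysis tool might report them,
    # up to a project-level dollar figure. All data below are hypothetical.
    HOURLY_RATE = 100  # assumed blended remediation cost per hour

    # (file, rule, severity, estimated remediation hours)
    violations = [
        ("billing/Invoice.java", "cyclomatic-complexity", "blocker", 6.0),
        ("billing/Invoice.java", "duplicated-code",       "major",   2.5),
        ("catalog/Search.java",  "missing-unit-tests",    "major",   4.0),
    ]

    def project_debt(findings, rate=HOURLY_RATE):
        """Aggregate estimated remediation effort into a dollar figure."""
        return sum(hours for *_, hours in findings) * rate

    print(f"Developer view: {len(violations)} violations, "
          f"{sum(1 for v in violations if v[2] == 'blocker')} blocking")
    print(f"CFO view      : ${project_debt(violations):,.0f} to 'pay back' on this project")

Same underlying findings, two levels of aggregation.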

Extending the Scope of The Agile Executive


For the past 18 months Michael Cote and I have focused The Agile Executive on software methods, processes and governance. Occasional posts on cloud computing and devops have been supplementary in nature. Structural changes in the industry have generally been left to be covered by other blogs (e.g. Cote’s Redmonk blog).

We have recently reached the conclusion that The Agile Executive needs to cover structural changes in order to give a forward-looking view to its readers. Two reasons drove us to this conclusion:

  • The rise of software testing as a service. The importance of this trend was summarized in Israel’s recent Cutter blog post “Changing Playing Fields”:

Consider companies like BrowserMob (acquired earlier this month by NeuStar), Feedback Army, Mob4Hire, uTest (partnered with SOASTA a few months ago), XBOSoft and others. These companies combine web and cloud economics with the effectiveness and efficiency of crowdsourcing. By so doing, they change the playing fields of software delivery…

  • The rise of devops. The line between dev and ops, or at least between dev and web ops, is becoming fuzzier and fuzzier.

As monolithic software development and delivery processes get deconstructed, the structural changes affect methods, processes and governance alike. Hence, discussion of Agile topics in this blog will not be complete without devoting a certain amount of “real estate” to these two changes (software testing as a service and devops) and others that are no doubt forthcoming. For example, it is a small step from testing as a service to development as a service in the true sense of the word – through crowdsourcing, not through outsourcing.

I asked a few friends to help me cover forthcoming structural changes that are relevant to Agile. Their thoughts will be captured through either guest posts or interviews. In these posts/interviews we will explore topics for their own sake. We will connect the dots back to Agile by referencing these posts/interviews in the various posts devoted to Agile. Needless to say, Agile posts will continue to constitute the vast majority of posts in this blog.

We will start next week with a guest post by Peter McGarahan and an interview with Annie Shum. Stay tuned…

Harnessing Economies of Scale in Cloud Computing to Realize a Greener Computing Option


Economies of Scale have been much discussed in The Agile Executive since the recent OpsCamp in Austin, TX. The significant savings on system administration costs in very large data centers have been called out as a major advantage of Internet-scale Clouds. Unlike various short-lived advantages, the benefits to the Cloud operator, and to the Cloud user when the savings are passed on to him/her, are sustainable.

In this guest post, colleague and friend Annie Shum analyzes the various sources of waste in operations in traditional data centers. Like an Agilist with Lean inclinations who confronts an inefficient Waterfall process, Annie explains how economies of scale apply to the various kinds of waste that are prevalent in today’s small and medium data centers. Furthermore, she connects the dots that lead toward a Green IT option.

Here is Annie:

Harnessing Economies of Scale in Cloud Computing to Realize a Greener Computing Option

Scale Matters: “Over time, however, competitive advantage within categories shifts inexorably toward volume operations architecture.” – Geoffrey Moore, “Dealing with Darwin”

It is a truism that today’s datacenters are systemically inefficient. This is not intended as an indictment of all conventional datacenters. Nor does it imply that today’s datacenters cannot be made more efficient (incrementally) through right sizing and other initiatives, notably consolidation by deploying virtualization technologies and governance by enforcing energy conservation/recycling policies. There are a myriad of inefficiencies, however, that are prevalent in datacenters today.

Many industry observers lament the “staggering complexity” that permeates on-premises datacenters. Over time, most, if not all, enterprise IT datacenters have become amalgamations of disparate heterogeneous resources. Generally, they can be described as incohesive, perhaps even haphazard, accumulations. The datacenter components and configurations often reflect the intersections of organizational politics (LOB reporting structures leading to highly customized/organizational asset acquisitions and configurations), business needs of the moment (shifting corporate strategies and changing business imperatives to gain competitive edge or meet regulatory compliances) and technology limitations (commercial tools available in the marketplace). It should come as no surprise that human interactions and errors are considered a major contributor to the inefficiencies of datacenters: IBM reported that human errors account for seventy percent of the datacenter problems.

The challenge of maximizing energy efficiency begins fundamentally with the historical capital-intensive ownership model for computing assets, under which each organization operates its own datacenter and provides “24×7 availability” to its own users. The enterprise IT staff has been required to support unpredictable future growth, accommodate situational demands and unscheduled but deadline-critical events, meet performance levels within SLAs and comply with regulatory and auditing requirements. Hence, datacenters generally are over-configured and over-provisioned. In addition to highly skewed under-utilization of distributed platform servers, ninety percent of corporate datacenters have excess cooling capacity. Worst of all, according to IBM, about seventy-two percent of cooling bypassed the computing equipment entirely. Further compounding these problems for a typical enterprise datacenter is the lack of transparency and the inability to control energy consumption properly, due to inadequate and often inaccurate instrumentation for quantifying energy consumption and energy loss.

The economics of Cloud Computing can offer a compelling option for more efficient IT: by lowering power consumption for individual organizations and by improving the efficiency of a large number of discrete datacenters. Although the electricity consumption of Cloud Computing is projected to be one to two percent of today’s global electricity use, Cloud service providers can still cultivate sustainable Green IT effectively at lower costs by leveraging state-of-the-art, super energy-efficient massive datacenters, proximity to power generation (thereby reducing transmission costs) and, above all, enormous economies of scale. To better understand how Cloud Computing can offer greener computing and how it will help moderate power consumption by datacenters and rein in runaway costs, a good starting place is James Hamilton’s September 2008 study on “Internet-Scale Service Efficiency,” as summarized in the table below.

Resource          Cost in Medium DC      Cost in Very Large DC    Ratio
Network           $95 / Mbps / month     $13 / Mbps / month       7.1x
Storage           $2.20 / GB / month     $0.40 / GB / month       5.7x
Administration    ≈140 servers/admin     >1000 servers/admin      7.1x

Table 1: Internet-Scale Service Efficiency [Source: James Hamilton]

This study concludes that hosted services by Cloud providers with super large datacenters (at least tens of thousands of servers) can achieve economies of scale of five to seven times over smaller scale (thousands of servers) medium deployments. The significant cost savings are driven primarily by scale. Other key factors include location (low-cost real estate and electricity rates, abundant water supply and readily available fiber-optic connectivity), proximity to electricity and power generators, load diversity, and virtualization technologies.

Will this mark the beginning of the end for traditional on-premises datacenters? Can enterprise IT continue to justify new business cases for expanding today’s non-renewable energy powered datacenters? According to the McKinsey article, the costs to launch a large enterprise datacenter have risen sharply from $150M to over $500M over the past five years. The facility operating costs are also increasing at about twenty percent per year. How long will the status quo last for enterprise IT considering the recent trend of Cloud service providers? Major players such as Google, Microsoft as well as the U.S. government itself have invested in or are planning ultra energy-efficient mega-size datacenters (also known as “container hotels”) with massive commoditized containerization and proximity both to power source and less expensive power rates. Bottom line: will the tide turn if the economics (radical cost savings) due to enormous economies of scale become too significant to ignore?

Despite the potential for significant cost savings, it is premature to declare the demise of traditional IT or the end of enterprise datacenters. After all, the rationale for today’s enterprise IT extends well beyond simplistic bottom-line economics – at least for now. To most industry observers, enterprise datacenters are unlikely to disappear although the traditional roles of enterprise IT will be changing. A likely scenario may involve redistributing IT personnel from low-level system operational tasks to higher-level functions involving governance, energy management, security and business processes. Such change will not only become more apparent but will likely be precipitated by the rise of hybrid Clouds and the growing interconnection linking SOA, BPM and social computing. Another likely scenario is the rise of the mega datacenters or “container hotels” for Cloud Utility Computing providers. Although the global economic outlook will undoubtedly play a key role in shaping the development plans/timelines of the mega datacenters, they are here to stay. Case in point: by 2012, Intel estimates that it will design and ship about a quarter of the server chips it sells to such mega datacenters.


A Core Formula for Agile B2C Startups


Colleague Chris Sterling drew my attention to a Pivotal Labs talk by Nathaniel Talbott on Experiment-Driven Development (EDD). It is a forward-looking think piece, focused on development helping the business make decisions based on actual A/B Testing data. Basically, EDD to the business is like TDD to development.

Between this talk and a recent discussion with Columbia’s Yechiam Yemini on his Principle of Innovation and Entrepreneurship course, a core “formula” for Agile B2C startups emerges:

  1. Identify a business process P
  2. Create a minimum viable Internet service S to support P
  3. Apply EDD to S on just about any feature decision of significance
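As an illustration of step 3, here is a minimal sketch (in Python) of how an A/B test could settle a feature decision. The conversion counts are hypothetical, and a real experiment would also account for sample-size planning and practical significance, not just the p-value:

    # Decide a feature question with a two-sided z-test on two conversion rates.
    # Counts are hypothetical placeholders.
    from math import sqrt, erf

    def ab_test(conv_a, n_a, conv_b, n_b):
        """Return the two conversion rates and the two-sided p-value."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
        return p_a, p_b, p_value

    p_a, p_b, p = ab_test(conv_a=312, n_a=5_000, conv_b=368, n_b=5_000)
    print(f"A: {p_a:.1%}  B: {p_b:.1%}  p-value: {p:.3f}")
    if p < 0.05 and p_b > p_a:
        print("Ship variant B")
    else:
        print("No clear winner - keep experimenting")

The decision about the feature is thus driven by what users actually did, which is the essence of EDD.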

This core formula can be easily refined and extended. For example:

  • Criteria for choosing P could/should be established
  • Other kinds of testing (in addition to or instead of A/B testing) could be done
  • A customer development layer could be added to the formula
  • Many others…

By following this formula a startup can implement the Agile Triangle depicted below in a meaningful manner. Value is validated – it is determined based on real customer feedback rather than through conjectures, speculations or ego trips.

Figure 1 – The Agile Triangle (based on Figure 1-3 in Jim Highsmith’s Agile Project Management: Creating Innovative Products)

The quip “The voice of the people is the voice of God” has long been a tenet of musicians. The “formula” described above enables the Agile B2C startup to capture the voice of the people and thoughtfully act on it to accomplish business results.

Agile Infrastructure


Ten years ago I probably would not have seen any connection between global warming and server design. Today, power considerations prevail in the packaging of servers, particularly those slated for use in large and very large data centers. The dots have been connected to characterize servers in terms of their eco footprint.

In his Agile Austin presentation a couple of days ago, Cote delivered a strong case for connecting the dots of Agile software development with those of Cloud Computing. Software development and IT operations become largely inseparable in cloud environments. In many of these environments, customer feedback is given in “real time” and needs to be responded to in an ultra-fast manner. Companies that develop fast closed-loop feedback and response systems are likely to have a major competitive advantage. They can make development and investment decisions based on actual user analytics, feature analytics and aggregate analytics instead of speculating about what might prove valuable.

While the connection between Agile and Cloud might not be broadly recognized yet, the subject IMHO is of paramount importance. In recognition of this importance, Michael Cote, John Allspaw, Andrew Shafer and I plan to dig into it in a podcast next week. Stay tuned…