The Agile Executive

Making Agile Work

Posts Tagged ‘Annie Shum’

The Supply Side of the Consumerization of Enterprise Software


Source: http://www.flickr.com/photos/bertboerland/2944895894/

In my recent post about the consumerization of enterprise software I discussed two factors that are likely to accelerate the pace toward such consumerization:

  1. Any department/business unit that can get a service in entirety from an outside source is likely to do so without worrying about enterprise software and/or data center considerations. This is already happening in Marketing. As other functions start doing so, more and more links in the value chain of enterprise software will be “consumerized.” In other words, these services will be carried out without the involvement of the IT department.
  2. Once the switch-over costs from legacy code to state-of-the-art code are less than the steady state costs (to maintain and update legacy code), the “consumerization” of enterprise software is going to happen with ferocious urgency.

In this post I would like to add a third factor – the buying pattern. My contention is that the buying pattern for micro-apps will spread to enterprise applications. Potential demand for buying in this way is huge. Supply of enterprise software sold as micro-apps is not quite there yet, but it would take only one smart vendor to start transforming the traditional pattern by which enterprise software is chunked, offered and sold.

Think about your recent experience downloading an application to your smart mobile phone. You did not go through a six-month evaluation period; you did not do a comprehensive competitive analysis; you did not check how well the seller does customer support in Sumatra. You simply paid something like $7.99 and downloaded the application. You are more than happy if it fulfills your needs in a reasonable manner. If it does not, you simply buy another application with the functionality you desire. Maybe you are a little more cautious now and ask a friend or send an inquiry to your Twitter followers before you pick the new application. Whatever you might choose to do, the fundamental facts are: A) you can afford to lose $7.99; and, B) your time is more precious than the sunk cost of the application. You simply move on.

This buying pattern is not something that you are going to forget when you step into your office in the morning. It makes perfect sense to you and it would be good for your company. You would rather concentrate on your business than on the tricky language of clause number 734 in the contract that your department’s attorney prepared for licensing yet another piece of enterprise software.

The ‘$7.99 experience’ you and a zillion other folks like you had over the past week or the past month makes enterprise software vendors extremely vulnerable. The “high-touch; high-margin; high-commitment” [1] business design is not sustainable once the purchase model changes. The expensive machinery of professional services, system engineering and customer support is not affordable in the face of competition that constructs modular chunks of enterprise software and sells them at a price the customer can afford to write off (if they do not perform to satisfaction). Maybe the ceiling in the enterprise to ‘forget about this application and move on’ is no higher than $1,000 (instead of ‘no higher than $7.99’ for the private citizen), but a smart vendor can still make a lot of money selling at a thousand dollars a pop to the enterprise.

The growing gap between “this lovely application on my iPhone” and the “headache of licensing traditional enterprise software” is an immense incentive for up-and-coming software vendors to use the ‘$7.99 experience’ as the heart of a new business design. This new business design can be summarized simply as “low-touch; low-margin; low-commitment” [2]. And, yes, it is very disruptive to the incumbents…

My hunch is that the IT Service Management (ITSM) industry will be the first to crumble. The premise of “service delivery” sounds a little hollow in a cloud computing world characterized by “everything as a service” [3]. Would a buyer really be willing to pay for “service for the service” from a vendor who does not actually provide the underlying service?! It sounds like paying a Fidelity or a Vanguard investment manager to manage a portfolio of their own mutual funds for you…

All it takes for this shift to start – in ITSM or in another part of enterprise software –  is one successful vendor.

Footnotes:

[1] I am indebted to Annie Shum for this phrase.

[2] Ibid.

[3] I am indebted to Russ Daniels for this phrase.

As If Another Proof Point Was Needed


Annie Shum’s interview earlier this week gave readers of this blog a multi-dimensional view of imminent changes in IT. If you needed independent validation, it came yesterday through the words of EMC’s Chuck Hollis at national solution provider GreenPages Technology Solutions’ 14th annual summit:

Vice President Global Marketing CTO Chuck Hollis Monday said the changes resulting from the storage giant’s own no-holds barred journey to the private cloud led to a decline in IT employee job satisfaction…

Hollis said the internal IT satisfaction drop came in the second phase of the EMC cloud revolution focused squarely on mission critical applications. That second phase — which EMC is in the midst of now — has sparked major changes in IT jobs as the company has replaced IT management, security staff and backend IT staff.

“During this phase, this is where org (organizational) chart issues started to come in,” Hollis said. “People’s jobs started to change. Younger people in the organization were being promoted over older people.”

As if another proof point to add to Annie’s rigorous data was needed…

Written by israelgat

August 4, 2010 at 7:30 am

Extending the Scope of The Agile Executive


For the past 18 months Michael Cote and I focused The Agile Executive on software methods, processes and governance. Occasional posts on cloud computing and devops have been supplementary in nature. Structural changes in the industry have generally been left to be covered by other blogs (e.g., Cote’s Redmonk blog).

We have recently reached the conclusion that The Agile Executive needs to cover structural changes in order to give a forward-looking view to its readers. Two reasons drove us to this conclusion:

  • The rise of software testing as a service. The importance of this trend was summarized in Israel’s recent Cutter blog post “Changing Playing Fields”:

Consider companies like BrowserMob (acquired earlier this month by NeuStar), Feedback Army,  Mob4Hire,  uTest (partnered with SOASTA a few months ago), XBOSoft and others. These companies combine web and cloud economics with the effectiveness and efficiency of crowdsourcing. By so doing, they change the playing fields of software delivery…

  • The rise of devops. The line between dev and ops, or at least between dev and web ops, is becoming fuzzier and fuzzier.

As monolithic software development and delivery processes get deconstructed, the structural changes affect methods, processes and governance alike. Hence, discussion of Agile topics in this blog will not be complete without devoting a certain amount of “real estate” to these two changes (software testing as a service and devops) and others that are no doubt forthcoming. For example, it is a small step from testing as a service to development as a service in the true sense of the word – through crowdsourcing, not through outsourcing.

I asked a few friends to help me cover forthcoming structural changes that are relevant to Agile. Their thoughts will be captured through either guest posts or interviews. In these posts/interviews we will explore topics for their own sake. We will connect the dots back to Agile by referencing these posts/interviews in the various posts devoted to Agile. Needless to say, Agile posts will continue to constitute the vast majority of posts in this blog.

We will start next week with a guest post by Peter McGarahan and an interview with Annie Shum. Stay tuned…

Harnessing Economies of Scale in Cloud Computing to Realize a Greener Computing Option


Economies of Scale have been much discussed in The Agile Executive since the recent OpsCamp in Austin, TX. The significant savings on system administration costs  in very large data centers have been called out as a major advantage of Internet-scale Clouds. Unlike various short-lived advantages, the benefits to the Cloud operator, and to the Cloud user when the savings are passed on to him/her, are sustainable.

In this guest post, colleague and friend Annie Shum analyzes the various sources of waste in operations in traditional data centers. Like an Agilist with Lean inclinations who confronts an inefficient Waterfall process, Annie explains how economies of scale apply to the various kinds of waste that are prevalent in today’s small and medium data centers. Furthermore, she connects the dots that lead toward a Green IT option.

Here is Annie:

Harnessing Economies of Scale in Cloud Computing to Realize a Greener Computing Option

Scale Matters: “Over time, however, competitive advantage within categories shifts inexorably toward volume operations architecture.” – Geoffrey Moore, “Dealing with Darwin”

It is a truism that today’s datacenters are systemically inefficient. This is not intended as an indictment of all conventional datacenters. Nor does it imply that today’s datacenters cannot be made more efficient (incrementally) through right sizing and other initiatives, notably consolidation by deploying virtualization technologies and governance by enforcing energy conservation/recycling policies. There are a myriad of inefficiencies, however, that are prevalent in datacenters today.

Many industry observers lament the “staggering complexity” that permeates on-premises datacenters. Over time, most, if not all, enterprise IT datacenters have become amalgamations of disparate heterogeneous resources. Generally, they can be described as incohesive, perhaps even haphazard, accumulations. The datacenter components and configurations often reflect the intersections of organizational politics (LOB reporting structures leading to highly customized/organizational asset acquisitions and configurations), business needs of the moment (shifting corporate strategies and changing business imperatives to gain a competitive edge or meet regulatory compliance) and technology limitations (commercial tools available in the marketplace). It should come as no surprise that human interactions and errors are considered a major contributor to the inefficiencies of datacenters: IBM reported that human errors account for seventy percent of datacenter problems.

The challenge of maximizing energy efficiency begins fundamentally with the historical capital-intensive ownership model for computing assets that enables each organization to operate its own datacenter and to provide “24×7 availability” to its own users. The enterprise IT staff has been required to support unpredictable future growth, accommodate situational demands and unscheduled but deadline-critical events, meet performance levels within SLAs and comply with regulatory and auditing requirements. Hence, datacenters generally are over-configured and over-provisioned. In addition to highly skewed under-utilization of distributed platform servers, ninety percent of corporate datacenters have excess cooling capacity. Worst of all, according to IBM, about seventy-two percent of cooling bypassed the computing equipment entirely. Further compounding these problems for a typical enterprise datacenter is the lack of transparency and the inability to control energy consumption properly, due to inadequate and often inaccurate instrumentation for quantifying energy consumption and the waste from energy lost.

The economics of Cloud Computing can offer a compelling option for more efficient IT: by lowering power consumption for individual organizations and by improving the efficiency of a large number of discrete datacenters. Although the electricity consumption of Cloud Computing is projected to be one to two percent of today’s global electricity use, Cloud service providers can still cultivate sustainable Green I.T. effectively at lower costs by leveraging state-of-the-art, super energy-efficient massive datacenters, proximity to power generation (thereby reducing transmission costs) and, above all, harnessing enormous economies of scale. To better understand how Cloud Computing can offer greener computing in the Cloud and how it will help moderate power consumption by datacenters and rein in run-away costs, a good starting place is James Hamilton’s September 2008 study on “Internet-Scale Service Efficiency”, as summarized in the table below.

Resource         Cost in Medium DC       Cost in Very Large DC     Ratio
Network          $95 / Mbps / month      $13 / Mbps / month        7.1x
Storage          $2.20 / GB / month      $0.40 / GB / month        5.7x
Administration   ≈140 servers/admin      >1000 servers/admin       7.1x

Table 1: Internet-Scale Service Efficiency [Source: James Hamilton]

This study concludes that hosted services by Cloud providers with super-large datacenters (at least tens of thousands of servers) can achieve enormous economies of scale of five to seven times over smaller-scale (thousands of servers) medium deployments. These significant cost savings are driven primarily by scale. Other key factors include location (low-cost real estate and electricity rates, abundant water supply and readily available fiber-optic connectivity), proximity to electricity and power generators, load diversity, and virtualization technologies.
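As a quick sanity check on the arithmetic, here is a minimal sketch (in Python, purely illustrative) that recomputes the medium-to-very-large cost ratios from the figures in Table 1. The published ratios were presumably derived from unrounded data, so the recomputed values differ slightly from the 7.1x and 5.7x shown above.

```
# Recompute the economies-of-scale ratios from the Table 1 figures.
# Administration is quoted as servers per admin, so the per-server
# admin cost is the reciprocal of that figure.

table_1 = {
    # resource: (cost in medium DC, cost in very large DC, unit)
    "Network":        (95.0,      13.0,       "$/Mbps/month"),
    "Storage":        (2.20,      0.40,       "$/GB/month"),
    "Administration": (1.0 / 140, 1.0 / 1000, "admins/server"),
}

for resource, (medium, very_large, unit) in table_1.items():
    ratio = medium / very_large   # how much cheaper the very large DC is
    print(f"{resource:<15} ({unit:<13}) ratio ≈ {ratio:.1f}x")
```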

Will this mark the beginning of the end for traditional on-premises datacenters? Can enterprise IT continue to justify new business cases for expanding today’s non-renewable-energy-powered datacenters? According to the McKinsey article, the costs to launch a large enterprise datacenter have risen sharply from $150M to over $500M over the past five years. Facility operating costs are also increasing at about twenty percent per year. How long will the status quo last for enterprise IT, considering the recent moves of Cloud service providers? Major players such as Google and Microsoft, as well as the U.S. government itself, have invested in or are planning ultra energy-efficient, mega-size datacenters (also known as “container hotels”) with massive commoditized containerization and proximity both to power sources and to less expensive power rates. Bottom line: will the tide turn if the economics (radical cost savings) due to enormous economies of scale become too significant to ignore?

Despite the potential for significant cost savings, it is premature to declare the demise of traditional IT or the end of enterprise datacenters. After all, the rationale for today’s enterprise IT extends well beyond simplistic bottom-line economics – at least for now. To most industry observers, enterprise datacenters are unlikely to disappear, although the traditional roles of enterprise IT will be changing. A likely scenario may involve redistributing IT personnel from low-level system operational tasks to higher-level functions involving governance, energy management, security and business processes. Such change not only will become more apparent but will likely be precipitated by the rise of hybrid Clouds and the growing interconnection linking SOA, BPM and social computing. Another likely scenario is the rise of the mega datacenters or “container hotels” for Cloud Utility Computing providers. Although the global economic outlook will undoubtedly play a key role in shaping the development plans and timelines of the mega datacenters, they are here to stay. Case in point: Intel estimates that by 2012 about a quarter of the server chips it sells will be designed for and shipped to such mega datacenters.


OpsCamp Through an Internet-scale Lens


OpsCamp Austin 2010

Like Agile Roots in Salt Lake City in June 2009, OpsCamp in Austin last week demonstrated how powerful grass-roots conferences can be. We might not have had big names on the roster, but we sure had a productive dialog on the tricky issues lurking at the cusp between software development and IT operations in Cloud environments.

The conference has been amply covered by Michael Cote, John Willis, Mark Hinkle, and Damon Edwards (to name a few). This post restricts itself to commenting on one fundamental aspect of the cloud which, IMHO, does not get the attention it deserves. It might be implied in various discourses on the subject, but I believe it needs to be called out as a fundamental assumption for just about anything and everything one might consider doing with respect to the cloud. I am referring to economies of scale.

As pointed out in a forthcoming book on Cloud Computing by colleague and friend Annie Shum, the cloud phenomenon is fundamentally driven by substantial economies of scale in very large data centers. The operational costs of running such data centers are close to an order of magnitude lower than those prevailing in small and mid-sized data centers. User benefits are primarily derived from these compelling economies of scale.

I will be asking Annie to write a detailed guest post on the subject for readers of The Agile Executive. Until her post is published here, I would recommend we primarily consider the Cloud as a phenomenon that only becomes meaningful at scale. In particular, Private Clouds are not likely to yield Internet-scale efficiencies. Folks who regard their company’s conventional data center as a private cloud might be missing out on the ‘secret sauce’ of cloud computing.

The various agile system administration schemes discussed at the Austin OpsCamp are essential to attaining the requisite economies of scale in cloud services. Watch out for follow-on OpsCamps in other cities for developments to come in this all-important space.

Cloud Computing Forecasts: “Cloudy” Future for Enterprise IT


In a comment on The Urgency of Now, Marcel Den Hartog discusses technology assimilation in the face of hype:

But if people are already reluctant to run the things they have, on another platform they already have, on an operating system they are already familiar with (Linux on zSeries), how can you expect them to even look at cloud computing seriously? Every technological advancement requires people to adapt and change. Human nature is that we don’t like that, so it often requires a disaster to change our behavior. Or carefully planned steps to prove and convince people. However, nothing makes IT people more cautious than a hype. And that is how cloud is perceived. When the press, the analysts and the industry start writing about cloud as part of the IT solution, people will want to change. Now that it’s presented as the silver bullet to all IT problems, people are cautious to say the least.

Here is Annie Shum‘s thoughtful reply to Marcel’s comment:

Today, the Cloud era has only just begun. Despite lingering doubts, growing concerns and widespread confusion (especially in separating media and vendor spun hype from reality), the IT industry generally views Cloud Computing as more appealing than traditional ASP/hosting or outsourcing/off-shoring. To technology-centric startups and nimble entrepreneurs, Cloud Computing enables them to punch above their weight class. By turning up-front CapEx into a more scalable and variable cost structure based on an on-demand, pay-as-you-go model, Cloud Computing can provide a temporary, level playing field. Similarly, many budget-constrained and cash-strapped organizations also look to Cloud Computing for immediate (friction-free) access to “unlimited” computing resources. To wit: Cloud Computing may be considered a utility-based alternative to an on-premises datacenter, allowing an organization (notably cash-strapped startups) to “Think like a ‘big guy’. Pay like a ‘little guy’.”

Forward-thinking organizations should not lose sight of the vast potential of Cloud Computing that extends well beyond short-term economics. At its core, Cloud Computing is about enabling business agility and connectivity by abstracting computing infrastructure via a new set of flexible service delivery/deployment models. Harvard Business School Professor Andrew McAfee painted a “Cloudy” future for Corporate IT in his August 21, 2009 blog and cited a perceptive 1983 paper by Warren D. Devine, Jr. in the Journal of Economic History called “From Shafts to Wires: Historical Perspective on Electrification”.[1] There are three key take-away messages that resonate with the current Cloud Computing paradigm shift. First: The real impact of the new technology was not apparent right away. Second: The transition to full utilization of the new technology will be long, but inevitable. Third: There will be detractors and skeptics about the new technology throughout the transition. Interestingly, the telephone is another groundbreaking disruptive technology that might have faced similar skepticism in the beginning. Legend has it that a Western Union internal memo dated 1876 downplayed the viability of the telephone: “This ‘telephone’ has too many shortcomings to be seriously considered as a means of communications. The device is inherently of no value to us.”

The dominance of Cloud Computing as a computing platform, however, is far from a fait accompli. Nor will it ever be complete, “one-size-fits-all” or a “big and overnight switch”. The shape of computing is constantly changing, but it is always a blended and gradual transition, analogous to a modern city. While the cityscape continues to change, a complete “rip-and-replace” overhaul is rarely feasible or cost-effective. Instead, city planners generally preserve legacy structures, although some of them are retrofitted with standards-based interfaces that enable them to connect to the shared infrastructure of the city. For example, the Paris city planners retrofitted Notre Dame with facilities such as electricity, water, and plumbing. Similarly, despite the passage of the last three computing paradigm shifts – first mainframe, next Client/Server and PCs, and then Web N-tier – they all co-exist and can be expected to continue in the future. Consider the following: major shares of mission-critical business applications are running today on mainframe servers. Through application modernization, legacy applications – Cobol applications, for example – can now operate in a Web 2.0 environment as well as deploy in the Cloud via the Amazon EC2 platform.

Cloud Computing can hold great appeal for a wide swath of organizations spanning startups, SMBs, ISVs, enterprise IT and government agencies. The most commonly cited benefits range from the promise of avoiding CapEx and lowering TCO to on-demand elasticity, immediacy and ease of deployment, time to value, location independence and catalyzing innovation. However, there is no magic in the Cloud and it is certainly not a panacea for all IT woes. Some applications are not “Cloud-friendly”. While deploying applications in the Cloud can enable business agility incrementally, such deployment will not fundamentally change the characteristics of the applications to be highly scalable, flexible and automatically responsive to new business requirements. Realistically, one must recognize that many of the challenging problems – security, data integration and service interoperability in particular – will persist and live on regardless of the computing delivery medium: Cloud, hosted or on-premises.

[1] “The author combed through the contemporaneous business and technology press to learn what ‘experts’ were saying as manufacturing switched over from steam to electrical power, a process that took about 50 years to complete.” – Andrew McAfee, September 21, 2009.
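Annie’s point about turning up-front CapEx into a pay-as-you-go cost structure can be made concrete with a small break-even sketch. Every number below is a hypothetical placeholder chosen for illustration, not a figure from her reply or from any vendor’s price list:

```
# Hypothetical break-even between owning a server (up-front CapEx, amortized)
# and renting comparable on-demand capacity by the hour (pay-as-you-go OpEx).
# All numbers are illustrative placeholders, not real pricing.

server_capex = 6_000.0        # purchase + setup cost of an owned server
amortization_months = 36      # assume a three-year useful life
on_demand_rate = 0.40         # $ per instance-hour for comparable capacity

owned_per_month = server_capex / amortization_months   # paid whether used or not

for hours_per_month in (50, 200, 500):
    cloud_per_month = on_demand_rate * hours_per_month
    cheaper = "pay-as-you-go" if cloud_per_month < owned_per_month else "owned"
    print(f"{hours_per_month:>3} h/month: owned ${owned_per_month:7.2f}, "
          f"cloud ${cloud_per_month:7.2f} -> {cheaper} wins")
```

The point is not the specific numbers but the shape of the comparison: light or spiky workloads favor the variable-cost model, while steady, heavily utilized workloads can still justify owned capacity.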

I will go one step further and add quality to Annie’s list of challenging problems. A crappy on-premises application will continue to be crappy in the cloud. An audit of the technical debt should be conducted before “clouding” an application. See Technical Debt on Your Balance Sheet for a recommendation on quantifying the results of the quality audit.

The Urgency of Now – Guest Post by Annie Shum


Failure to learn, failure to anticipate, and failure to adapt are the three generic causes of military disasters. Each one of these three failures is bad enough. In combination, they can be catastrophic. Germany swiftly defeated and conquered France in 1940 due to the utter failure of the French army to grasp the nature of future war, to conceive the probable action of the German forces and to react adequately to the German initiative once it unfolded through the Ardennes. The patterns leading to the catastrophe suffered by the French are similar in some ways to the eco-meltdowns described by Jared Diamond in Collapse: How Societies Choose to Fail or Succeed.

In this guest post, colleague and friend Annie Shum poses disturbing questions with respect to our willingness and ability as IT professionals to learn, anticipate and adapt to the imperatives of Cloud Computing. Between shockingly low (15%) server capacity utilization on the one hand, and dramatic changes in the needs of the business on the other hand, companies that continue to use industrial-era IT models are at peril. Annie weaves these and other related threads together, and makes a resounding call-to-action to re-think IT.

It is remarkable that Annie’s analysis herein of the root causes of a possible meltdown in IT identifies worrisome patterns similar to those that the Agile movement has pointed out with respect to arcane methods of software development. The very same core problems that afflict software development manifest themselves in the IT paradigm as well as in the corresponding business design. Painful and wasteful as this repeated manifestation is, it actually creates the opportunity to manage software, IT, and the business in unison. To do so, we need to embrace a data-driven version of the economics of IT, to grasp the true nature of Cloud Computing without the hype that currently surrounds it, and to adapt software development, IT operations and business design accordingly. As the title of this post states, we need to start carrying out these three tasks now.

Here is Annie:

The Urgency of Now: The Edge of Chaos and A “Strategic Inflection Point” for IT

“It was the worst of times. It may be the best of times.” – IBM

Consider the following table. It contains a list of statistics pertaining to the enterprise datacenter index compiled by Peter Mell and Tim Grance of NIST. Overall, the statistics are sobering, perhaps even alarming, and do not bode well for the long-term sustainability of traditional on-premises datacenters. Prudent IT organizations – whether big or small, stalwart or startup – should consider this a wake-up call. In particular, across the almost twelve million servers in US datacenters today, typical server capacity utilization is only around fifteen percent. Although not explicitly shown in this table, the average utilization of mainframe z/OS servers is typically over eighty percent. However, mainframe z/OS servers are only a minor component of the overall server population, so they barely lift the overall average utilization.

Statistic                 Enterprise Datacenter Index
11,800,000                Servers in US datacenters
15%                       Typical server capacity utilization
$800,000,000,000/year     Purchasing & maintaining enterprise software
80%                       Software costs spent on maintenance: the “80-20” ratio
100x                      Power consumption/sq ft compared to an office building
4x                        Increase in server power consumption, 2001 to 2006
2x                        Increase in number of servers, 2001 to 2006
$21,300,000               Datacenter construction cost, 9,000 sq ft
$1,000,000/year           Annual cost to power the datacenter
1.5%                      Portion of national power generation
50%                       Potential power reduction from green technologies
2%                        Portion of global carbon emissions
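To see why the eighty-percent utilization of mainframe servers barely moves the fifteen-percent overall figure, consider a back-of-the-envelope weighted average. The split between mainframe and distributed servers below is a purely hypothetical illustration – the index above does not break it out – and only the overall fifteen-percent result echoes the table:

```
# Overall utilization is a server-weighted average. The mainframe share and
# the distributed-server utilization below are hypothetical illustrations,
# NOT figures from the NIST enterprise datacenter index.

mainframe_share = 0.02     # assume mainframes are ~2% of the server population
mainframe_util = 0.80      # "typically over eighty percent"
distributed_util = 0.137   # assumed utilization of the remaining ~98% of servers

overall_util = (mainframe_share * mainframe_util
                + (1 - mainframe_share) * distributed_util)
print(f"Overall utilization ≈ {overall_util:.0%}")  # ≈ 15%
```

Even with mainframes running hot, the sheer number of lightly used distributed servers dominates the average.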

 

Over the years, organizations have accepted such skewed levels of server inefficiency and escalating maintenance costs of IT infrastructure as the norm. Even as organizations continue to express concerns, many seem tacitly resigned to the status quo, akin to what Bob Evans of InfoWeek described as “insurmountable laws of physics.” Looking ahead, however, the status quo may no longer be a viable option for most organizations. Soaring electricity and power costs, compounded by the recent global financial meltdown and the near collapse of the financial system that triggered a prolonged (and, for now, apparently indefinite) credit crunch, make these unparalleled, strident and chaotic times for businesses. Pressured by business decision-makers who are under a heightened level of anxiety, enterprise IT is now confronting a transformative dilemma: whether to preserve the status quo or to re-think IT.

On one hand, the current global recessionary down cycle is a particularly powerful (albeit rooted in fear) and instinctive deterrent to challenging the status quo. For risk-averse organizations, it is only understandable why the status quo, fundamental flaws notwithstanding, may trump disruptive change during these challenging times. On the other hand, forward-thinking decision-makers may make the bold but disruptive (radical) choice to view the status quo as the fundamental problem: acknowledge the growing “urgency of now” by resolving to overcome and correct the entrenched shortcomings of enterprise IT.

“You never want a serious crisis to go to waste.” That quote (or its many variations) has been attributed alike to economists and politicians. The same could be said for IT. Indeed, a growing number of IT industry observers believe the profound impact of the on-going economic crisis could offer a rare window of opportunity for organizations to rethink traditional capital-intensive, command-control, on-premises IT operations and invest in new and more flexible self-service IT delivery/deployment models. Think of this defining moment as what Andy Grove, co-founder of Intel, described as a “strategic inflection point”. He was referring to the point when the fundamentals of a business are about to change and “that change can mean an opportunity to rise to new heights.” Nonetheless, these will be hard decisions because the options are stark: either counter-intuitively invest in a down cycle by focusing on a more sustainable but disruptive trajectory, or hunker down and risk an irreversibly shrinking business.

As one considers how to address the challenges of today’s enterprise IT, perhaps the following two observations should be taken into account. First, despite the quantum leap in technology advancements, the basic design and delivery models of existing IT applications and services are generally variations of traditionally insular, back-office automation business tools. Second, the organizational structure and business models of most companies are deeply rooted in models of yesteryear, in many instances dating back to the Industrial Revolution. In theory, adhering to the traditional organizational model of top-down command-control can maximize predictability, efficiency and order. Heretofore, this has been the modus operandi for most organizations, which Umair Haque succinctly characterized as “industrial-era companies that make industrial-era stuff — and play by industrial-era rules.” In today’s exponential times, however, the velocity of change and the rapidly growing need to interconnect with other organizations and automate value chains inevitably lead to an increase in uncertainty and disorder. Strategically, forward-thinking organizations should consider seeking alternative models to address the interdependent and shifting new world order.

In their book “Presence: Human Purpose and the Field of the Future”, authors Peter Senge, Otto Scharmer, Joseph Jaworski and Betty Sue Flowers observe that many of the practices of the Industrial Age appear to be largely unaffected by the changing reality of today’s society and continue to expand in today’s business organizations. They conclude with this advice: “As long as our thinking is governed by industrial ‘machine age’ metaphors such as control, predictability, and faster is better, we will continue to re-create organizations as we have had – for the last 100 years – despite their increasing disharmony with the world and the science of the 21st century.” Likewise, the traditional top-down command-control modus operandi of enterprise IT today does not adequately reflect, and hence is unlikely to accommodate fully, the transformational shift of business from silo organizations to “all things digital all the time”, hyper-interconnected and hyper-interdependent ecosystems.

Predicting the Year Ahead


Cutter Consortium has published predictions for 2010 by about a dozen of its experts. My own prediction, which examines the crash of 1929, the burst of the “dot-com bubble” in 2000 and the financial collapse in 2008, is actually quite bullish:

I expect 2010 to be the first year of a prolonged golden age. Serious as the various problems we all are wrestling with after the 2008-2009 macro-economic crisis are, they should be viewed as systemic to the way a new generation of revolutionary infrastructure gets assimilated in economy and society.

In addition to the techno-economic view expressed in the Cutter prediction, here are my Agile themes for 2010:

  • Agile moves “downstream” into Release Management.
  • Agile breaks out of Development into IT (and beyond) in the form of Agile Infrastructure and Agile Business Service Management.
  • SOA and Agile start to be linked in enterprise architecture and software/hardware/SaaS organizations.
  • Kanban starts an early adoption cycle similar to the one Scrum experienced in 2006.

Acknowledgements: I am thankful to my colleagues Walter Bodwell, Sebastian Hassinger, Erik Huddleston, Michael Cote and Annie Shum, who influenced my thinking during 2009 and contributed either directly or indirectly to the themes listed above.

Double Wake-up


Computer Associates and Salesforce made the following announcement a couple of weeks ago:

SAN FRANCISCO – Salesforce.com Dreamforce Conference – November 19, 2009 – CA, Inc. (NASDAQ: CA), the leader in Enterprise IT Management, and salesforce.com (NYSE: CRM), the enterprise cloud computing company, today announced they have partnered to deliver agile development management in the cloud on the Force.com platform. Through the alliance, CA and salesforce.com intend to introduce CA Agile Planner on Force.com to help small businesses and enterprises alike accelerate development timelines while gaining control and visibility over 100 percent of their development initiatives. The intended result will be increased innovation and reduced time-to-market.

Following an e-conversation on the subject with colleague and friend Annie Shum, I would characterize this announcement as a Double Wake-up:

  • To the premise of Agile methods; and,
  • To the premise of Cloud Computing

How appropriate it is that Annie has recently posted in this blog on the two threads coming together – Cloud Computing: Agile Deployment for Agile QA Testing.

“How do we move towards an agile procurement or agile development methodology?”


Colleague and friend Annie Shum sent me the following excerpt from U.S. CIO Vivek Kundra’s Friday keynote talk at the University of Maryland’s CIO Forum:

Questioned on whether service-oriented architecture still is an emphasis in a federal cloud computing paradigm, Kundra said SOA “absolutely” still matters. “Look at the Social Security Administration and what it’s done with SOA and local government,” he said. “They can build lightweight applications to interact with databases elsewhere.” That embrace of modern development practices extends beyond just SOA or upgrading programmers’ skills from COBOL. “How do we move towards an agile procurement or agile development methodology?” asked Kundra [highlighted by IG].

The {SOA –> agile procurement –> agile development} connection is intriguing. Obviously, Kundra gets the nature of the next revolution in productivity. My hunch is that Business Service Management, particularly in its Agile BSM flavor, will soon be added to the mix.