The Agile Executive

Making Agile Work


The Urgency of Now – Guest Post by Annie Shum


Failure to learn, failure to anticipate, and failure to adapt are the three generic causes of military disasters. Each of these failures is bad enough on its own; in combination, they can be catastrophic. Germany swiftly defeated and conquered France in 1940 because the French army utterly failed to grasp the nature of future war, to anticipate the probable actions of the German forces, and to react adequately to the German initiative once it unfolded through the Ardennes. The patterns leading to the French catastrophe resemble, in some ways, the ecological meltdowns Jared Diamond describes in Collapse: How Societies Choose to Fail or Succeed.

In this guest post, colleague and friend Annie Shum poses disturbing questions about our willingness and ability as IT professionals to learn, anticipate, and adapt to the imperatives of Cloud Computing. Between shockingly low (15%) server capacity utilization on the one hand and dramatic changes in the needs of the business on the other, companies that continue to use industrial-era IT models are at peril. Annie weaves these and other related threads together and makes a resounding call to action to re-think IT.

It is remarkable that Annie’s analysis of the root causes of a possible meltdown in IT identifies worrisome patterns similar to those the Agile movement has pointed out with respect to arcane methods of software development. The very same core problems that afflict software development manifest themselves in the IT paradigm as well as in the corresponding business design. Painful and wasteful as this repeated manifestation is, it actually creates the opportunity to manage software, IT, and the business in unison. To do so, we need to embrace a data-driven view of the economics of IT, to grasp the true nature of Cloud Computing without the hype that currently surrounds it, and to adapt software development, IT operations, and business design accordingly. As the title of this post states, we need to start carrying out these three tasks now.

Here is Annie:

The Urgency of Now: The Edge of Chaos and A “Strategic Inflection Point” for IT

“It was the worst of times. It may be the best of times.” – IBM

Consider the following table, which lists statistics from the enterprise datacenter index compiled by Peter Mell and Tim Grance of NIST. Overall, the statistics are sobering, perhaps even alarming, and do not bode well for the long-term sustainability of traditional on-premises datacenters. Prudent IT organizations, whether big or small, stalwart or startup, should treat this as a wake-up call. In particular, across the almost twelve million servers in US datacenters today, typical server capacity utilization is only around fifteen percent. Although not explicitly shown in the table, the average utilization of mainframe z/OS servers is typically over eighty percent; however, mainframes account for such a small share of the overall server population that they barely move the overall average, as the back-of-the-envelope sketch after the table illustrates.

Statistic                Enterprise Datacenter Index
11,800,000               Servers in US datacenters
15%                      Typical server capacity utilization
$800,000,000,000/year    Spent purchasing and maintaining enterprise software
80%                      Share of software costs spent on maintenance (the “80-20” ratio)
100x                     Power consumption per square foot compared to an office building
4x                       Increase in server power consumption, 2001 to 2006
2x                       Increase in the number of servers, 2001 to 2006
$21,300,000              Construction cost of a 9,000 sq ft datacenter
$1,000,000/year          Annual cost to power that datacenter
1.5%                     Share of national power generation
50%                      Potential power reduction from green technologies
2%                       Share of global carbon emissions
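
As a rough illustration of why the eighty-percent z/OS utilization barely lifts the fifteen-percent overall figure, here is a minimal back-of-the-envelope sketch in Python. The 15% and 80% utilization figures come from the discussion above; the split of the fleet between distributed servers and mainframes is a hypothetical assumption, chosen only to show the weighted-average arithmetic.

    # Weighted average of server capacity utilization across a hypothetical fleet mix.
    # The 15% (distributed) and 80% (mainframe z/OS) utilization figures are cited above;
    # the server counts are illustrative assumptions, not NIST data.
    fleet = {
        "distributed servers": (11_500_000, 0.15),
        "mainframe z/OS":      (300_000, 0.80),
    }

    total_servers = sum(count for count, _ in fleet.values())
    weighted_utilization = sum(count * util for count, util in fleet.values()) / total_servers

    print(f"Overall average utilization: {weighted_utilization:.1%}")
    # With mainframes at roughly 2.5% of this hypothetical fleet, the overall average
    # comes out to about 16.7%: the 80% z/OS figure barely moves the needle.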

Over the years, organizations have accepted such skewed levels of server inefficiency and escalating IT infrastructure maintenance costs as the norm. Even as organizations continue to express concerns, many seem tacitly resigned to the status quo, treating it as akin to what Bob Evans of InformationWeek described as “insurmountable laws of physics.” Looking ahead, however, the status quo may no longer be a viable option for most organizations. Soaring electricity costs, compounded by the recent global financial meltdown and the prolonged (and, for now, apparently indefinite) credit crunch it triggered, have made these unparalleled, strident, and chaotic times for businesses. Pressured by business decision-makers who are under a heightened level of anxiety, enterprise IT now confronts a transformative dilemma: preserve the status quo or re-think IT.

On one hand, the current global recessionary down cycle is a powerful, instinctive (albeit fear-rooted) deterrent to challenging the status quo. For risk-averse organizations, it is understandable why the status quo, fundamental flaws notwithstanding, may trump disruptive change during these challenging times. On the other hand, forward-thinking decision-makers may make the bold, even radical, choice to view the status quo itself as the fundamental problem: to acknowledge the growing “urgency of now” by resolving to overcome and correct the entrenched shortcomings of enterprise IT.

“You never want a serious crisis to go to waste.” That quote (or one of its many variations) has been attributed to economists and politicians alike. The same could be said for IT. Indeed, a growing number of IT industry observers believe the profound impact of the ongoing economic crisis could offer a rare window of opportunity for organizations to rethink traditional capital-intensive, command-and-control, on-premises IT operations and to invest in new, more flexible self-service IT delivery and deployment models. Think of this defining moment as what Andy Grove, co-founder of Intel, described as a “strategic inflection point”: the point at which the fundamentals of a business are about to change and “that change can mean an opportunity to rise to new heights.” Nonetheless, the decisions will be hard because the options are stark: either counter-intuitively invest during a down cycle in a more sustainable but disruptive trajectory, or hunker down and risk an irreversible decline of the business.

As one considers how to address the challenges of today’s enterprise IT, two observations should be taken into account. First, despite quantum leaps in technology, the basic design and delivery models of most existing IT applications and services are still variations on traditionally insular, back-office automation tools. Second, the organizational structures and business models of most companies are deeply rooted in models of yesteryear, in many instances dating back to the Industrial Revolution. In theory, adhering to the traditional organizational model of top-down command-and-control can maximize predictability, efficiency, and order. Heretofore, this has been the modus operandi for most organizations, which Umair Haque succinctly characterized as “industrial-era companies that make industrial-era stuff — and play by industrial-era rules.” In today’s exponential times, however, the velocity of change and the rapidly growing need to interconnect with other organizations and to automate value chains inevitably lead to greater uncertainty and disorder. Strategically, forward-thinking organizations should consider seeking alternative models to address the interdependent and shifting new world order.

In their book “Presence: Human Purpose and the Field of the Future”, authors Peter Senge, C. Otto Scharmer, Joseph Jaworski, and Betty Sue Flowers observe that many practices of the Industrial Age appear largely unaffected by the changing reality of today’s society and continue to expand in today’s business organizations. They conclude with this advice: “As long as our thinking is governed by industrial ‘machine age’ metaphors such as control, predictability, and faster is better, we will continue to re-create organizations as we have had – for the last 100 years – despite their increasing disharmony with the world and the science of the 21st century.” Likewise, the traditional top-down command-and-control modus operandi of enterprise IT today does not adequately reflect, and hence is unlikely to fully accommodate, the transformational shift of business from silo organizations to “all things digital all the time”, hyper-interconnected and hyper-interdependent ecosystems.

Prior to Sprint Zero: A Note on Jakob Nielsen’s “Agile User Experience Projects”


Dr. Jakob Nielsen published the results of a follow-on study to his 2008 report Agile Development Methods and Usability.  The bottom line from the 2009 study (entitled Agile User Experience Projects) is as follows:

The two main recommendations for ensuring good usability in Agile projects remain the same as in our original research:

  • Separate design and development, and have the user interface team progress one step ahead of the implementation team. That way, when it comes time to build something, it’s already been designed and tested. (And yes, you can do both in a week or two by using paper prototypes and discount user testing.)
  • Maintain a coherent vision of the user interface architecture. Create the initial vision during a “sprint zero” period — before any implementation has started — and maintain it through annual (or semi-annual) design vision sprints. You can’t just design individual features; they have to fit together into a coherent whole — a whole that must be designed as well. Bottom-up user interface design equals a confused total user experience (the Linux syndrome).
I would like to highlight one implicit sub-aspect of Dr. Nielsen’s good counsel to maintain a coherent vision of the user interface architecture:

  • Ensure coherence with the underlying application paradigm

To illustrate the point, think of a Business Service Management application. You might monitor any number of servers, routers, databases, and applications in order to ensure that a service satisfies its Service Level Agreement. However, the user interface architecture should have the service as its fundamental concept. The architecture should certainly enable zooming in on any component of the service. But the status of any such component (or sub-component) is merely a means to an end: reflecting the status of the service and, where appropriate, initiating action to fix it. Forming a service piecemeal from constituent elements like those mentioned above (servers, routers, databases, applications, and so on) is no substitute for “service orientation” of the user interface.
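
To make that service orientation concrete, here is a minimal sketch in Python, assuming a hypothetical data model; the class, field, and status names are illustrative and are not drawn from any particular BSM product. The service is the top-level object, and component health matters only insofar as it rolls up into the health of the service and its SLA.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Component:
        """An infrastructure element: server, router, database, application, etc."""
        name: str
        kind: str                # e.g. "server", "router", "database", "application"
        healthy: bool = True

    @dataclass
    class BusinessService:
        """The service, not its components, is the fundamental concept."""
        name: str
        sla_target: float        # e.g. 0.999 availability target
        components: List[Component] = field(default_factory=list)

        def status(self) -> str:
            # Component status is only a means to an end: reporting the health
            # of the service and pointing at what needs to be fixed.
            impaired = [c for c in self.components if not c.healthy]
            if not impaired:
                return f"{self.name}: OK (SLA target {self.sla_target:.1%})"
            names = ", ".join(f"{c.name} ({c.kind})" for c in impaired)
            return f"{self.name}: AT RISK, impaired components: {names}"

    # Usage: the operator asks about the service first, then zooms in on components.
    billing = BusinessService("Online Billing", sla_target=0.999, components=[
        Component("db-01", "database"),
        Component("rtr-07", "router", healthy=False),
    ])
    print(billing.status())

A component-first interface inverts this roll-up: the operator browses servers and routers individually and is left to infer the health of the service on their own.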

The reason for my strong emphasis on the service as the most fundamental user interface concept is nicely captured in the article “How to Spell BSM” by BSM Review’s Tom Bishop:

Most businesses today are so dependent on IT that, if an IT organization does not understand how the business depends on its services, or does not manage those services with that business perspective in mind, they are dooming the business to slow, steady death….

Dr. Nielsen’s recommendation to conceive the initial user interface architecture prior to beginning any implementation work is very consistent with this imperative need in BSM to manage services from a business perspective. I would go one step further and contend that whenever the underlying paradigm changes as dramatically as the shift from servers to services in the BSM example above, a demonstration of the core concept(s) of the user interface might need to precede the “sprint zero” period. In the context of the overall planning and budgeting process that governs the Agile process, such a demonstration could actually be a prerequisite to launching “sprint zero.”

If you consider this “prior to sprint zero” approach a bit heavy-handed, I would offer a simple test to assess its reasonableness. Play with a number of IT Service Management (ITSM) products picked at random. Once you have done so, compare the number of products that clearly have services at their core with the number that integrated services into their user interface as an afterthought.