The Agile Executive

Making Agile Work

Posts Tagged ‘Carlota Perez’

I Never Even Spoke with Anyone from the Occupy Wall Street Movement


Full disclosure part I: a month ago, while on my way to a business meeting, I saw a few OWS folks “camping” in front of the Federal Reserve Bank on Market Street in San Francisco. I did not even have time to take a picture with my iPhone, let alone chat with someone. This is the closest I ever got to touching, or being touched by, anyone in the movement.

So, I do not even know whether folks in the movement will agree or disagree with my simple interpretation of their overarching message:

  1. Our financial system is badly broken.
  2. Rather than letting it continue with business as usual, the Federal Reserve Bank should take over the banking system.
  3. Many of the services provided today by banks can be provided (once the Fed takes over) through devices such as the iPhone, just as is already done in rural India.
  4. Substitutes could and should be developed for the services that can’t be carried out through iPhones or similar devices.
  5. Developing such services is no different from developing alternative sources of energy or health care services.
  6. Once the government puts in the appropriate policies (to encourage development of such services), a ton of entrepreneurs will jump at the opportunity.

I have no doubt that there are a zillion details that I am not aware of that need to be figured out. I am a software engineer, not a banker.

But I believe that, at a very high level, bullets 1-6 above capture some aspects of the message folks in the Occupy Wall Street movement are trying to get across. Hence, I am really surprised at the question I see cited so often: “But what do they really want?!” IMHO they simply want a major reform of the financial system. The details of how to do so are better left to experts.

Full disclosure part II: I have to admit my blood boiled today when I saw the videos from UC Davis. As I said in a tweet an hour or so ago, it is starting to feel like the brutality inflicted on the Bonus Army in 1932. Tim O’Reilly goes one step further in his post, in which he brings up the loaded topic of the Banality of Evil.

So, I might be writing this post with some strong emotions. But, I think the thesis I pose is directionally correct.

You don’t need to take my word for it. Just read Technological Revolutions and Financial Capital by Carlota Perez.

Written by israelgat

November 19, 2011 at 9:53 pm

The Real Cost of One Trillion Dollars in IT Debt: Part II – The Performance Paradox


Some of the business ramifications of the $1 trillion in IT debt were explored in the first post of this two-part analysis. This second post focuses on the “an ounce of prevention is worth a pound of cure” aspects of IT debt. In particular, it proposes an explanation of why prevention was often neglected in the US over the past decade, and very possibly longer. This explanation is not meant to dwell on the past. Rather, it studies the patterns of the past in order to provide guidance for what you could and should do in the future to rein in technical debt.

The prevention vis-a-vis cure trade-off in software was illustrated by colleague and friend Jim Highsmith in the following figure:

Figure 1: The Technical Debt Curve

As Jim astutely points out, “once on far right of curve all choices are hard.” My experience, as well as that of various Cutter colleagues, has shown that it is actually very hard. The reason is simple: on the far right, the software controls you more than you control it. The manifestations of technical debt [1], in the form of pressing customer problems in the production environment, force you into a largely reactive mode of operation. This reactive mode of operation is prone to a high error-injection rate – you introduce new bugs while you fix old ones. Consequently, progress is agonizingly slow and painful. It is often characterized by “never-ending” testing periods.

In Measure and Manage Your IT Debt, Gartner’s Andrew Kyte put his finger on the mechanics that lead to the accumulation of technical debt – “when budgets are tight, maintenance gets cut.” While I do not doubt Andrew’s observation, it does not answer a deeper question: why would maintenance get cut in the face of the consequences depicted in Figure 1? Most CFOs and CEOs I know would be quite alarmed by Figure 1. They do not need to be experts in object-oriented programming in order to take steps to mitigate the risks of slipping to the far right of the curve.

I believe the deeper answer to the question “why would maintenance get cut in the face of the consequences depicted in Figure 1?” was given by John Seely Brown in his 2009 presentation The Big Shift: The Mutual Decoupling of Two Sets of Disruptions – One in Business and One in IT. Brown points out five alarming facts in his presentation:

  1. The return on assets (ROA) for U.S. firms has steadily fallen to almost one-quarter of 1965 levels.
  2. Similarly, the ROA performance gap between corporate winners and losers has increased over time, with the “winners” barely maintaining previous performance levels while the losers experience rapid performance deterioration.
  3. U.S. competitive intensity has more than doubled during that same time [i.e. the US has become twice as competitive – IG].
  4. The average lifetime of S&P 500 companies [has declined steadily over this period].
  5. However, in those same 40 years, labor productivity has doubled – largely due to advances in technology and business innovation.

Discussion of the full-fledged analysis that Brown derives from these five facts is beyond the scope of this blog post [2]. However, one of the phenomena he highlights – “The performance paradox: ROA has dropped in the face of increasing labor productivity” – is IMHO at the root of the staggering IT debt we are staring at.

Put yourself in the shoes of your CFO or your CEO, weighing the five facts highlighted by Brown in the context of Highsmith’s technical debt curve. Unless you are one of the precious few winner companies, the only viable financial strategy you can follow is a margin strategy. You are very competitive (#3 above). You have already ridden the productivity curve (#5 above). However, growth is not demonstrable, or not economically feasible given the investment it takes (#1 and #2 above). Needless to say, just thinking about dropping out of the S&P 500 index sends cold sweat down your spine. The only way left to you to satisfy the quarterly expectations of Wall Street is to cut, cut and cut again anything that does not immediately contribute to your cashflow. You cut ongoing refactoring of code even if your CTO and CIO have explained the technical debt curve to you in no uncertain terms. You are not happy to do so, but you are willing to pay the price down the road. You are basically following a “survive to fight another day” strategy.

If you accept this explanation for the level of debt we are staring at, the core issue with respect to IT debt at the individual company level [3] is how “patient” (or “impatient”) investment capital is. Studies by Carlota Perez seem to indicate we are entering a phase of the techno-economic cycle in which investment capital will shift from financial speculation toward (the more “patient”) production capital. While this shift is starting to happen, you have the opportunity to apply an “ounce of prevention is worth a pound of cure” strategy to the new code you will be developing.

My recommendation would be to combine technical debt measurements with software process change. The ability to measure technical debt through code analysis is a necessary but not sufficient condition for changing deep-rooted patterns. Once you institute a process policy like “stop the line whenever the level of technical debt rises,” you combine the “necessary” with the “sufficient” by tying the measurement to human behavior. A possible way to do so through a modified Agile/Scrum process is illustrated in Figure 2:

Figure 2: Process Control Model for Controlling Technical Debt

As you can see in Figure 2, you stop the line and convene an event-driven Agile meeting whenever the technical debt of a certain build exceeds that of the previous build. If ‘stopping the line’ with every such build is “too much of a good thing” for your environment, you can adopt statistical process control methods to gauge when the line should be stopped. (See Using 3σ Control Limits in Software Engineering for a discussion of the settings appropriate for your environment.)
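To make this concrete, here is a minimal sketch of such a gate, assuming your code-analysis tool can emit a single technical-debt figure per build (say, estimated remediation cost in dollars). The function name and the debt figures are illustrative, not any particular vendor’s API; the sketch stops the line when a build exceeds the 3σ upper control limit computed over recent builds, and falls back to the simple “stop whenever debt rises” policy when the history is too short:

```python
import statistics

def should_stop_the_line(debt_history, current_debt, sigma=3.0):
    """Decide whether a build's technical debt warrants stopping the line.

    debt_history: technical-debt figures (e.g. remediation cost in $) for
                  recent builds, as reported by your code-analysis tool.
    current_debt: the figure for the build that just completed.
    """
    if len(debt_history) < 2:
        # Not enough history for control limits; fall back to the simple
        # policy of stopping whenever technical debt rises at all.
        return bool(debt_history) and current_debt > debt_history[-1]

    mean = statistics.mean(debt_history)
    stdev = statistics.stdev(debt_history)
    upper_control_limit = mean + sigma * stdev
    return current_debt > upper_control_limit

# Example: the last ten builds hovered around $500K of technical debt;
# the latest build jumps to $530K and trips the control limit.
history = [500_000, 502_000, 498_000, 505_000, 499_000,
           501_000, 503_000, 497_000, 504_000, 500_000]

if should_stop_the_line(history, current_debt=530_000):
    print("Stop the line: convene the event-driven Agile meeting.")
else:
    print("Technical debt within control limits: continue.")
```

Wired into the build pipeline this way, the ‘stop the line’ decision becomes mechanical rather than a judgment call made under delivery pressure, which is precisely how the measurement gets tied to human behavior.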

An absolutely critical question this analysis does not cover is “But how do we pay back our $1 trillion debt?!” I will address this most important question in a forthcoming post which draws upon the threads of this post plus those of the preceding Part I.

Footnotes:

[1] Kyte/Gartner define IT Debt as “the costs for bringing all the elements [i.e. business applications] in the [IT] portfolio up to a reasonable standard of engineering integrity, or replace them.” In essence, IT Debt differs from the definition of Technical Debt used in The Agile Executive in that it accounts for the possible costs associated with replacing an application. For example, the technical debt calculated through code analysis of a certain application might amount to $500K. In contrast, the cost of replacement might be $250K, $1M or some other figure that is not necessarily related to intrinsic quality defects in the current code base.

[2] See Hagel, Brown and Davison: The Power of Pull: How Small Moves, Smartly Made, Can Set Big Things in Motion.

[3] As distinct from the core issue at the national level.

Harnessing Economies of Scale in Cloud Computing to Realize a Greener Computing Option


Economies of Scale have been much discussed in The Agile Executive since the recent OpsCamp in Austin, TX. The significant savings on system administration costs in very large data centers have been called out as a major advantage of Internet-scale Clouds. Unlike various short-lived advantages, the benefits to the Cloud operator, and to the Cloud user when the savings are passed on to him/her, are sustainable.

In this guest post, colleague and friend Annie Shum analyzes the various sources of waste in operations in traditional data centers. Like an Agilist with Lean inclinations who confronts an inefficient Waterfall process, Annie explains how economies of scale apply to the various kinds of waste that are prevalent in today’s small and medium data centers. Furthermore, she connects the dots that lead toward a Green IT option.

Here is Annie:

Harnessing Economies of Scale in Cloud Computing to Realize a Greener Computing Option

Scale Matters: “Over time, however, competitive advantage within categories shifts inexorably toward volume operations architecture.” – Geoffrey Moore, “Dealing with Darwin”

It is a truism that today’s datacenters are systemically inefficient. This is not intended as an indictment of all conventional datacenters. Nor does it imply that today’s datacenters cannot be made more efficient (incrementally) through right sizing and other initiatives, notably consolidation by deploying virtualization technologies and governance by enforcing energy conservation/recycling policies. There are a myriad of inefficiencies, however, that are prevalent in datacenters today.

Many industry observers lament the “staggering complexity” that permeates on-premises datacenters. Over time, most, if not all, enterprise IT datacenters have become amalgamations of disparate heterogeneous resources. Generally, they can be described as incohesive, perhaps even haphazard, accumulations. The datacenter components and configurations often reflect the intersections of organizational politics (LOB reporting structures leading to highly customized/organizational asset acquisitions and configurations), business needs of the moment (shifting corporate strategies and changing business imperatives to gain competitive edge or meet regulatory compliances) and technology limitations (commercial tools available in the marketplace). It should come as no surprise that human interactions and errors are considered a major contributor to the inefficiencies of datacenters: IBM reported that human errors account for seventy percent of the datacenter problems.

The challenge of maximizing energy efficiency begins fundamentally with the historical capital-intensive ownership model for computing assets, under which each organization operates its own datacenter and provides “24×7 availability” to its own users. The enterprise IT staff has been required to support unpredictable future growth, accommodate situational demands and unscheduled but deadline-critical events, meet performance levels within SLAs and comply with regulatory and auditing requirements. Hence, datacenters generally are over-configured and over-provisioned. In addition to highly skewed under-utilization of distributed platform servers, ninety percent of corporate datacenters have excess cooling capacity. Worst of all, according to IBM, about seventy-two percent of cooling bypasses the computing equipment entirely. Further compounding these problems for a typical enterprise datacenter is the lack of transparency and control over energy consumption, due to inadequate and often inaccurate instrumentation for quantifying energy consumption and waste.

The economics of Cloud Computing can offer a compelling option for more efficient IT: by lowering power consumption for individual organizations and by improving the efficiency of a large number of discrete datacenters. Although the electricity consumption of Cloud Computing is projected to be one to two percent of today’s global electricity use, Cloud service providers can still cultivate sustainable Green I.T. effectively at lower costs by leveraging state-of-the-art, super energy-efficient massive datacenters, proximity to power generation (thereby reducing transmission costs) and, above all, enormous economies of scale. To better understand how Cloud Computing can offer greener computing in the Cloud, and how it will help moderate power consumption by datacenters and rein in runaway costs, a good starting place is James Hamilton’s September 2008 study on “Internet-Scale Service Efficiency,” as summarized in the table below.

| Resource       | Cost in Medium DC   | Cost in Very Large DC | Ratio |
|----------------|---------------------|-----------------------|-------|
| Network        | $95 / Mbps / month  | $13 / Mbps / month    | 7.1x  |
| Storage        | $2.20 / GB / month  | $0.40 / GB / month    | 5.7x  |
| Administration | ≈140 servers/admin  | >1000 servers/admin   | 7.1x  |

Table 1: Internet-Scale Service Efficiency [Source: James Hamilton]

This study concludes that hosted services by Cloud providers with super-large datacenters (at least tens of thousands of servers) can achieve economies of scale of five to seven times over medium-scale deployments (thousands of servers). The significant cost savings are driven primarily by scale. Other key factors include location (low-cost real estate and electricity rates, abundant water supply and readily available fiber-optic connectivity), proximity to electricity and power generators, load diversity, and virtualization technologies.
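To see what multipliers of this magnitude mean in dollar terms, here is a quick back-of-the-envelope sketch. The unit costs are the ones in Table 1; the workload figures and the fully loaded cost per administrator are hypothetical numbers chosen purely for illustration:

```python
# Back-of-the-envelope comparison using the unit costs in Table 1.
# The workload below (bandwidth, storage, server count) and the
# per-administrator cost are hypothetical, for illustration only.

workload = {"mbps": 1_000, "gb": 50_000, "servers": 1_400}

medium_dc = {"network": 95.0, "storage": 2.20, "servers_per_admin": 140}
very_large_dc = {"network": 13.0, "storage": 0.40, "servers_per_admin": 1_000}
ADMIN_COST_PER_MONTH = 10_000  # assumed fully loaded cost per administrator

def monthly_cost(dc):
    network = workload["mbps"] * dc["network"]    # $ / Mbps / month
    storage = workload["gb"] * dc["storage"]      # $ / GB / month
    admins = workload["servers"] / dc["servers_per_admin"]
    return network + storage + admins * ADMIN_COST_PER_MONTH

medium, large = monthly_cost(medium_dc), monthly_cost(very_large_dc)
print(f"Medium DC:     ${medium:,.0f}/month")
print(f"Very large DC: ${large:,.0f}/month")
print(f"Advantage:     {medium / large:.1f}x")
```

Even with these made-up workload numbers, the medium datacenter comes out roughly 6.5 times more expensive per month, squarely within the 5.7x-7.1x range Hamilton reports.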

Will this mark the beginning of the end for traditional on-premises datacenters? Can enterprise IT continue to justify new business cases for expanding today’s non-renewable-energy-powered datacenters? According to the McKinsey article, the cost to launch a large enterprise datacenter has risen sharply from $150M to over $500M over the past five years. Facility operating costs are also increasing at about twenty percent per year. How long will the status quo last for enterprise IT, considering the recent trend among Cloud service providers? Major players such as Google and Microsoft, as well as the U.S. government itself, have invested in or are planning ultra energy-efficient, mega-size datacenters (also known as “container hotels”) with massive commoditized containerization and proximity both to power sources and to less expensive power rates. Bottom line: will the tide turn if the economics (radical cost savings) due to enormous economies of scale become too significant to ignore?

Despite the potential for significant cost savings, it is premature to declare the demise of traditional IT or the end of enterprise datacenters. After all, the rationale for today’s enterprise IT extends well beyond simplistic bottom-line economics – at least for now. To most industry observers, enterprise datacenters are unlikely to disappear, although the traditional roles of enterprise IT will be changing. A likely scenario may involve redistributing IT personnel from low-level system operational tasks to higher-level functions involving governance, energy management, security and business processes. Such change not only will become more apparent but will likely be precipitated by the rise of hybrid Clouds and the growing interconnection linking SOA, BPM and social computing. Another likely scenario is the rise of the mega datacenters or “container hotels” for Cloud Utility Computing providers. Although the global economic outlook will undoubtedly play a key role in shaping the development plans/timelines of the mega datacenters, they are here to stay. Case in point: Intel estimates that by 2012 it will design and ship about a quarter of the server chips it sells to such mega-datacenters.


The Hole in the Soul and the Legitimacy of Capitalism


In a January 13, 2010 post entitled The Hole in the Soul of Business, Gary Hamel offers the following perspective on success, happiness and business:

I believe that long-lasting success, both personal and corporate, stems from an allegiance to the sublime and the majestic… Viktor Frankl, the Austrian neurologist, held a similar view, which he expressed forcefully in “Man’s Search for Meaning:” “For success, like happiness, cannot be pursued; it must ensue, and it only does so as the unintended consequence of one’s personal dedication to a cause greater than oneself . . ..”
 
Which brings me back to my worry. Given all this, why is the language of business so sterile, so uninspiring and so relentlessly banal? Is it because business is the province of engineers and economists rather than artists and theologians? Is it because the emphasis on rationality and pragmatism squashes idealism? I’m not sure. But I know this—customers, investors, taxpayers and policymakers believe there’s a hole in the soul of business. The only way for managers to change this fact, and regain the moral high ground, is to embrace what Socrates called the good, the just and the beautiful.

So, dear reader, a couple of questions for you: Why do you believe the language of beauty, love, justice and service is so notably absent in the corporate realm? And what would you do to remedy that fact?

Hamel’s call for action is echoed in the analysis of the double bubble at the turn of the century by Carlota Perez:

The current generation of political and business leaders has to face the task of reconstituting finance and bringing the world out of recession. It is crucial that they widen their lens and include in their focus a much greater and loftier task: bringing about the structural shift within nations and in the world economy. Civil society through its many new organisations and communications networks is likely to have a much greater role to play in the outcome on this occasion. Creating favourable conditions for a sustainable global knowledge society is a task waiting to be realized. When – or if – it is done we should no longer measure growth and prosperity by stock market indices but by real GDP, employment and well being, and by the rate of global growth and reduction of poverty (and violence) across and within countries.

As if these words were not arousing enough, Perez adds one final piercing observation:

 The legitimacy of capitalism rests upon its capacity to turn individual quest for profit into collective benefit.

IMHO Hamel and Perez identified the very same phenomenon (“hole in the soul”). The only difference is the level at which they discuss it: Hamel makes his observation at the business/corporate level; Perez at the socio-economic level.

Are We at a Point of Saturation?


In a post entitled Enterprise Software Sale as Corporate Pathology: The World’s Greatest Dog and Pony Show, colleague James Governor recommends the following practices for coping with aggressive enterprise software sales tactics:

In order to better fight their corner enterprises need to be smarter and more aggressive themselves. They should:

1. Pay more attention to the people that actually do the work. Don’t buy software that your developers have no intention of using. Make sure architects are listening to developers.

2. Consider offload options:

  • application server – if you’re running a Java workload does it really require the quality of service that a WebLogic offers? If not why not look at Glassfish, say, or Apache Tomcat.
  • database – not all data are equal. That being the case put data in the most appropriate place. If it just needs to be thrown in a bucket of bits then consider MySQL or a file system rather than your “enterprise standard relational database”
  • cloud – its other [over? IG] there. take advantage of it, especially for non transactional workloads.

Use open source and cloud as personal trainers for proprietary software. Use alternatives to snap back if the salespeople try and bullshit you.

Examining the issue James brings up from an industry perspective, the question of possible saturation jumps to mind. At this point in time, do enterprise software vendors develop more software than the demand profile warrants? Such a situation, for example, manifested itself in telecommunications during the late 1990s and early 2000s, when only 1-2% of the fibre cable capacity in the US and EMEA had been turned on. The resulting losses were catastrophic.

If we are indeed at a saturation point for enterprise software, a strategy question and a policy question present themselves:

  1. For enterprise software vendors: what is the strategic course for turning around technological maturation and market saturation? Is a “pedal to the metal” strategy still appropriate for enterprise software vendors?
  2. For policy makers: is enterprise software an industry whose growth should be stimulated? Or would another sector of software prove superior as a target for stimulation? For example, embedded software has the potential to be used in more and more products. Moreover, it has the potential to become a larger component of the products in which it is embedded.

The two questions are related. Investment choices made by enterprise software vendors will determine how dynamic the industry becomes. A possible public policy decision to stimulate growth in enterprise software makes sense only if the industry demonstrates strong generative potential: it should be able to create new businesses around enterprise software, and it must trigger growth in the various industries where enterprise software is used. Absent such effects, why stimulate growth in enterprise software?

As pointed out by Perez, the public policy decision needs to take income distribution into account:

If you want to sell basic foods, your potential market grows with number of low-income families; if you sell luxury cars… you look to the upper end of the spectrum. So the rhythm of potential growth is modulated by the qualitative dynamics of effective demand. Therefore, even if the quantity of money out there equals the value of production, if it is not in the right hands, it will not guarantee that markets will clear.

It was pointed out in Enterprise Software Innovator’s Dilemma that “good enough” Open Source Software is inevitably becoming good enough. If you accept this premise, an attractive policy decision could be to allocate public funds to making Open Source Software enterprise ready. Once it is (enterprise ready), the stimulative effect of low-cost enterprise software could be huge. For example, it might enable SMBs to offer services that currently can only be afforded (and provided) by Fortune 500 companies.

Marauder Strategy for Agile Companies


Colleague Annie Shum sent me the URL to a recent post by Clayton Christensen in The Huffington Post. In this post Christensen characterizes “disruption” in the following manner:

Disruption is the causal mechanism behind the “creative destruction” that [economist Joseph] Schumpeter saw so pervasively at work in capitalist economies. [Links added by IG]

Christensen’s post is largely about the automobile industry. It, however, ties nicely to an email exchange Jeff Sutherland and I had about Agile as a disruption inside the company vis-a-vis its intentional use as a disruptive methodology in the market. To quote Jeff:

We are starting to see organizations like yours that can use Scrum to disrupt a market. There is a tremendous amount of low hanging fruit out there. Dysfunctional companies that can’t deliver. I’ve been recommending a “Marauder” strategy to the venture group. Find a company who has a large amount of resources. Set them loose like pirates on the ocean and they seek out slow ships and take them out.

Carlota Perez, who has often been cited in this blog (click here, here and here), is a disciple of Schumpeter. I really like the way the “dots” are connected: Schumpeter –> Perez –> Christensen –> Schumpeter. Their theories of disruption and creative destruction express themselves nicely in the business design proposed by Jeff.

A Note on the Macro-Economic Crisis


Re-reading Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages by Carlota Perez, I was struck by the following paragraph:

So, once again, the amount of money available to financial capital has grown larger than the set it recognizes as good opportunities. Since it has come to consider normal the huge gains from the successful new industries, it expects to get them from each and every investment and will not be satisfied with less. So rather than go back to funding unsophisticated production, it develops sophisticated instruments to make money out of money. [Italicized and highlighted by IG]

Perez published the book in 2002. Her words of wisdom seem even more appropriate today than they might have been then.

(Click here and here for related discussions of Agile in the context of the current macro-economic crisis.)

Written by israelgat

April 13, 2009 at 12:20 pm

The Language, The Issues


Colleague Clarke Ching asked me about the language I use in interacting with executives on Agile topics. To quote Clarke:

Obviously the language one uses with a developer is quite different from the language one uses with a program manager. Likewise, the language you [Israel] use in discussing Agile with executives must be quite different. What language do you use? In particular, what language do you use amidst the current economic crisis?

What language do you use amidst the current economic crisis?

I view the economic crisis as part of life. Having grown up in Israel, I still clearly remember:

  • The 1956, 1967 and 1973 wars;
  • Various economic crises;
  • Any number of measures taken by the government to cope with financial crises. For example, devaluing the currency on many occasions.

We all survived, and the country moved forward in leaps and bounds. We simply learned to accept dramatic changes as inevitable, and to continue doing what we believed in. We, of course, changed tactical plans in response to disruptions such as a change in the value of the currency, but continued to do the right things strategically. Such turbulence, and possibly worse, has been characteristic of much of the world for many years now. Just think of Eastern Europe, Latin America or Africa.

Fast forwarding to 2009: I try to put the economic crisis in perspective. I have discussed the techno-economic cycle along the lines articulated by Carlota Perez in her book Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages. In my recent post Why Agile Matters, I stated:

  • The fifth techno-economic cycle started in 1971 with the introduction of the Microprocessor;
  • This cycle has been characterized by software going hand-in-hand with miniaturized hardware. We are witnessing pervasive software on unprecedented scale;
  • Furthermore, software is becoming a bigger piece in the contents of just about any product. For example, there are about 1 million lines of code in a vanilla cell phone;
  • Agile software development significantly reduces not “only” the cost of the software, but the cost of any product containing software;
  • And, Agile software enables us to respond faster and more flexibly to changes – in the software, in the business process that is codified by the software, in the product in which the software is embedded.

In short, I speak about software as an important factor in the bigger scheme of things – the techno-economic cycle.

What language do you use in your conversation with executives?

I describe the benefits of Agile in the business context. For example, when I meet an executive of a major financial institution, I discuss with him/her the issues of compliance and risk his/her company is facing. For a global financial institution, I typically discuss the critical needs during the transfer of trade from London to Wall Street. A lot of things need to work seamlessly in order to ensure a smooth transition. If things do not work well within the short transition window, the implications are dire:

  • Unacceptable risks. Billions of $$ could be lost if a global financial company cannot start trading on time on Wall Street;
  • Severe compliance issues. The executive with whom I speak and his/her company could get in serious regulatory trouble due to a failure to reconcile trades and keep the required audit trail.

The ties of these business imperatives to Agile are straightforward:

  • Higher quality code reduces the risk of a ‘glitch’ in the transition of trade from London to Wall Street;
  • Should a financial institution suspect a glitch might happen, Agile usually enables Application Development and Operations to fix the code faster than traditional methods;
  • And, using virtual appliance technology enables deploying the fix in minutes instead of months.

I usually cite the examples of Flickr and IMVU to demonstrate how fast one can deploy software nowadays. I make it crystal clear that I do not expect a global financial institution today to be able to deploy every thirty minutes or every nine minutes as Flickr and IMVU do. However, I stress that the software industry is clearly heading toward a much shorter cycle between concept or problem identification and deployment. I point out that he/she has an opportunity to be ahead of the power curve, to gain competitive advantage in the market through superior velocity in both development and deployment. Obviously, a faster introduction of a new hedging algorithm could make a big difference for a financial institution.

What do I typically hear from the executive in such a conversation?

The responses I usually get tend to reflect the alignment (or lack thereof) between the financial strategy and the operational strategy a company follows: