The Agile Executive

Making Agile Work


Agile Enterprise Forum 2011


Charles Handy, Chris Potts, Don Reinertsen, John Seddon and I are the featured speakers at the Agile Enterprise Forum 2011. The Forum will be held on March 10, 2011 at Chandos House at the Royal Society of Medicine, London. Attendance is limited to 30 CIOs.

The theme for the forum is Agility for Complex Organizations. The overarching message is nicely captured in the following summary by James Yoxall:

There are two strands of interest for a CIO: strategy and delivery.  The Agile/Lean message can be summarised as “merging” the two, so that delivery can start before strategy is complete, and delivery informs strategy through feedback loops. This leads to a faster/earlier delivery and a better end result.

My own workshop – Agile Governance: Tying Delivery to Value – builds on this message by describing a specific strategic initiative which is not achievable without the use of advanced delivery techniques. Here is the abstract for my workshop:

This workshop will explore mechanisms for unlocking the full potential of existing software through the combination of Agile/Lean methods with technical debt techniques. These mechanisms apply to complex organisations that rely on in-house development teams as well as to third party delivery partners. Israel’s approach emphasizes the need to continuously monitor and mitigate the decay of software that more often than not had been developed over many years. Most importantly, it shows how well-governed software can become the enabler for unleashing the synergistic power of cloud, mobile and social.

You can think of the workshop as linking past, present and future. The “sins” of the past require technical debt reduction initiatives today. These initiatives utilize the classical Agile/Lean techniques of continuous measurement and tight feedback loops. Without such initiatives, the value of existing software cannot be unlocked in the future. In particular, competing in the hyper-segmented markets that cloud, mobile and social generate will be next to impossible for legacy software that has not been modernized.
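To make the “continuous measurement and tight feedback loops” part concrete, here is a minimal sketch of a build-time debt ratchet. The metric it uses (debt markers per thousand lines) is a deliberately crude stand-in for whatever your static-analysis tooling reports, and the file names and thresholds are illustrative assumptions, not a prescription:

```python
# A minimal sketch of a technical-debt "ratchet" for the build pipeline.
# The metric (debt markers per 1,000 lines) is a crude, illustrative proxy;
# substitute the output of your static-analysis tooling in practice.
import pathlib
import sys

DEBT_MARKERS = ("TODO", "FIXME", "HACK")        # illustrative markers, not a standard

def debt_per_kloc(src_root: str) -> float:
    """Count debt markers per 1,000 lines of source under src_root."""
    lines = markers = 0
    for path in pathlib.Path(src_root).rglob("*.py"):       # adjust the glob to your languages
        for line in path.read_text(errors="ignore").splitlines():
            lines += 1
            if any(marker in line for marker in DEBT_MARKERS):
                markers += 1
    return 1000.0 * markers / lines if lines else 0.0

def check_against_baseline(current: float, baseline_file: str = "debt_baseline.txt") -> int:
    """Fail the build (non-zero exit) if the debt metric grew since the last green build."""
    baseline_path = pathlib.Path(baseline_file)
    baseline = float(baseline_path.read_text()) if baseline_path.exists() else current
    if current > baseline:
        print(f"Technical debt grew: {baseline:.2f} -> {current:.2f} markers/KLOC")
        return 1
    baseline_path.write_text(f"{current:.2f}")              # ratchet the baseline downward only
    print(f"Technical debt held at {current:.2f} markers/KLOC (baseline {baseline:.2f})")
    return 0

if __name__ == "__main__":
    sys.exit(check_against_baseline(debt_per_kloc("src")))
```

The particular number you track matters less than the loop itself: measure on every build, compare against the last known-good state, and act immediately when decay shows up.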

Cloud Computing Forecasts: “Cloudy” Future for Enterprise IT


In a comment on The Urgency of Now, Marcel Den Hartog discusses technology assimilation in the face of hype:

But if people are already reluctant to run the things they have, on another platform they already have, on an operating system they are already familiar with (Linux on zSeries), how can you expect them to even look at cloud computing seriously? Every technological advancement requires people to adapt and change. Human nature is that we don’t like that, so it often requires a disaster to change our behavior. Or carefully planned steps to prove and convince people. However, nothing makes IT people more cautious than a hype. And that is how cloud is perceived. When the press, the analysts and the industry start writing about cloud as part of the IT solution, people will want to change. Now that it’s presented as the silver bullet to all IT problems, people are cautious to say the least.

Here is Annie Shum’s thoughtful reply to Marcel’s comment:

Today, the Cloud era has only just begun. Despite lingering doubts, growing concerns and wide-spread confusion (especially separating media and vendor spun hype from reality), the IT industry generally views Cloud Computing as more appealing than traditional ASP /hosting or outsourcing/off-shoring. To technology-centric startups and nimble entrepreneurs, Cloud Computing enables them to punch above their weight class. By turning up-front CapEx into a more scalable and variable cost structure based on an on-demand pay-as-you-go model, Cloud Computing can provide a temporary, level playing field. Similarly, many budget-constrained and cash-strapped organizations also look to Cloud Computing for immediate (friction-free) access to “unlimited” computing resources. To wit: Cloud Computing may be considered as a utility-based alternative to an on-premises datacenter and allow an organization (notably cash-strapped startups) to “Think like a ‘big guy’. Pay like a ‘little guy’ ”.

Forward-thinking organizations should not lose sight of the vast potential of Cloud Computing that extends well beyond short-term economics. At its core, Cloud Computing is about enabling business agility and connectivity by abstracting computing infrastructure via a new set of flexible service delivery/deployment models. Harvard Business School Professor Andrew McAfee painted a “Cloudy” future for Corporate IT in his August 21, 2009 blog and cited a perceptive 1983 paper by Warren D. Devine, Jr. in the Journal of Economic History called “From Shafts to Wires: Historical Perspective on Electrification”.[1] There are three key take-away messages that resonate with the current Cloud Computing paradigm shift. First: The real impact of the new technology was not apparent right away. Second: The transition to full utilization of the new technology will be long, but inevitable. Third: There will be detractors and skeptics about the new technology throughout the transition. Interestingly, the telephone is another groundbreaking disruptive technology that might have faced similar skepticism in the beginning. Legend has it that a Western Union internal memo dated 1876 downplayed the viability of the telephone: “This ‘telephone’ has too many shortcomings to be seriously considered as a means of communications. The device is inherently of no value to us.”

The dominance of Cloud Computing as a computing platform, however, is far from a fait accompli. Nor will it ever be complete, a “one-size fits all” or a “big and overnight switch”. The shape of computing is constantly changing but it is always a blended and gradual transition, analogous to a modern city. While the cityscape continues to change, a complete “rip-and-replace” overhaul is rarely feasible or cost-effective. Instead, city planners generally preserve legacy structures although some of them are retrofitted with standards-based interfaces that enable them to connect to the shared infrastructure of the city. For example, the Paris city planners retrofitted Notre Dame with facilities such as electricity, water, and plumbing. Similarly, despite the passage of the last three computing paradigm shifts – first mainframe, next Client/Server and PCs, and then Web N-tier – they all co-exist and can be expected to continue in the future. Consider the following. Major shares of mission-critical business applications are running today on mainframe servers. Through application modernization, legacy applications – notably Cobol for example – now can operate in a Web 2.0 environment as well as deploy in the Cloud via the Amazon EC2 platform.

Cloud Computing holds great appeal for a wide swath of organizations spanning startups, SMBs, ISVs, enterprise IT and government agencies. The most commonly cited benefits range from the promise of avoiding CapEx and lowering TCO to on-demand elasticity, immediacy and ease of deployment, time to value, location independence and catalyzing innovation. However, there is no magic in the Cloud and it is certainly not a panacea for all IT woes. Some applications are not “Cloud-friendly”. While deploying applications in the Cloud can enable business agility incrementally, such deployment will not fundamentally change the applications themselves into highly scalable, flexible systems that respond automatically to new business requirements. Realistically, one must recognize that many of the challenging problems – security, data integration and service interoperability in particular – will persist regardless of the computing delivery medium: Cloud, hosted or on-premises.

[1] “The author combed through the contemporaneous business and technology press to learn what ‘experts’ were saying as manufacturing switched over from steam to electrical power, a process that took about 50 years to complete.” – Andrew McAfee, September 21, 2009.

I will go one step further and add quality to Annie’s list of challenging problems. A crappy on-premises application will continue to be crappy in the cloud. An audit of the technical debt should be conducted before “clouding” an application. See Technical Debt on Your Balance Sheet for a recommendation on quantifying the results of the quality audit.
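For the sake of illustration, here is one way the audit’s findings might be rolled up into a dollar figure, in the spirit of the balance-sheet recommendation. Every number below (categories, counts, hours, rates, code size) is a placeholder to be replaced with your own audit results:

```python
# A back-of-the-envelope sketch of putting a dollar figure on a technical debt
# audit before "clouding" an application. All figures are placeholders; feed in
# the actual findings of your code-quality audit and your own remediation estimates.

# hypothetical audit findings: violation category -> (count, est. hours to fix each)
AUDIT_FINDINGS = {
    "security":        (40, 4.0),
    "reliability":     (120, 2.0),
    "maintainability": (900, 0.5),
}
BLENDED_HOURLY_RATE = 100.0   # assumed fully loaded cost per engineering hour
LINES_OF_CODE = 500_000       # assumed size of the application under audit

def technical_debt_dollars(findings=AUDIT_FINDINGS, rate=BLENDED_HOURLY_RATE) -> float:
    """Sum the estimated remediation cost across all violation categories."""
    return sum(count * hours * rate for count, hours in findings.values())

debt = technical_debt_dollars()
print(f"Estimated technical debt: ${debt:,.0f}")
print(f"Debt per line of code:    ${debt / LINES_OF_CODE:.2f}")
```

Expressing the audit in dollars per line of code gives the CIO and the CFO a shared yardstick for deciding whether an application is fit to be moved to the cloud as-is.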

Technical Debt: Refactoring vis-a-vis Starting Afresh


Only three options are available to you once your technical debt overwhelms the development and customer support teams:

  1. Continue as is.
  2. Start refactoring in earnest to pay back your debt.
  3. Switch to a new code base, either by launching an unencumbered development project or through acquiring new software from another company.

The first option, obviously, does not solve any problem – you simply continue a struggle that becomes more and more difficult over time. As indicated in the post Elbow Room for Handling Technical Debt, the second option is quite difficult to carry out if it is exercised at a late point in the software’s life cycle. This post examines the third option – starting afresh.

Starting afresh is always tempting. The sins of the past, hidden in layers upon layers of technical debt and manifesting themselves in low customer satisfaction and poor customer retention, are (sort of) forgiven. The shiny new software holds the promise of better days to come.

Trouble is, your old sins are there to get you in three nasty ways:

  1. New technical debt: Even if the shiny new software is very good indeed, it will decay over time. Unless you have transformed your software engineering practices to ensure refactoring is carried out on an ongoing basis, sooner or later you will accrue new technical debt.
  2. Migration: Good as the new software might be, it is unlikely to provide all the functionality the old software had. Migrating your customers will require a lot of hard work, particularly if your customers developed elaborate business processes on top of your old software.
  3. Critical race between old and new: For a period of time (say a couple of years) the old software is likely to run alongside the new software. Any significant new regulatory requirement will force you to update both the old software and the new software. Ditto for security. Any nifty new technology you might want to embed in your new software is likely to be equally desirable to customers who still use your old software. You will be pressed to incorporate the new technology in the old software, not just in the new software.
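A rough model of the third point shows how quickly the overlap period gets expensive. All of the figures below are assumptions for illustration only:

```python
# Illustrative model of the "critical race": during the overlap window every
# regulatory, security or technology item has to be implemented twice.
OVERLAP_YEARS = 2.0           # assumed length of the period both systems run
ITEMS_PER_YEAR = 6            # assumed regulatory/security/technology items per year
COST_NEW_PER_ITEM = 30_000.0  # assumed cost to implement an item in the new code
COST_OLD_PER_ITEM = 80_000.0  # assumed cost to implement the same item in the legacy code

new_only = OVERLAP_YEARS * ITEMS_PER_YEAR * COST_NEW_PER_ITEM
both = OVERLAP_YEARS * ITEMS_PER_YEAR * (COST_NEW_PER_ITEM + COST_OLD_PER_ITEM)
print(f"New software only:             ${new_only:,.0f}")
print(f"Old and new in parallel:       ${both:,.0f}")
print(f"Premium for the critical race: ${both - new_only:,.0f}")
```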

Before opting for new software (in preference to refactoring the old software), I would recommend you compare the economics of the two options. My rule of thumb for the calculus of introducing new enterprise software to replace legacy software is straightforward:

For a period of about two years, assume the run rate for dev/test/support will be 150% of your current investment in development, test and support of the old software.

Please note that the 150% figure is just the expected run rate for running the new software alongside the legacy software. You will need, of course, to add the cost of acquiring/producing the new software to the economic calculus.
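Put together, the comparison might be sketched as follows. The only figure taken from the rule of thumb above is the 150% run rate over roughly two years; the acquisition, refactoring and run-rate numbers are placeholders for your own estimates, and a real comparison would also price in migration effort and customer attrition:

```python
# A sketch of the replace-vs-refactor calculus. The 150% run rate over roughly
# two years comes from the rule of thumb above; every other figure is an
# assumption to be replaced with your own estimates.
CURRENT_ANNUAL_RUN_RATE = 4_000_000.0   # assumed dev/test/support spend on the old software
PARALLEL_YEARS = 2.0                    # period the old and new software run side by side
PARALLEL_FACTOR = 1.5                   # the 150% rule of thumb
ACQUISITION_COST = 3_000_000.0          # assumed cost to acquire or build the new software
REFACTORING_INVESTMENT = 2_500_000.0    # assumed cost of paying back the debt in place

replace_path = ACQUISITION_COST + PARALLEL_FACTOR * CURRENT_ANNUAL_RUN_RATE * PARALLEL_YEARS
refactor_path = REFACTORING_INVESTMENT + CURRENT_ANNUAL_RUN_RATE * PARALLEL_YEARS

print(f"Replace with new software: ${replace_path:,.0f} over two years")
print(f"Refactor the old software: ${refactor_path:,.0f} over two years")
```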

You might be able to reduce the ‘150% for two years’ investment by applying some draconian migration measures. Such measures, of course, will affect your customers. In the final analysis, your decision to acquire new software comes down to a simple question:

What value do you place on your customer base?!