Archive for the ‘Trends’ Category
Extending the Scope of The Agile Executive
For the past 18 months Michael Cote and I focused The Agile Executive on software methods, processes and governance. Occasional posts on cloud computing and devops have been supplementary in nature. Structural changes in the industry have generally been left to be covered by other blogs (e.g. Cote’s Redmonk blog).
We have recently reached the conclusion that The Agile Executive needs to cover structural changes in order to give a forward-looking view to its readers. Two reasons drove us to this conclusion:
- The rise of software testing as a service. The importance of this trend was summarized in Israel’s recent Cutter blog post “Changing Playing Fields”:
Consider companies like BrowserMob (acquired earlier this month by NeuStar), Feedback Army, Mob4Hire, uTest (partnered with SOASTA a few months ago), XBOSoft and others. These companies combine web and cloud economics with the effectiveness and efficiency of crowdsourcing. By so doing, they change the playing fields of software delivery…
- The rise of devops. The line between dev and ops, or at least between dev and web ops, is becoming fuzzier and fuzzier.
As monolithic software development and delivery processes get deconstructed, the structural changes affect methods, processes and governance alike. Hence, discussion of Agile topics in this blog will not be complete without devoting a certain amount of “real estate” to these two changes (software testing as a service and devops) and others that are no doubt forthcoming. For example, it is a small step from testing as a service to development as a service in the true sense of the word – through crowdsourcing, not through outsourcing.
I asked a few friends to help me cover forthcoming structural changes that are relevant to Agile. Their thoughts will be captured through either guest posts or interviews. In these posts/interviews we will explore topics for their own sake. We will connect the dots back to Agile by referencing these posts/interviews in the various posts devoted to Agile. Needless to say, Agile posts will continue to constitute the vast majority of posts in this blog.
We will start next week with a guest post by Peter McGarahan and an interview with Annie Shum. Stay tuned…
A Core Formula for Agile B2C Startups
Colleague Chris Sterling drew my attention to a Pivotal Labs talk by Nathaniel Talbott on Experiment-Driven Development (EDD). It is a forward-looking think piece, focused on development helping the business make decisions based on actual A/B Testing data. Basically, EDD to the business is like TDD to development.
Between this talk and a recent discussion with Columbia’s Yechiam Yemini on his Principle of Innovation and Entrepreneurship course, a core “formula” for Agile B2C startups emerges:
- Identify a business process P
- Create a minimum viable Internet service S to support P
- Apply EDD to S on just about any feature decision of significance (a minimal sketch of this step appears after the lists below)
This core formula can be easily refined and extended. For example:
- Criteria for choosing P could/should be established
- Other kinds of testing (in addition to or instead of A/B testing) could be done
- A customer development layer could be added to the formula
- Many others…
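To make the EDD step concrete, here is a minimal sketch of the kind of A/B significance check it relies on – an illustrative two-proportion z-test in Python, not code from Pivotal Labs or from Talbott’s talk, and with made-up sample numbers:

```python
from math import sqrt
from statistics import NormalDist

def ab_test(conversions_a, visitors_a, conversions_b, visitors_b, alpha=0.05):
    """Two-proportion z-test comparing variant B against control A."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided test
    return {"lift": p_b - p_a, "z": z, "p_value": p_value,
            "significant": p_value < alpha}

# Hypothetical feature decision: keep variant B only if it beats A significantly.
print(ab_test(conversions_a=120, visitors_a=2400,
              conversions_b=156, visitors_b=2380))
```

In an EDD loop the business frames the hypothesis, development instruments variants A and B of service S, and the decision to keep or drop the feature follows the test outcome rather than opinion.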
By following this formula a startup can implement the Agile Triangle depicted below in a meaningful manner. Value is validated – it is determined based on real customer feedback rather than through conjectures, speculations or ego trips.
Figure 1 – The Agile Triangle (based on Figure 1-3 in Jim Highsmith‘s Agile Project Management: Creating Innovative Products)
The quip “The voice of the people is the voice of God” has long been a tenet of musicians. The “formula” described above enables the Agile B2C startup to capture the voice of the people and thoughtfully act on it to accomplish business results.
Agile Infrastructure
Ten years ago I probably would not have seen any connection between global warming and server design. Today, power considerations prevail in the packaging of servers, particularly those slated for use in large and very large data centers. The dots have been connected to characterize servers in terms of their eco footprint.
In his Agile Austin presentation a couple of days ago, Cote delivered a strong case for connecting the dots of Agile software development with those of Cloud Computing. Software development and IT operations become largely inseparable in cloud environments. In many of these environments, customer feedback arrives in “real time” and needs to be responded to in an ultra-fast manner. Companies that develop fast closed-loop feedback and response systems are likely to have a major competitive advantage. They can make development and investment decisions based on actual user analytics, feature analytics and aggregate analytics instead of speculating about what might prove valuable.
While the connection between Agile and Cloud might not be broadly recognized yet, the subject IMHO is of paramount importance. In recognition of this importance, Michael Cote, John Allspaw, Andrew Shafer and I plan to dig into it in a podcast next week. Stay tuned…
Cloud Computing Forecasts: “Cloudy” Future for Enterprise IT
In a comment on The Urgency of Now, Marcel Den Hartog discusses technology assimilation in the face of hype:
But if people are already reluctant to run the things they have, on another platform they already have, on an operating system they are already familiar with (Linux on zSeries), how can you expect them to even look at cloud computing seriously? Every technological advancement requires people to adapt and change. Human nature is that we don’t like that, so it often requires a disaster to change our behavior. Or carefully planned steps to prove and convince people. However, nothing makes IT people more cautious than a hype. And that is how cloud is perceived. When the press, the analysts and the industry start writing about cloud as part of the IT solution, people will want to change. Now that it’s presented as the silver bullet to all IT problems, people are cautious to say the least.
Here is Annie Shum‘s thoughtful reply to Marcel’s comment:
Today, the Cloud era has only just begun. Despite lingering doubts, growing concerns and wide-spread confusion (especially separating media and vendor-spun hype from reality), the IT industry generally views Cloud Computing as more appealing than traditional ASP/hosting or outsourcing/off-shoring. To technology-centric startups and nimble entrepreneurs, Cloud Computing enables them to punch above their weight class. By turning up-front CapEx into a more scalable and variable cost structure based on an on-demand pay-as-you-go model, Cloud Computing can provide a temporary, level playing field. Similarly, many budget-constrained and cash-strapped organizations also look to Cloud Computing for immediate (friction-free) access to “unlimited” computing resources. To wit: Cloud Computing may be considered a utility-based alternative to an on-premises datacenter, allowing an organization (notably cash-strapped startups) to “Think like a ‘big guy’. Pay like a ‘little guy’”.
Forward-thinking organizations should not lose sight of the vast potential of Cloud Computing that extends well beyond short-term economics. At its core, Cloud Computing is about enabling business agility and connectivity by abstracting computing infrastructure via a new set of flexible service delivery/deployment models. Harvard Business School Professor Andrew McAfee painted a “Cloudy” future for Corporate IT in his August 21, 2009 blog and cited a perceptive 1983 paper by Warren D. Devine, Jr. in the Journal of Economic History called “From Shafts to Wires: Historical Perspective on Electrification”.[1] There are three key take-away messages that resonate with the current Cloud Computing paradigm shift. First: The real impact of the new technology was not apparent right away. Second: The transition to full utilization of the new technology will be long, but inevitable. Third: There will be detractors and skeptics about the new technology throughout the transition. Interestingly, the telephone is another groundbreaking disruptive technology that might have faced similar skepticism in the beginning. Legend has it that a Western Union internal memo dated 1876 downplayed the viability of the telephone: “This ‘telephone’ has too many shortcomings to be seriously considered as a means of communications. The device is inherently of no value to us.”
The dominance of Cloud Computing as a computing platform, however, is far from a fait accompli. Nor will it ever be complete, a “one-size-fits-all” or a “big and overnight switch”. The shape of computing is constantly changing, but it is always a blended and gradual transition, analogous to a modern city. While the cityscape continues to change, a complete “rip-and-replace” overhaul is rarely feasible or cost-effective. Instead, city planners generally preserve legacy structures, although some of them are retrofitted with standards-based interfaces that enable them to connect to the shared infrastructure of the city. For example, the Paris city planners retrofitted Notre Dame with facilities such as electricity, water, and plumbing. Similarly, despite the passage of the last three computing paradigm shifts – first mainframe, next Client/Server and PCs, and then Web N-tier – they all co-exist and can be expected to continue in the future. Consider the following: a major share of mission-critical business applications runs today on mainframe servers, and through application modernization, legacy applications – notably Cobol, for example – can now operate in a Web 2.0 environment as well as deploy in the Cloud via the Amazon EC2 platform.
Cloud Computing can provide great appeal to a wide swath of organizations spanning startups, SMBs, ISVs, enterprise IT and government agencies. The most commonly cited benefits range from the promise of avoiding CapEx and lowering TCO to on-demand elasticity, immediacy and ease of deployment, time to value, location independence and catalyzing innovation. However, there is no magic in the Cloud and it is certainly not a panacea for all IT woes. Some applications are not “Cloud-friendly”. While deploying applications in the Cloud can enable business agility incrementally, such deployment will not fundamentally change the characteristics of the applications to become highly scalable, flexible and automatically responsive to new business requirements. Realistically, one must recognize that many of the challenging problems – security, data integration and service interoperability in particular – will persist and live on regardless of the computing delivery medium: Cloud, hosted or on-premises.
[1] “The author combed through the contemporaneous business and technology press to learn what ‘experts’ were saying as manufacturing switched over from steam to electrical power, a process that took about 50 years to complete.” – Andrew McAfee, September 21, 2009.
I will go one step further and add quality to Annie’s list of challenging problems. A crappy on-premises application will continue to be crappy in the cloud. An audit of the technical debt should be conducted before “clouding” an application. See Technical Debt on Your Balance Sheet for a recommendation on quantifying the results of the quality audit.
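To illustrate what a quantified audit might produce, here is a hypothetical back-of-the-envelope sketch in Python. The issue categories, remediation efforts, hourly rate and code-base size are assumptions invented for illustration; they are not taken from Technical Debt on Your Balance Sheet.

```python
# Hypothetical technical-debt audit: price static-analysis findings in dollars.
# Every figure below is an illustrative assumption, not audited data.
FINDINGS = {  # issue type -> (count, remediation hours per issue)
    "duplicated blocks": (420, 1.0),
    "missing unit tests": (310, 2.0),
    "high-complexity methods": (95, 3.0),
    "security hotspots": (12, 6.0),
}
HOURLY_RATE = 90          # assumed blended engineering rate, $/hour
LINES_OF_CODE = 250_000   # assumed size of the application

debt = sum(count * hours * HOURLY_RATE for count, hours in FINDINGS.values())
print(f"Estimated technical debt: ${debt:,.0f}")
print(f"Debt per line of code:    ${debt / LINES_OF_CODE:.2f}")
```

A figure like “dollars of debt per line of code” gives the business a concrete way to decide whether to pay down the debt before, during or after moving the application to the cloud.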
The Urgency of Now – Guest Post by Annie Shum
Failure to learn, failure to anticipate, and failure to adapt are the three generic causes of military disasters. Each one of these three failures is bad enough. In combination, they can be catastrophic. Germany swiftly defeated and conquered France in 1940 due to the utter failure of the French army to grasp the nature of future war, to conceive the probable action of the German forces and to adequately react to the German initiative once it unfolded through the Ardennes. The patterns leading to the catastrophe suffered by the French are similar in some ways to the eco-meltdowns described by Jared Diamond in Collapse: How Societies Choose to Fail or Succeed.
In this guest post, colleague and friend Annie Shum poses disturbing questions with respect to our willingness and ability as IT professionals to learn, anticipate and adapt to the imperatives of Cloud Computing. Between shockingly low (15%) server capacity utilization on the one hand, and dramatic changes in the needs of the business on the other hand, companies that continue to use industrial-era IT models are at peril. Annie weaves these and other related threads together, and makes a resounding call to action to re-think IT.
It is remarkable that Annie’s analysis herein of the root causes of a possible meltdown in IT identifies worrisome patterns similar to those that the Agile movement has pointed out with respect to arcane methods of software development. The very same core problems that afflict software development manifest themselves in the IT paradigm as well as in the corresponding business design. Painful and wasteful as this repeated manifestation is, it actually creates the opportunity to manage software, IT, and the business in unison. To do so, we need to embrace a data-driven version of the economics of IT, to grasp the true nature of Cloud Computing without the hype that currently surrounds it, and to adapt software development, IT operations and business design accordingly. As the title of this post states, we need to start carrying out these three tasks now.
Here is Annie:
The Urgency of Now: The Edge of Chaos and A “Strategic Inflection Point” for IT
“It was the worst of times. It may be the best of times.” – IBM
Consider the following table. It contains a list of statistics pertaining to the enterprise datacenter index compiled by Peter Mell and Tim Grance of NIST. Overall, the statistics are sobering, perhaps even alarming, and do not bode well for the long-term sustainability of traditional on-premises datacenters. Prudent IT organizations – whether big or small, stalwart or startup – should consider this a wake-up call. In particular, across the almost twelve million servers in US datacenters today, typical server capacity utilization is only around fifteen percent. Although not explicitly shown in this table, the average utilization of mainframe z/OS servers is typically over eighty percent. However, mainframe z/OS servers contribute only a minor component to the overall average server utilization (see the back-of-the-envelope calculation after the table).
Statistic – Enterprise Datacenter Index
- 11,800,000 – Servers in US datacenters
- 15% – Typical server capacity utilization
- $800,000,000,000/year – Purchasing & maintaining enterprise software
- 80% – Software costs spent on maintenance: the “80-20” ratio
- 100x – Power consumption per sq ft compared to an office building
- 4x – Increase in server power consumption, 2001 to 2006
- 2x – Increase in number of servers, 2001 to 2006
- $21,300,000 – Datacenter construction cost, 9,000 sq ft
- $1,000,000/year – Annual cost to power the datacenter
- 1.5% – Portion of national power generation
- 50% – Potential power reduction from green technologies
- 2% – Portion of global carbon emissions
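The fifteen percent figure can be sanity-checked with a quick weighted average. In the sketch below, the mainframe share of the installed base and the utilization of the remaining servers are assumed, illustrative numbers; only the roughly 11.8 million servers and the 80% z/OS utilization come from the text above.

```python
# Back-of-the-envelope check: why ~80% mainframe utilization barely moves
# the overall average. MAINFRAME_SHARE and OTHER_UTILIZATION are assumptions.
TOTAL_SERVERS = 11_800_000
MAINFRAME_SHARE = 0.02          # assumed: ~2% of the installed base
MAINFRAME_UTILIZATION = 0.80    # cited above for z/OS servers
OTHER_UTILIZATION = 0.14        # assumed for the remaining servers

blended = (MAINFRAME_SHARE * MAINFRAME_UTILIZATION
           + (1 - MAINFRAME_SHARE) * OTHER_UTILIZATION)
idle_equivalent = TOTAL_SERVERS * (1 - blended)

print(f"Blended server utilization: {blended:.1%}")            # ~15.3%
print(f"Idle capacity, whole-server equivalents: {idle_equivalent:,.0f}")
```

Even doubling the assumed mainframe share leaves the blended figure well under twenty percent, which is why the aggregate statistic remains so stark.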
Over the years, organizations have accepted such skewed levels of server inefficiency and escalating maintenance costs of IT infrastructure as the norm. Even as organizations continue to express concerns, many seem tacitly resigned to the status quo: akin to what Bob Evans of InfoWeek described as “insurmountable laws of physics.” Looking ahead, however, the status quo may no longer be a viable option for most organizations. Soaring electricity and power costs, compounded by the recent global financial meltdown – a near collapse of the financial system that triggered a prolonged (and for now, apparently indefinite) credit crunch – have made these unparalleled, strident and chaotic times for businesses. Pressured by business decision-makers who are under a heightened level of anxiety, enterprise IT is now confronting a transformative dilemma: whether to preserve the status quo or to re-think IT.
On one hand, the current global recessionary down cycle is a particularly powerful (albeit rooted in fear) and instinctive deterrent to challenging the status quo. For risk-averse organizations, it is only understandable why the status quo, fundamental flaws notwithstanding, may trump disruptive change during these challenging times. On the other hand, forward-thinking decision-makers may make the bold but disruptive (radical) choice to view the status quo as the fundamental problem: acknowledge the growing “urgency of now” by resolving to overcome and correct the entrenched shortcomings of enterprise IT.
“You never want a serious crisis to go to waste”. That quote (or its many variations) has been attributed alike to economists and politicians. The same could be said for IT. Indeed a growing number of IT industry observers believe the profound impact of the on-going economic crisis could offer a rare window of opportunity for organizations to rethink traditional capital-intensive, command-control, on-premises IT operations and invest in new and more flexible self-service IT delivery/deployment models. Think of this defining moment as what Andy Grove, co-founder of Intel, described as the “strategic inflection point”. He was referring to the point in the dynamic when the fundamentals of a business are about to change and “that change can mean an opportunity to rise to new heights.” Nonetheless, the choices will be hard decisions because the options are stark: either counter-intuitively invest in a down cycle by focusing on a more sustainable but disruptive trajectory or hunker down and risk irreversible shrinking business.
As one considers how to address the challenges of today’s enterprise IT, perhaps the following two observations should be taken into account. First, despite the quantum leap in technology advancements, the basic design and delivery models of existing IT applications/services are generally variations of traditionally insular, back-office automation business tools. Second, the organizational structure and business models of most companies are deeply rooted in models of yesteryear, in many instances dating back to the Industrial Revolution. In theory, adhering to the traditional organizational model of top-down command-control can maximize predictability, efficiency and order. Heretofore, this has been the modus operandi for most organizations – what Umair Haque succinctly characterized as “industrial-era companies that make industrial-era stuff — and play by industrial-era rules.” In today’s exponential times, however, the velocity of change and the rapidly growing need to interconnect with other organizations and automate value chains inevitably lead to an increase in uncertainty and disorder. Strategically, forward-thinking organizations should consider seeking alternative models to address the interdependent and shifting new world order.
In their book, “Presence – Human Purpose and the Field of the Future”, authors Peter Senge, Otto Scharmer, Joseph Jaworski and Betty Sue Flowers observe that many of the practices of the Industrial Age appear to be largely unaffected by the changing reality of today’s society and continue to expand in today’s business organizations. They conclude with this advice: “As long as our thinking is governed by industrial ‘machine age’ metaphors such as control, predictability, and faster is better, we will continue to re-create organizations as we have had – for the last 100 years – despite their increasing disharmony with the world and the science of the 21st century.” Likewise, the traditional top-down command-control modus operandi of enterprise IT does not adequately reflect, and hence is likely unable to fully accommodate, the transformational shift of business from silo organizations to “all things digital, all the time” – hyper-interconnected and hyper-interdependent ecosystems.
The Changing Nature of Innovation: Part I — New Forms of Experimentation
Colleague Christian Sarkar drew my attention to two recent Harvard Business Review (HBR) articles that shed light on the way(s) innovation is being approached nowadays. To the best of my knowledge, neither of the two articles was written by an author associated with the Agile movement. Both, if you ask me, would have resonated big time with the authors of the Agile Manifesto.
The February 2009 HBR article How to Design Smart Business Experiments focuses on data-driven decisions as distinct from decisions taken based on “intuition”:
Every day, managers in your organization take steps to implement new ideas without having any real evidence to back them up. They fiddle with offerings, try out distribution approaches, and alter how work gets done, usually acting on little more than gut feel or seeming common sense—”I’ll bet this” or “I think that.” Even more disturbing, some wrap their decisions in the language of science, creating an illusion of evidence. Their so-called experiments aren’t worthy of the name, because they lack investigative rigor. It’s likely that the resulting guesses will be wrong and, worst of all, that very little will have been learned in the process.
It doesn’t have to be this way. Thanks to new, broadly available software and given some straightforward investments to build capabilities, managers can now base consequential decisions on scientifically valid experiments. Of course, the scientific method is not new, nor is its application in business. The R&D centers of firms ranging from biscuit bakers to drug makers have always relied on it, as have direct-mail marketers tracking response rates to different permutations of their pitches. To apply it outside such settings, however, has until recently been a major undertaking. Any foray into the randomized testing of management ideas—that is, the random assignment of subjects to test and control groups—meant employing or engaging a PhD in statistics or perhaps a “design of experiments” expert (sometimes seen in advanced TQM programs). Now, a quantitatively trained MBA can oversee the process, assisted by software that will help determine what kind of samples are necessary, which sites to use for testing and controls, and whether any changes resulting from experiments are statistically significant.
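As a rough illustration of what such software automates, the sketch below randomly assigns sites to test and control groups and applies a normal-approximation significance check to a per-site metric. It is a hypothetical example, not code or data from the HBR article.

```python
import random
from math import sqrt
from statistics import mean, variance, NormalDist

def assign_groups(sites, seed=7):
    """Randomly split sites into equal-sized test and control groups."""
    rng = random.Random(seed)
    shuffled = sites[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def significant_difference(test_values, control_values, alpha=0.05):
    """Compare group means with a two-sided normal-approximation test."""
    se = sqrt(variance(test_values) / len(test_values)
              + variance(control_values) / len(control_values))
    z = (mean(test_values) - mean(control_values)) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value, p_value < alpha

# Hypothetical usage: assign 20 stores, then feed each group's weekly sales
# per store into significant_difference() once the experiment has run.
stores = [f"store-{i:02d}" for i in range(20)]
test_sites, control_sites = assign_groups(stores)
```

The point of the quoted passage is precisely that this kind of plumbing no longer requires a statistics PhD; commodity tooling (or a short script) now suffices.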
On the heels of this essay on how one could attain and utilize experimentally validated data, the October 2009 HBR article How GE is Disrupting Itself discusses what is already happening in the form of Reverse Innovation:
- The model that GE and other industrial manufacturers have followed for decades – developing high-end products at home and adapting them for other markets around the world – won’t suffice as growth slows in rich nations.
- To tap opportunities in emerging markets and pioneer value segments in wealthy countries, companies must learn reverse innovation: developing products in countries like China and India and then distributing them globally.
- While multinationals need both approaches, there are deep conflicts between the two. But those conflicts can be overcome.
- If GE doesn’t master reverse innovation, the emerging giants could destroy the company.
It does not really matter whether you are a “shoestring and a prayer” start-up spending $500 on A/B testing through Web 2.0 technology or a Fortune 500 company investing $1B in the development and introduction of a new car in rural India in order to “pioneer value segments in wealthy countries.” Either way, your experimentation is affordable in the context of the end result you have in mind.
Fast forward to Agile methods. The chunking of work into two-week segments makes experimentation affordable – you cancel an unsuccessful iteration as needed and move on to the next one. Furthermore, you can make the go/no-go decision with respect to an iteration based on statistically significant “real time” user response. This closed-loop operational nimbleness and affordability, in conjunction with a mindset that treats a “failure” of an iteration as a valuable lesson to learn from, facilitates experimentation. Innovation simply follows.
Software Moulding Methods
Christian Sarkar and I started an e-dialog on Agile Business Service Management in BSMReview. Both of us are keenly interested in exploring the broad application of Agile BSM in the context of Gartner’s Top Ten Technologies for 2010. To quote Christian:
Israel, where do agile practices fit into this? Just about everywhere as well?
The short answer to Christian’s good question is as follows:
I consider the principles articulated in the Manifesto for Agile Software Development (http://agilemanifesto.org) universal and timeless. They certainly apply just about everywhere. As a matter of fact, we are seeing the Manifesto principles applied more and more to the development of hardware and content.
The fascinating thing in what we are witnessing (see, for example: Scale in London – Part II, An Omen in Chicago, Depth in Seattle, and Richness and Vibrancy in Boston) is the evolution of the classical problem of managing multiple Software Development Life Cycles. Instead of dealing with one ‘material’ (software), we handle multiple ‘materials’ (software, hardware, content, business initiative, etc.) of dissimilar characteristics. The net effect is as follows:
The challenge then becomes the simultaneous and synchronized management of two or more ‘substances’ (e.g. software and content; software, content and business initiative; or, software, hardware, content and business initiative) of different characteristics under a unified process. It is conceptually fairly similar to the techniques used in engineering composite materials.
Ten years have passed since Evans and Wurster demonstrated the effects of separating the virtual from the physical. As software becomes pervasive, we are now starting to explore putting the virtual back together with the physical through a new generation of software moulding methods.