The Agile Executive

Making Agile Work

Posts Tagged ‘Annie Shum’

Cloud Computing Meets the Iterative Requirements of Agile


It so happened that a key sentence slipped through my editing fingers while publishing Annie Shum’s splendid post Cloud Computing: Agile Deployment for Agile QA Testing. Here is the corrected paragraph with the missing sentence highlighted:

By providing virtually unlimited computing resources on-demand and without up-front CapEx or long-term commitment, QA/load stress and scalability testing in the Cloud is a good starting point. In particular, the flexibility and on-demand elasticity of Cloud Computing meet the iterative requirements of Agile on an ongoing basis. More than likely it will turn out to be one of the least risky yet quickest-ROI pilot Cloud projects for enterprise IT. Case in point: Franz Inc. opted for the Cloud solution when confronted with the dilemma of either abandoning its critical software product testing plan across dozens of machines and databases or procuring new hardware and software that would have been cost-prohibitive. Staging the stress testing study in Amazon’s S3, Franz completed its mission within a few days. Instead of the $100K capital expense for new hardware as well as additional soft costs (such as IT staff and other maintenance costs), the cost of Amazon’s Cloud services was under $200, without the penalty of delays in acquisition and configuration.

Reading the whole post with this sentence in mind makes a big difference… And it is a little different from my partner Cote’s perspective on the subject.

My apologies for the inconvenience.

Israel

Cloud Computing: Agile Deployment for Agile QA Testing


Annie Shum’s original thinking has often been quoted in this blog. Her insights are always characterized by seeing the world through the prism of fractal principles. And she relentlessly pursues connecting the dots. In this guest post, she examines both the tactical and the strategic aspects of large-scale testing in the cloud in an intriguing manner.

Here is Annie:

Cloud Computing: Agile Deployment for Agile QA Testing
Annie Shum (Twitter: @insightspedia)

Invariably, the underlying questions at the heart of every technology or business initiative are less about the technology than, as Clive Thompson of Wired Magazine observed, about the people (generally referred to as users and consumers in the IT industry). For example, “How does this technology/initiative impact the lives and productivity of people?” or “What happens to users/consumers when they are offered new power or a new vehicle of empowerment?” Remarkably, very often the answers to these questions will directly as well as indirectly influence whether the technology/initiative will succeed or fail; whether its impact will be lasting or fleeting; and whether it will be a strategic game-changer (and transform society) or a tactical short-term opportunity.

One can approach some Cloud-friendly applications, e.g. large-scale QA and load stress testing in the Cloud, either from a tactical or from a strategic perspective. As noted above, the answer to the question “What happens to users/consumers when they are offered new power or a new vehicle of empowerment?” can influence whether a new technology initiative will be a strategic game-changer (and transform society) or a tactical short-term opportunity. In other words, think of the bacon-and-eggs analogy, in which the chicken is involved but the pig is committed. Look for new business models and innovation opportunities, leveraging Cloud Computing in ways that go beyond addressing tactical issues (in particular, trading CapEx for OpEx). One example would be to explore transformative business possibilities stemming from Cloud Computing’s flexible, service-based delivery and deployment options.

Approaching Large-scale QA and Load Stress Testing in the Cloud from a Tactical Perspective

Nowadays, enterprise organizations are constantly under pressure to demonstrate the ROI of IT projects. Moreover, they must be able to do this quickly and repeatedly. So as they plan for the transition to the Cloud, it is only prudent that they start small and focus on a target area that can readily showcase the Cloud’s potential. One of the oft-touted low-hanging fruits of Cloud Computing is large-scale QA (usability and functionality) testing and application load stress testing in the Cloud. Traditionally, one of the top barriers to conducting comprehensive, iterative and massively parallel QA test cases is the lack of adequate computing resources. The shortfall is due not only to budget constraints but also to staff scheduling conflicts and the long lead time to procure new hardware/software. This can cause significant product release delays, which is particularly problematic with new application development under Scrum. An iterative, incremental development/management framework commonly used with Agile software development, Scrum requires rapid successive releases in chunks, commonly referred to as sprints. Advanced Agile users leverage this chunking technique as an affordable experimentation vehicle that can lead to innovation. However, the downside is the rapid accumulation of new testing needs.

By providing virtually unlimited computing resources on-demand and without up-front CapEx or long-term commitment, QA/load stress and scalability testing in the Cloud is a good starting point. In particular, the flexibility and on-demand elasticity of Cloud Computing meet the iterative requirements of Agile on an ongoing basis. More than likely it will turn out to be one of the least risky yet quickest-ROI pilot Cloud projects for enterprise IT. Case in point: Franz Inc. opted for the Cloud solution when confronted with the dilemma of either abandoning its critical software product testing plan across dozens of machines and databases or procuring new hardware and software that would have been cost-prohibitive. Staging the stress testing study in Amazon’s S3, Franz completed its mission within a few days. Instead of the $100K capital expense for new hardware as well as additional soft costs (such as IT staff and other maintenance costs), the cost of Amazon’s Cloud services was under $200, without the penalty of delays in acquisition and configuration.
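To make that elasticity concrete, below is a minimal, hypothetical sketch of the provisioning pattern the Franz story implies: spin up a fleet of test machines on demand, run the suite, and tear everything down. It uses today’s boto3 SDK purely as an illustration (the 2009-era tooling differed), and the AMI ID, instance type, and fleet size are assumptions, not Franz’s actual configuration:

```python
# Hypothetical sketch: provision a short-lived fleet of EC2 test machines,
# run a test batch, then terminate everything so costs stop accruing.
# The AMI ID, instance type, and fleet size below are illustrative only.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Launch 20 identical test workers on demand -- no CapEx, no procurement lead time.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # assumed: a pre-baked test-runner image
    InstanceType="t3.medium",          # assumed instance size
    MinCount=20,
    MaxCount=20,
)

ids = [i.id for i in instances]
print(f"Launched {len(ids)} test workers: {ids}")

# ... dispatch the QA/load-test suite to the workers here ...

# Terminate the fleet as soon as the run completes; with usage-based
# billing, the bill reflects only the hours actually consumed.
ec2.instances.filter(InstanceIds=ids).terminate()
```

The pay-as-you-go arithmetic falls out directly: a fleet of a few dozen machines that lives for a few hours costs a few dollars, which is how a testing effort that would otherwise demand a $100K capital outlay can land under $200.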

Approaching Large-scale QA and Load Stress Testing in the Cloud from a Strategic Perspective

While Franz Inc. leveraged the granular utility payment model, avoiding upfront CapEx and long-term commitment for a one-off project, other entrepreneurs have decided to harness the power of on-demand QA testing in the Cloud as a new business model. Several companies, e.g. SOASTA, LoadStorm and BrowserMob, are now offering “Testing as a Service”, also known as “Reliability as a Service”, enabling businesses to test the real-world performance of their Web applications based on a utility-based, on-demand Cloud deployment model. Compared to traditional on-premises enterprise testing tools such as LoadRunner, the Cloud offerings promise to reduce complexity, with no software download and no up-front licensing cost. In addition, unlike conventional outsourcing models, enterprise IT retains control of its testing scenarios. This is important because comprehensive QA testing typically requires an iterative test-analyze-fix-test cycle that spans weeks if not months.

Notably, all three organizations built their service offerings on Amazon EC2 infrastructure. LoadStorm, launched in January 2009, and BrowserMob (open source), currently in beta, each enable users to run iterative and parallel load tests directly from their websites. SOASTA, more established than the two aforementioned startups, recently showcased the viability of the “Testing as a Service” business model by spawning 650 EC2 servers to simulate load from two different availability zones in order to stress test the music-sharing website QTRAX. As reported by Amazon, after a three-month iterative test-analyze-fix-test cycle, QTRAX can now serve 10M hits/hour and handle 500K concurrent users.
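For readers who want a feel for what an “iterative and parallel load test” boils down to, here is a bare-bones, illustrative sketch in Python. The target URL, worker count, and request volume are assumptions; commercial offerings such as SOASTA’s orchestrate the same basic idea across hundreds of coordinated machines:

```python
# Bare-bones parallel load test: N concurrent workers repeatedly hit a URL
# and report throughput and latency. Target URL and load levels are assumed.
import time
import statistics
import concurrent.futures
import urllib.request

TARGET_URL = "http://example.com/"    # hypothetical system under test
WORKERS = 50                          # concurrent simulated users
REQUESTS_PER_WORKER = 20

def worker() -> list[float]:
    """Issue a burst of requests, returning per-request latencies in seconds."""
    latencies = []
    for _ in range(REQUESTS_PER_WORKER):
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
        latencies.append(time.perf_counter() - start)
    return latencies

start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as pool:
    futures = [pool.submit(worker) for _ in range(WORKERS)]
    results = [lat for f in futures for lat in f.result()]
elapsed = time.perf_counter() - start

print(f"{len(results)} requests in {elapsed:.1f}s "
      f"({len(results)/elapsed:.1f} req/s), "
      f"median latency {statistics.median(results)*1000:.0f} ms")
```

A single load generator quickly saturates its own sockets, CPU and bandwidth, which is precisely why spawning hundreds of Cloud instances, as in the 650-server QTRAX test, is the natural way to simulate Internet-scale traffic.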

The bottom line is that there are effectively two different perspectives, the tactical (“involved”) and the strategic (“committed”), and both can be successful. Moreover, the choice between tactical and strategic is not a discrete binary one but a spectrum of granularity that accommodates amalgamations of short-term and long-term thinking. Every business must decide the best course to meet its goals.

P.S. A shout-out to Israel Gat not only for allowing me to post my piece today but also for his always insightful comments in our daily email exchanges.

A Note on “The Tool is the Method”


The post The Tool is the Method generated a lively discussion. Whether you agree or disagree with the post, you are likely to find of interest the experience Rod McLaren reports in Jira, Greenhopper, Lighthouse: tools vs practice in Scrum. Here is an excerpt from Rod’s post:

Our biggest problem with Jira/Greenhopper was that it isn’t as well-aligned to our developing practice of scrum as we needed it to be. The tool got in the way of our learning the practice, and started behaviourally skewing our view of scrum (‘The chosen tool becomes a factor of the first order in determining not “only” how Agile (or any other software method) will be practiced, but what mindset will evolve in the course of the rollout’). We were trying to fit scrum to the tool we had, and were talking more about how to use the tool well than we were about whether the sprint was going well.

I don’t know whether the tool vs. method debate could or should be resolved here. Absent resolution, the perspective Annie Shum expressed on the subject in Tools, Behavior, Culture is quite a good one to keep in mind:

In thinking about tools in the context of large-scale cultural changes, one can’t help but immediately think about the computer and digital computing. By all definitions, a computer is a tool – in fact, I would venture to characterize a computer as a “universal tool” that can, in theory, perform almost any task. Moreover, I can’t think of any other tool that surpasses the computer and digital computing as the most disruptive transformation agent of our society and cultural shifts in the 21st Century.

Written by israelgat

August 5, 2009 at 6:55 pm

Tools, Behavior, Culture


Colleague Annie Shum sent me her thoughts on the e-discussion that evolved around The Tool is the Method and Four Principles, Four Cultures, One Mirror. As always, her thoughts connect a lot of dots. Here is Annie:

“Change versus shifts. For today’s discussion I would like to focus on the latter, and let me define a shift as a large-scale change, notably a transformative cultural change. Historically, cultural shifts are not engineered top-down by master planners; instead, self-organizing emergence is the process by which all shifts happen on this planet. Similar to an ant colony, where each ant works individually without pre-planned central orchestration, our individual actions at the local level can collectively impact the emergence process. Positive as well as negative feedback loops are the fuel of the emergence process. Emergence can result in shifts that benefit our society (e.g. the WWW, the Information Age) as well as harm it (e.g. wars, riots).

In thinking about tools in the context of large-scale cultural changes, one can’t help but immediately think about the computer and digital computing. By all definitions, a computer is a tool – in fact, I would venture to characterize a computer as a “universal tool” that can, in theory, perform almost any task. Moreover, I can’t think of any other tool that surpasses the computer and digital computing as the most disruptive transformation agent of our society and cultural shifts in the 21st Century. According to James Moor, the computer revolution is occurring in two stages. We are now at the cusp of the second stage, “one that the industrialized world has only recently entered”, that of “technological permeation”, in which technology gets integrated into everyday human activities and into social institutions, fundamentally changing and transforming society at its very core (http://plato.stanford.edu/entries/ethics-computer).”

Written by israelgat

July 12, 2009 at 9:47 am

Recipe for Success in 2009


Colleague Annie Shum sent me the following excerpt from Daniel Gross’s Slate article on P.F. Chang’s spectacular performance in 2009:

The reason: mainstream mall appeal, affordable offerings, and especially good management – based heavily on the principles of “kaizen” or continuous improvement pioneered by Toyota and other Japanese manufacturers. P.F. Chang’s made it to $1 billion in sales by taking cues from successful Asian businesses. Now by focusing on process improvement rather than helter-skelter growth, it seems to be doing so again. Continuous improvement, the philosophy pioneered by Japanese companies such as Toyota in which managers and workers relentlessly seek out small modifications that add up to big profits, seems to be the recipe for success in 2009.

I don’t really know that the excerpt above has any relevance to software engineering. Gross, however, proposes a potential linkage at the end of the article:

Low-end standardized service jobs make up more than 40 percent of all U.S. employment. Imagine if more restaurants and service companies started to act like P.F. Chang’s. Innovation and rising productivity are the underpinnings of higher wages, and happy and engaged employees the key to more continuous improvement.

Written by israelgat

May 19, 2009 at 10:51 pm

Posted in Lean, Macro-economic Crisis


“Agility” in Action


[Cartoon: agility1]

Colleague Annie Shum sent me the cartoon above. Perhaps not quite the way we use “Agility” in this blog, but thought provoking nonetheless…

Written by israelgat

March 8, 2009 at 11:08 pm

Posted in The Agile Life
