The Agile Executive

Making Agile Work


Cloud Computing Meets the Iterative Requirements of Agile


It so happened that a key sentence fell between my editing fingers while publishing Annie Shum‘s splendid post Cloud Computing: Agile Deployment for Agile QA Testing. Here is the corrected paragraph with the missing sentence highlighted:

By providing virtually unlimited computing resources on demand and without up-front CapEx or long-term commitment, QA/load stress and scalability testing in the Cloud is a good starting point. In particular, the flexibility and on-demand elasticity of Cloud Computing meet the iterative requirements of Agile on an ongoing basis. More than likely it will turn out to be one of the least risky but quickest-ROI pilot Cloud projects for enterprise IT. Case in point: Franz Inc. opted for the Cloud solution when confronted with the dilemma of either abandoning its critical software product testing plan across dozens of machines and databases or procuring new hardware and software that would have been cost-prohibitive. Staging the stress testing study in Amazon’s S3, Franz completed its mission within a few days. Instead of the $100K capital expense for new hardware, as well as additional soft costs (such as IT staff and other maintenance costs), the cost of Amazon’s Cloud services was under $200, without the penalty of delays in acquisition and configuration.

Reading the whole post with this sentence in mind makes a big difference. And it is a little different from my partner Cote‘s perspective on the subject.

My apologies for the inconvenience.

Israel


Cloud Computing: Agile Deployment for Agile QA Testing


Annie Shum‘s original thinking has often been quoted in this blog. Her insights are always characterized by seeing the world through the prism of fractal principles, and she relentlessly pursues connecting the dots. In this guest post, she examines in an intriguing manner both the tactical and the strategic aspects of large-scale testing in the Cloud.

Here is Annie:

Cloud Computing: Agile Deployment for Agile QA Testing
Annie Shum twitter@insightspedia

Invariably, the underlying questions at the heart of every technology or business initiative are less about the technology and, as Clive Thompson of Wired Magazine observed, more about the people (generally referred to as the users and consumers in the IT industry). For example, “How does this technology/initiative impact the lives and productivity of people?” or “What happens to the users/consumers when they are offered new power or a new vehicle of empowerment?” Remarkably, very often the answers to these questions will directly as well as indirectly influence whether the technology/initiative will succeed or fail; whether its impact will be lasting or fleeting; and whether it will be a strategic game-changer (and transform society) or a tactical short-term opportunity.

One can approach some Cloud-friendly applications, e.g. large-scale QA and load stress testing in the Cloud, either from a tactical or from a strategic perspective. As noted above, the answer to the question “What happens to the users/consumers when they are offered new power or a new vehicle of empowerment?” can influence whether a new technology initiative will be strategic or tactical. In other words, think about the bacon-and-eggs analogy, in which the chicken is involved but the pig is committed. Look for new business models and innovation opportunities, enabled by Cloud Computing, that go beyond addressing tactical issues (in particular, trading CapEx for OpEx). One example would be to explore transformative business possibilities stemming from Cloud Computing’s flexible, service-based delivery and deployment options.

Approaching Large-scale QA and Load Stress Testing in the Cloud from a Tactical Perspective

Nowadays, enterprise organizations are constantly under pressure to demonstrate the ROI of IT projects, and they must be able to do so quickly and repeatedly. So as they plan for the transition to the Cloud, it is only prudent that they start small and focus on a target area that can readily showcase the Cloud’s potential. One of the oft-touted low-hanging fruits of Cloud Computing is large-scale QA (usability and functionality) testing and application load stress testing in the Cloud. Traditionally, one of the top obstacles to conducting comprehensive, iterative and massively parallel QA test cases is the lack of adequate computing resources. The shortfall is due not only to budget constraints but also to staff scheduling conflicts and the long lead time needed to procure new hardware/software. This can cause significant product release delays, which is particularly problematic for new application development under Scrum. An iterative, incremental development/management framework commonly used with Agile software development, Scrum requires rapid successive releases in chunks, commonly referred to as sprints. Advanced Agile users leverage this chunking technique as an affordable experimentation vehicle that can lead to innovation. However, the downside is the rapid accumulation of new testing needs.

By providing virtually unlimited computing resources on demand and without up-front CapEx or long-term commitment, QA/load stress and scalability testing in the Cloud is a good starting point. In particular, the flexibility and on-demand elasticity of Cloud Computing meet the iterative requirements of Agile on an ongoing basis. More than likely it will turn out to be one of the least risky but quickest-ROI pilot Cloud projects for enterprise IT. Case in point: Franz Inc. opted for the Cloud solution when confronted with the dilemma of either abandoning its critical software product testing plan across dozens of machines and databases or procuring new hardware and software that would have been cost-prohibitive. Staging the stress testing study in Amazon’s S3, Franz completed its mission within a few days. Instead of the $100K capital expense for new hardware, as well as additional soft costs (such as IT staff and other maintenance costs), the cost of Amazon’s Cloud services was under $200, without the penalty of delays in acquisition and configuration.
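The economics of the Franz example can be sanity-checked with back-of-the-envelope arithmetic. The two figures below are taken directly from the account above; the break-even calculation itself is just an illustration, not part of the original study.

```python
# Rough cost comparison using the figures quoted above: a ~$100K capital
# expense for new hardware versus under $200 of pay-per-use cloud services
# for the same testing study (soft costs such as IT staff are ignored).

CAPEX_HARDWARE = 100_000   # quoted hardware estimate, USD
CLOUD_PER_STUDY = 200      # quoted upper bound on the cloud bill, USD

def studies_before_breakeven(capex: int, per_study: int) -> int:
    """How many one-off cloud testing studies fit inside the up-front
    hardware cost before on-demand spending catches up."""
    return capex // per_study

print(studies_before_breakeven(CAPEX_HARDWARE, CLOUD_PER_STUDY))  # 500
```

At these quoted prices, roughly 500 studies of similar size could be run on demand before matching the up-front hardware outlay, which is why this kind of testing is such a natural pilot project.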

Approaching Large-scale QA and Load Stress Testing in the Cloud from a Strategic Perspective

While Franz Inc. leveraged the granular utility payment model, avoiding up-front CapEx and long-term commitment for a one-off project, other entrepreneurs have decided to harness the power of on-demand QA testing in the Cloud as a new business model. Several companies, e.g. SOASTA, LoadStorm and Browsermob, are now offering “Testing as a Service”, also known as “Reliability as a Service”, to enable businesses to test the real-world performance of their Web applications based on a utility-based, on-demand Cloud deployment model. Compared to traditional on-premises enterprise testing tools such as LoadRunner, the Cloud offerings promise to reduce complexity, with no software download or up-front licensing cost. In addition, unlike conventional outsourcing models, enterprise IT can retain control of its testing scenarios. This is important because comprehensive QA testing typically requires an iterative test-analyze-fix-test cycle that spans weeks if not months.

Notably, all three organizations built their service offerings on Amazon EC2 infrastructure. LoadStorm, launched in January 2009, and Browsermob (open source), currently in beta, each enable users to run iterative and parallel load tests directly from their websites. SOASTA, more established than the aforementioned two startups, recently showcased the viability of the “Testing as a Service” business model by spawning 650 EC2 servers to simulate load from two different availability zones to stress test the music-sharing website QTRAX. As reported by Amazon, after a three-month iterative test-analyze-fix-test cycle, QTRAX can now serve 10M hits/hour and handle 500K concurrent users.
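The “iterative and parallel load tests” these services run can be pictured with a short sketch. The code below is not SOASTA’s, LoadStorm’s or Browsermob’s actual implementation — it is a minimal, hypothetical Python illustration of the core loop such a service repeats each test-analyze-fix-test iteration: fan a fixed number of simulated virtual users out over worker threads and summarize their response times.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def simulated_request(user_id: int) -> float:
    """Stand-in for one virtual user's request; returns latency in seconds.
    A real load test would hit the system under test over the network."""
    start = time.perf_counter()
    sum(range(10_000))  # placeholder work in place of actual network I/O
    return time.perf_counter() - start

def run_load_test(concurrent_users: int) -> dict:
    """Issue one simulated request per virtual user, in parallel,
    and summarize the observed latencies."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(simulated_request, range(concurrent_users)))
    return {
        "users": concurrent_users,
        "mean_latency_s": mean(latencies),
        "max_latency_s": max(latencies),
    }

report = run_load_test(50)
print(report["users"])  # 50
```

In a real offering, the analysis step between iterations — finding the bottleneck, fixing it, and re-running at higher load — is where the weeks-to-months cycle described above is actually spent.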

The bottom line is that there are effectively two different perspectives, the tactical (“involved”) versus the strategic (“committed”), and both can be successful. Moreover, the choice between tactical and strategic is not a discrete binary one but a granularity spectrum that accommodates amalgamations of short-term and long-term thinking. Every business must decide the best course to meet its goals.

P.S.  A shout out to Israel Gat for not only allowing me to post my piece today but for his always insightful comments in our daily email exchanges.

Scrum at Amazon – Guest Post by Alan Atlas


Rally’s Alan Atlas shares with us his experience as the first full-time Agile trainer/coach with Amazon. His account is both enlightened and enlightening. He connects the “hows”, “whats” and “whys” of Scrum in the Amazon context, making sense for the reader of what took place and what did not at Amazon. You will find additional insights by Alan in The Scrum Mechanic.

Alan has been professionally involved in high tech for nearly thirty years. His career spans top technology companies such as Bell Labs and Amazon as well as various intriguing start-ups. He brought to market numerous products, including OSF Motif and Amazon’s S3. His passion for Scrum has recently led him to make a career switch into full-time Agile Coaching and Training with Rally Software.

Here is Alan on what he learned about Scrum transition at Amazon.com:

Agile practices were present at Amazon.com as early as 1999, but it wasn’t until the years 2004 – 2009 that widespread adoption of Scrum occurred throughout Amazon’s development organizations. Amazon.com’s unplanned, decentralized Scrum transformation is of interest because it is different from the current orthodoxy regarding enterprise Scrum transitions, and its strengths and weaknesses reveal some fundamental lessons that can be applied to other enterprise Scrum transitions.

Here are the major forces that played in the transition.

Permission

Teams (including local management, of course) at Amazon have historically been given wide latitude to solve their problems (coupled with responsibility to do so without waiting for outside help) and are usually not overburdened with detailed prescriptive practices promulgated by centralized corporate sources. The emphasis is on creating, delivering, and operating excellent software in as streamlined and process-light a way as possible. Teams at Amazon have permission to choose.

Teams

The corporate culture at Amazon.com has always been surprisingly consistent with and friendly towards Agile practices. The 2 Pizza Team concept has been written about many times over the years, and a close look shows that a 2 Pizza Team is basically a Scrum team without the Scrum. Teams at Amazon, especially 2 Pizza Teams, are stable and long-lived. Usually a development team reports to a single direct manager.

Knowledge

All it took to light the fire was someone who was willing to spend a little time educating interested parties about Scrum. Teams who learned about Scrum were able to make local decisions to implement it. Results were demonstrated that kindled interest in other teams.

Impetus

Over time, an email-based Scrum community formed. Scrum Master training was provided on an occasional basis by someone who simply wanted to do so. Basic Scrum education continued on an ad hoc and voluntary basis. Eventually enough teams had adopted Scrum that a need was seen and a position of Scrum Trainer/Coach was created. Having a full-time Trainer and Coach available made adoption easier and improved the quality of Scrum implementations. By mid-2008 the community was able to support an Open Space Scrum Gathering within the company.

What was (and one assumes is still) missing was higher level engagement at the organization and enterprise levels. No executive support for Scrum ever emerged, and the transition was therefore limited primarily to the team level, with many organizational impediments still in place.

The success of Scrum at Amazon validates one easy, frictionless way to begin a Scrum transition.

  1. Establish stable teams
  2. Make Agile and Scrum information widely and easily available
  3. Give permission to adopt Scrum

The advantage of this approach is that it requires a minimum of enterprise-wide planning and it allows teams to select Scrum, rather than mandating it. All of the rest of an enterprise Scrum transition can be accomplished by simply responding to impediments as raised by the teams and providing management support for change. Based on experience, the impediments raised will include demand (pull) for coaching, scaling, training, organizational change, a Transition Team, PMO changes, and all of the other aspects of an enterprise transition that many organizations labor so mightily to plan and control. Leadership for this kind of transition can only be Servant Leadership from the C-level, which is exactly the right kind for an Agile initiative, isn’t it?

The only impediment to Scrum adoption at Amazon was lack of knowledge. Teams were in place, and permission was part of the culture. When knowledge was provided, teams adopted Scrum. The strength of this process was based on the fact that only teams that were interested in trying Scrum actually tried it. There was no mandate or plan or schedule for this uptake. Nobody was forced to use Scrum. Teams made an independent, informed decision to try to solve some of their problems. Lean and Agile thinkers will recognize this as a pull-based incremental approach and not a plan-driven, command-and-control, push-based approach.

What about the things that didn’t happen at Amazon? The transition stalled at the team level due to failure to engage either middle or upper management in a meaningful way.  Both of those groups are required to bring a transition to its full potential. Training for middle managers, in particular, is crucial, but will usually reach them only with executive sponsorship.  A Transition Team is crucial when organizational and enterprise-wide impediments begin to be unearthed. Support from a source of advanced knowledge and experience (trainer/coach) is key.

Was Scrum good or bad overall for Amazon? There is only spotty, anecdotal data to report. Certainly there are many stories of teams that used Scrum very successfully. The Amazon S3 project not only delivered on time after about 15 months of work, but also nearly achieved the unusual result of having the entire development team take a week’s vacation leading up to launch day. It was not the crunch-time, last-minute, panic-drenched effort that is common with projects of this scope and complexity. There was the team that “hasn’t been able to deliver any software for 8 months” that, sure enough, delivered some software a month later at the end of their first sprint. Another team reported that their internal customers came to them some time after the team had adopted Scrum, asking that they implement a whole list of random features: “We know these aren’t your responsibility, but you’re the only team that is able to respond to our requests.” Finally, there was the platform team that had literally dozens of internal customers. When that team adopted Scrum, they organized their customers into a customer council of sorts and let them simply decide each month what the team would work on, for the good of all, in order of value to Amazon overall. But even if none of these anecdotes were available to tell, the mere fact that teams opted on their own to adopt Scrum implies that something about Scrum helped them be more successful and more satisfied professionally. If that were not true, then they would not have stayed with it at all, right?
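The customer-council arrangement in the last anecdote boils down to a simple scheduling rule: each month, order the council’s requests by their value to the company overall and work from the top. A toy sketch of that rule, with invented feature names and value scores purely for illustration:

```python
# Hypothetical monthly request list from the customer council:
# (feature, estimated value to the company overall).
requests = [
    ("bulk export API", 30),
    ("latency dashboard", 55),
    ("retry on timeout", 80),
]

def monthly_plan(council_requests):
    """Order requests by overall value, highest first, for the coming month."""
    return [name for name, value in
            sorted(council_requests, key=lambda r: r[1], reverse=True)]

print(monthly_plan(requests))
# ['retry on timeout', 'latency dashboard', 'bulk export API']
```

The interesting part of the anecdote is not the sorting, of course, but that the customers themselves supplied and agreed on the values — the team simply executed the resulting order.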

Written by israelgat

July 20, 2009 at 12:20 am