The Agile Executive

Making Agile Work

Posts Tagged ‘Scrum’

Cloud Computing Meets the Iterative Requirements of Agile

leave a comment »

It so happened that a key sentence fell between my editing fingers while publishing Annie Shum’s splendid post Cloud Computing: Agile Deployment for Agile QA Testing. Here is the corrected paragraph with the missing sentence restored:

By providing virtually unlimited computing resources on-demand and without up-front CapEx or long-term commitment, QA/load stress and scalability testing in the Cloud is a good starting point. In particular, the flexibility and on-demand elasticity of Cloud Computing meet the iterative requirements of Agile on an ongoing basis. More than likely it will turn out to be one of the least risky but quick-ROI pilot Cloud projects for enterprise IT. Case in point: Franz Inc. opted for the Cloud solution when confronted with the dilemma of either abandoning its critical software product testing plan across dozens of machines and databases or procuring new hardware and software that would have been cost-prohibitive. Staging the stress testing study in Amazon’s S3, Franz completed its mission within a few days. Instead of the $100K capital expense for new hardware as well as additional soft costs (such as IT staff and other maintenance costs), the cost of Amazon’s Cloud services was under $200, without the penalty of delays in acquisition and configuration.

Reading the whole post with this sentence in mind makes a big difference… And it is a little different from my partner Cote’s perspective on the subject.

My apologies for the inconvenience.

Israel

Cloud Computing: Agile Deployment for Agile QA Testing

with 9 comments

Annie Shum’s original thinking has often been quoted in this blog. Her insights are always characterized by seeing the world through the prism of fractal principles. And she relentlessly pursues connecting the dots. In this guest post, she examines in an intriguing manner both the tactical and the strategic aspects of large scale testing in the cloud.

Here is Annie:

Cloud Computing: Agile Deployment for Agile QA Testing
Annie Shum (Twitter: @insightspedia)

Invariably, the underlying questions at the heart of every technology or business initiative are less about technology than, as Clive Thompson of Wired Magazine observed, about the people (generally referred to as the users and consumers in the IT industry). For example, “How does this technology/initiative impact the lives and productivity of people?” or “What happens to the users/consumers when they are offered new power or a new vehicle of empowerment?” Remarkably, very often the answers to these questions will directly as well as indirectly influence whether the technology/initiative will succeed or fail; whether its impact will be lasting or fleeting; and whether it will be a strategic game-changer (and transform society) or a tactical short-term opportunity.

One can approach some of the Cloud-friendly applications, e.g. large scale QA and load stress testing in the Cloud, either from a tactical or from a strategic perspective. As noted above, the answer to the question “What happens to the users/consumers when they are offered new power or a new vehicle of empowerment?” can influence whether a new technology initiative will be strategic or tactical. In other words, think about the bacon-and-eggs analogy where the chicken is involved but the pig is committed. Look for new business models and innovation opportunities by leveraging Cloud Computing that go beyond addressing tactical issues (in particular, trading CapEx for OpEx). One example would be to explore transformative business possibilities stemming from Cloud Computing’s flexible, service-based delivery and deployment options.

Approaching Large-scale QA and Load Stress Testing in the Cloud from a Tactical Perspective

Nowadays, an enterprise organization is constantly under pressure to demonstrate the ROI of IT projects. Moreover, it must be able to do this quickly and repeatedly. So as it plans for the transition to the Cloud, it is only prudent to start small and focus on a target area that can readily showcase the Cloud’s potential. One of the oft-touted low-hanging fruits of Cloud Computing is large scale QA (usability and functionality) testing and application load stress testing in the Cloud. Traditionally, one of the top barriers to conducting comprehensive, iterative and massively parallel QA test cases is the lack of adequate computing resources. The shortfall is due not only to budget constraints but also to staff scheduling conflicts and the long lead time to procure new hardware/software. This can cause significant product release delays, particularly problematic with new application development under Scrum. An iterative, incremental development/management framework commonly used with Agile software development, Scrum requires rapid successive releases in chunks, commonly referred to as sprints. Advanced Agile users leverage this chunking technique as an affordable experimentation vehicle that can lead to innovation. However, the downside is the rapid accumulation of new testing needs.

By providing virtually unlimited computing resources on-demand and without up-front CapEx or long-term commitment, QA/load stress and scalability testing in the Cloud is a good starting point. In particular, the flexibility and on-demand elasticity of Cloud Computing meet the iterative requirements of Agile on an ongoing basis. More than likely it will turn out to be one of the least risky but quick-ROI pilot Cloud projects for enterprise IT. Case in point: Franz Inc. opted for the Cloud solution when confronted with the dilemma of either abandoning its critical software product testing plan across dozens of machines and databases or procuring new hardware and software that would have been cost-prohibitive. Staging the stress testing study in Amazon’s S3, Franz completed its mission within a few days. Instead of the $100K capital expense for new hardware as well as additional soft costs (such as IT staff and other maintenance costs), the cost of Amazon’s Cloud services was under $200, without the penalty of delays in acquisition and configuration.
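The elasticity argument is easy to see in miniature: the essence of Cloud-based load testing is fanning out many concurrent simulated users at once, which is exactly where fixed on-premises hardware runs out of headroom and on-demand instances do not. Here is a minimal, hypothetical sketch in Python; the HTTP call is stubbed out, and in a real test it would hit the system under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(user_id):
    # Stand-in for a real HTTP call to the application under test.
    time.sleep(0.01)
    return (user_id, 200)  # (simulated user, HTTP status code)

def run_load_test(concurrent_users, request_fn):
    """Fan out one request per simulated user and summarize the run."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(request_fn, range(concurrent_users)))
    failures = sum(1 for _, status in results if status != 200)
    return {"users": concurrent_users,
            "failures": failures,
            "elapsed_s": time.time() - start}

report = run_load_test(50, fake_request)
```

Scaling `concurrent_users` from 50 into the hundreds of thousands is precisely where a single test box exhausts its sockets and CPU, and where a fleet of on-demand Cloud instances earns its keep.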

Approaching Large-scale QA and Load Stress Testing in the Cloud from a Strategic Perspective

While Franz Inc. leveraged the granular utility payment model and the avoidance of upfront CapEx and long-term commitment for a one-off project, other entrepreneurs have decided to harness the power of on-demand QA testing in the Cloud as a new business model. Several companies, e.g. SOASTA, LoadStorm and Browsermob, are now offering “Testing as a Service”, also known as “Reliability as a Service”, to enable businesses to test the real-world performance of their Web applications based on a utility-based, on-demand Cloud deployment model. Compared to traditional on-premises enterprise testing tools such as LoadRunner, the Cloud offerings promise to reduce complexity without any software download or up-front licensing cost. In addition, unlike conventional outsourcing models, enterprise IT can retain control of its testing scenarios. This is important because comprehensive QA testing typically requires an iterative test-analyze-fix-test cycle that spans weeks if not months.

Notably, all three organizations built their service offerings on Amazon EC2 infrastructure. LoadStorm, launched in January 2009, and Browsermob (open source), currently in beta, each enable users to run iterative and parallel load tests directly from their Websites. SOASTA, more established than the aforementioned two startups, recently showcased the viability of the “Testing as a Service” business model by spawning 650 EC2 servers to simulate load from two different availability zones to stress test the music-sharing website QTRAX. As reported by Amazon, after a 3-month iterative test-analyze-fix-test cycle, QTRAX can now serve 10M hits/hour and handle 500K concurrent users.

The bottom line is that there are effectively two different perspectives, the tactical (“involved”) versus the strategic (“committed”), and both can be successful. Moreover, the choice between tactical and strategic is not a discrete binary one but a granularity spectrum that accommodates amalgamations of short-term and long-term thinking. Every business must decide the best course to meet its goals.

P.S.  A shout out to Israel Gat for not only allowing me to post my piece today but for his always insightful comments in our daily email exchanges.

Scrum at Amazon – Guest Post by Alan Atlas

with 6 comments

Rally’s Alan Atlas shares with us his experience as the first full-time Agile trainer/coach with Amazon. His account is both enlightened and enlightening. He connects the “hows”, “whats” and “whys” of Scrum in the Amazon context, making sense for the reader of what took place and what did not at Amazon. You will find additional insights by Alan in The Scrum Mechanic.

Alan has been professionally involved in high tech for nearly thirty years. His career spans top technology companies such as Bell Labs and Amazon as well as various intriguing start-ups. He brought to market numerous products, including OSF Motif and Amazon’s S3. His passion for Scrum has recently led him to make a career switch into full-time Agile Coaching and Training with Rally Software.

Here is Alan on what he learned about Scrum transition at Amazon.com:

Agile practices were present at Amazon.com as early as 1999, but it wasn’t until the years 2004 – 2009 that widespread adoption of Scrum occurred throughout Amazon’s development organizations. Amazon.com’s unplanned, decentralized Scrum transformation is of interest because it is different from the current orthodoxy regarding enterprise Scrum transitions, and its strengths and weaknesses reveal some fundamental lessons that can be applied to other enterprise Scrum transitions.

Here are the major forces that played in the transition.

Permission

Teams (including local management, of course) at Amazon have historically been given wide latitude to solve their problems (coupled with responsibility to do so without waiting for outside help) and are usually not overburdened with detailed prescriptive practices promulgated by centralized corporate sources. The emphasis is on creating, delivering, and operating excellent software in as streamlined and process-light a way as possible. Teams at Amazon have permission to choose.

Teams

The corporate culture at Amazon.com has always been surprisingly consistent with and friendly towards Agile practices. The 2 Pizza Team concept has been written about many times over the years, and a close look shows that a 2 Pizza Team is basically a Scrum team without the Scrum. Teams at Amazon, especially 2 Pizza Teams, are stable and long-lived. Usually a development team reports to a single direct manager.

Knowledge

All it took to light the fire was someone who was willing to spend a little time educating interested parties about Scrum. Teams who learned about Scrum were able to make local decisions to implement it. Results were demonstrated that kindled interest in other teams.

Impetus

Over time, an email-based Scrum community formed. Scrum Master training was provided on an occasional basis by someone who simply wanted to do so. Basic Scrum education continued on an ad hoc and voluntary basis. Eventually enough teams had adopted Scrum that a need was seen and a position of Scrum Trainer/Coach was created. Having a full-time Trainer and Coach available made adoption easier and improved the quality of scrum implementations. By mid-2008 the community was able to support an Open Space Scrum Gathering within the company.

What was (and one assumes is still) missing was higher level engagement at the organization and enterprise levels. No executive support for Scrum ever emerged, and the transition was therefore limited primarily to the team level, with many organizational impediments still in place.

The success of Scrum at Amazon validates one easy, frictionless way to begin a Scrum transition.

  1. Establish stable teams
  2. Make Agile and Scrum information widely and easily available
  3. Give permission to adopt Scrum

The advantage of this approach is that it requires a minimum of enterprise-wide planning and it allows teams to select Scrum, rather than mandating it. All of the rest of an enterprise Scrum transition can be accomplished by simply responding to impediments as raised by the teams and providing management support for change. Based on experience, the impediments raised will include demand (pull) for coaching, scaling, training, organizational change, a Transition Team, PMO changes, and all of the other aspects of an enterprise transition that many organizations labor so mightily to plan and control. Leadership for this kind of transition can only be Servant Leadership from the C-level, which is exactly the right kind for an Agile initiative, isn’t it?

The only impediment to Scrum adoption at Amazon was lack of knowledge. Teams were in place, and permission was part of the culture. When knowledge was provided, teams adopted Scrum. The strength of this process was based on the fact that only teams that were interested in trying Scrum actually tried it. There was no mandate or plan or schedule for this uptake. Nobody was forced to use Scrum. Teams made an independent, informed decision to try to solve some of their problems. Lean and Agile thinkers will recognize that this is a pull-based incremental approach and not a plan-driven, command and control, push-based approach.

What about the things that didn’t happen at Amazon? The transition stalled at the team level due to failure to engage either middle or upper management in a meaningful way.  Both of those groups are required to bring a transition to its full potential. Training for middle managers, in particular, is crucial, but will usually reach them only with executive sponsorship.  A Transition Team is crucial when organizational and enterprise-wide impediments begin to be unearthed. Support from a source of advanced knowledge and experience (trainer/coach) is key.

Was Scrum good or bad overall for Amazon? There is only spotty, anecdotal data to report. Certainly there are many stories of teams that used Scrum very successfully. The Amazon S3 project not only delivered on time after about 15 months of work, but also nearly achieved the unusual result of having the entire development team take a week’s vacation leading up to the launch day. It was not the crunch-time, last minute, panic-drenched effort that is common with projects of this scope and complexity. There was the team that “hasn’t been able to deliver any software for 8 months” that, sure enough, delivered some software a month later at the end of their first sprint. Another team reported that their internal customers came to them some time after the team had adopted Scrum, asking that they implement a whole list of random features: “We know these aren’t your responsibility, but you’re the only team that is able to respond to our requests.” Finally, there was the platform team that had literally dozens of internal customers. When that team adopted Scrum, they organized their customers into a customer council of sorts and let them simply decide each month what the team would work on, for the good of all, in order of value to Amazon overall. But even if none of these anecdotes were available to tell, the mere fact that teams opted on their own to adopt Scrum implies that something about Scrum helped them be more successful and more satisfied professionally. If that were not true, then they would not have stayed with it at all, right?

Written by israelgat

July 20, 2009 at 12:20 am

John Heintz on the Lean & Kanban 2009 Conference

with 23 comments

Colleague John Heintz has kindly compiled the summary below for the benefit of readers of The Agile Executive. John is well known to Agile Austin folks as well as to the out-of-town and out-of-state companies he consults for through his company. You can get a glimpse of his Agile/Lean thinking by reading his blog.

Here is John’s summary of the conference:

The Lean Kanban conference last week in Miami was astounding. David Anderson did a fantastic job, and everyone who contributed had great presentations.

I am humbled and emboldened at the same time. I’ve been involved in Agile since 1999 and Lean since 2004, so I thought this was going to be familiar to me, old hat.

Here’s my confession: I’ve pretty much ignored Kanban, writing it off as just slightly different than what good XP or Scrum teams practice anyway.

Wow, those small differences make a huge impact. I am very glad I decided to go to the conference, some internal hunch finally winning.

Here’s what I thought Kanban was before last week:

  • A Big Visible Board
  • A Prioritized Backlog
  • Close communication, minimizing hand-offs
  • Rules about cards on the wall

  • No Iteration/Sprint boundaries (I’m thinking more efficient but maybe losing something important…)

That’s all well and good and true enough. Easy to justify writing it all off with “I already know enough to help teams make a big difference”. In fact, Kanban can be boiled down to one single rule:

  • Limit the number of things in work to a fixed number.
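That one rule can be captured in a few lines. A toy sketch (the class and names are mine, not taken from any Kanban tool): a board column that refuses to accept a card once its WIP limit is reached, which is precisely what makes bottlenecks visible on the wall:

```python
class KanbanColumn:
    """A board column that enforces a work-in-progress (WIP) limit."""

    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.cards = []

    def pull(self, card):
        # The single Kanban rule: no more than wip_limit items in work.
        if len(self.cards) >= self.wip_limit:
            return False  # refused -- upstream queues, bottleneck shows
        self.cards.append(card)
        return True

    def finish(self, card):
        # Completing a card frees a slot, letting the next one be pulled.
        self.cards.remove(card)

in_progress = KanbanColumn("In Progress", wip_limit=3)
accepted = [in_progress.pull(f"story-{i}") for i in range(5)]
# accepted == [True, True, True, False, False]
```

The refusal is the whole point: work waits visibly at the column boundary instead of silently piling up in progress.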

But, if that’s all there is to it, why then did I hear things like these:

  • Kanban is easier to introduce to teams than Agile/Scrum/XP
  • “People who never say anything were offering ideas” (I’m pretty sure I heard this three times the first day…)
  • The team felt comfortable dropping estimates/retrospectives/standup questions/…

Wait, you say, this was the first conference and obviously full of early adopters! Of course people are going to succeed because they self-selected for success. Good point, but that’s not everything that’s here. For example, Chris Shinkle’s presentation was a case study of rolling out Kanban to many teams who hadn’t asked for Kanban.

So between furiously scratching down notes[1], listening and tweeting[2], I started to think to myself:

  • Why does this make such a difference?
  • Easier to create thinking and reflective teams! Isn’t that cultural change?

I had the pleasure of wrestling this “why” question out with several people, especially Alan Shalloway.

The first answers people gave me were entirely unsatisfying:

  • David Anderson’s reply tweet: “Kanban is easier than Scrum because you don’t change any roles or workflow and you don’t teach new practices.”
  • Alan Shalloway first response: “Kanban cuts out the noise and reduces thrashing.”

Sure, sure, but none of those (good) things seem likely to create: cultural change, engaged teams, or reflective individuals. Those answers are technical details, and generally not the “emotionally” important things needed for change. Mind you, I’m not really well versed in cultural or emotional change, but being the stubborn person I am, I kept digging.

Here’s where Alan and I got, please add any insights[3]:

  • My hypothesis: Kanban has concrete reflective tools: like “should WIP be 4 or 5?”. Very reflective, but not very abstract or hand-wavy. People can’t often use abstract reflective tools like Retrospectives.
  • Paraphrasing Alan Shalloway: Kanban reduces the fear of committing to a per story estimate – a significant risk in some teams. Less fear can lead to cultural change.
  • (not sure who): Kanban changes the focus away from blaming an individual to examining why stuff is stuck on the board. (I hear Deming…)

—-
On to the actual trip report. Here is an abbreviated transcription of the proceedings of the conference. (Very abbreviated!)

  1. Alan Shalloway started the conference off with no small announcement: the formation of the Lean Software and Systems Consortium. He also mentioned that this consortium will be creating a body of knowledge and promoting a distributed certification process. Certifications will be a very interesting topic; my initial reaction was negative. Now I’m just skeptical 😉 I’ve got a hunch that TWI, a hidden influence of Lean, may hold some of the secrets for a successful certification method. We’ll see how this plays out.
  2. Dean Leffingwell gave a keynote on a Lean and Scalable Requirements Model for Agile Enterprises. Very clear from executives down to team activity: maps from Themes to Epics to Features to Stories. This immediately cleared up some questions a client and I were having. My favorite quote, referring to acceptance tests: “If you don’t know how to get the story out of the iteration – don’t let it in”.
  3. Peter Middleton presented material from “Lean Software Strategies“, co-authored by James Sutton who presented next. Peter is a professor at Queen’s University in Belfast and was the first person to really talk about the people issues. Much of what Peter related was how various practices caused people problems: recruiting and training goals (10 per week) would require recruiters and trainers to push even unqualified people into the company. That led to poor service, high turnover, and greater costs.
  4. James Sutton has a small personal goal: to save the middle class. His presentation did a good job ranging over various Lean and Systems thinking topics, connecting the dots to Agile. Key quote comparing Lean and Agile: “Getting Prepared” vs. “Getting Started”.
  5. Sterling Mortensen presented a case study of introducing Lean into the Hewlett Packard printer development division. He said HP was already the “best of breed” and still became much more efficient and effective. My favorite quote: “Stop Starting, Start Finishing“. Sterling also said the “One” metric was continuous Flow. I’m not sure I understand that all the way; I’d been working under the assumption that the One metric was customer-to-customer cycle time (from concept to cash).
  6. Amit Rathore gave a personal case study of Lean in a start-up, http://runa.com. Amit showed many examples and talked really honestly about his experience. My favorite quote: “not released equals not done”.
  7. Corey Ladas presented on his book Scrumban and his experience at Corbis (with David Anderson) and other projects. I bought a copy of his book out of his backpack and made him sign it.
  8. Jean Tabaka presented a thoughtful presentation on Lean, learning, ignorance, and people. Her narrative helped me further realize how Lean, and Kanban, play into the personal issues of learning and reflecting.
  9. Alina Hsu presented a case study of using Lean to organize the work of procuring a COTS (commercial off-the-shelf) software solution, not development. She had some great things to say about how delays cause major cost overruns. One thing she mentioned that reduced the delays was to change how agreement was reached. The team defined consensus as “I can live with it”, with the rules 1) I won’t go into the hall and try to subvert it, and 2) I won’t lose any sleep. These definitions helped teams make decisions faster and reduced waste.
  10. Alan Shalloway presented on a model for understanding Lean and moving it beyond Toyota. He organized all the various concepts down into Lean Science, Lean Management, and Lean Education. Connecting this back to the Lean SSC announcement in the morning, he said the consortium is working to create value in those three areas.

And that was just the first day. Did anybody mention the conference day started before 8am and lasted till after 6pm? Oh, and everyone was glued into the room.

  1. David Anderson presented a keynote on the principles and evolution of Kanban. So much information! You’ll have to read his presentation and see the video on InfoQ, but just to provide a fragment from each I wrote down:
    • Principled recipe for success (including Balance Demand Against Throughput)
    • Metrics (like WIP is a leading indicator)
    • Agile Decision Filter questions
    • Lean Decision Filter questions
    • Kanban decouples input cadence, cycle time, release cadence
  2. Karl Scotland continued the detailed treatment of Kanban. Karl spoke about the Lean concept of Flow as expressed with Kanban – and even renamed typical process steps to avoid any baggage with waterfall terminology. If you want to know more about how work actually gets done in a Kanban system, watch his presentation. His interesting names for the process steps are: Incubate, Illustrate, Instantiate, Demonstrate, Liquidate.
  3. Rob Hathaway presented a case study of his work building a game portal for a publishing business. He believed very strongly that teaching from principles (Value, Prioritization, WIP limits, Quality) led to success.
  4. Alisson Vale presented a tool… that enchanted everyone in the room. David Anderson himself said that Vale “has the highest maturity software team on the planet”. Now, tool support often isn’t the answer, and many teams get real value with a physical board – a tool isn’t a Silver Bullet. If a tool makes sense for you – this tool absolutely blew us away. I asked Alisson about buying or helping with the tool and he said they were considering open sourcing it! I offered my coding skills in extending it for my own clients to help reach that goal.
  5. Linda Cook presented a case study of using Kanban at the Motley Fool. Her presentation does a good job of showing how little is necessary to get a lot of value out of Kanban.
  6. Eric Landes gave a great case study about using Kanban in an IT development shop. His team went from struggling to turn requests around (41 days) to a rapid 9 day turn around. Again, his discussion of the team dynamics and reflection were interesting to how a tiny bit of Kanban can have a huge impact.
  7. Eric Willeke’s presentation was visually beautiful, but you’ll have to watch the InfoQ video to get the value out of it. It contained only two words in a quote bubble (from memory “Momma! Pirates!”) and was the backdrop for the story that he told. His story highlighted to me, again, that Agile doesn’t always stick but Kanban seems to.
  8. Chris Shinkle presented a multi-case study on rolling Kanban into a large software consultancy. Very interestingly, and contrary to much discussion before, Chris presented a practices first, principles later message. This again resonated strongly with me: Kanban practices are somehow special in encouraging people to reflect and reach for the principles.
  9. David Laribee presented an opinionated view on leadership and change using Lean. This quote stuck with me: “people support a world they help create”. His style of leading is to drive from “Values -> Practices -> Tools” and his presentation wove a story of Agile/Lean process change. Also, I really enjoyed his references to hardcore technologies: REST, Git and OSGi were fantastic to see in a Lean/Kanban presentation.
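Several of the Kanban metrics mentioned in these talks (WIP as a leading indicator, cycle time, throughput) are tied together by Little’s Law. Here is a minimal sketch; the numbers are illustrative only, not figures from any of the presentations:

```python
def avg_cycle_time(wip: float, throughput_per_day: float) -> float:
    """Little's Law: average cycle time = average WIP / average throughput."""
    return wip / throughput_per_day

# A board holding 12 work items, with 3 items completed per day:
print(avg_cycle_time(12, 3))  # 4.0 days

# Halving the WIP limit, all else being equal, halves cycle time,
# which is why WIP acts as a leading indicator of delivery time.
print(avg_cycle_time(6, 3))   # 2.0 days
```

This relationship is also what lets a Kanban system decouple input cadence, cycle time and release cadence: each can be tuned separately as long as WIP stays bounded.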

That was day two. I’d said we were all glued to the room before; now, as I type this, I realize our brains were coming a bit unglued at that point. Every presentation was top-notch, there was barely time for questions, breaks were cut short, and we came back for more as fast as we could. Oh, and apparently we collectively drank 2.5 times as much coffee as the hotel usually allocates for a group our size.

I’m not going to summarize the Open Space. Too many topics and changes in direction. You just had to be there 🙂

Cheers,
John Heintz

[1] I used the first 25 pages of a brand new notebook… for a 2.5 day conference… Every session had an overwhelming amount of information, and I’m glad InfoQ recorded video.
[2] My twitter account is http://twitter.com/jheintz, and you can follow everyone’s conference coverage at http://twitter.com/#search?q=%23lk2009.
[3] I’m going to keep following up on this topic in my personal blog: http://johnheintz.blogspot.com

Marauder Strategy for Agile Companies

with one comment

Colleague Annie Shum sent me the URL to a recent post by Clayton Christensen in The Huffington Post. In this post Christensen characterizes “disruption” in the following manner:

Disruption is the causal mechanism behind the “creative destruction” that [economist Joseph] Schumpeter saw so pervasively at work in capitalist economies. [Links added by IG]

Christensen’s post is largely about the automobile industry. It, however, ties nicely to an email exchange Jeff Sutherland and I had about Agile as a disruption inside the company vis-a-vis its intentional use as a disruptive methodology in the market. To quote Jeff:

We are starting to see organizations like yours that can use Scrum to disrupt a market. There is a tremendous amount of low hanging fruit out there. Dysfunctional companies that can’t deliver. I’ve been recommending a “Marauder” strategy to the venture group. Find a company who has a large amount of resources. Set them loose like pirates on the ocean and they seek out slow ships and take them out.

Carlota Perez, who has often been cited in this blog (click here, here and here), is a disciple of Schumpeter. I really like the way the “dots” are connected: Schumpeter –> Perez –> Christensen –> Schumpeter. Their theories of disruption and creative destruction express themselves nicely in the business design proposed by Jeff.

Active Releases

with one comment

A Continuum

The traditional way of examining software from a life-cycle perspective is phase-by-phase. The software is developed; deployed; monitored; maintained; changed; and, eventually retired.

True though this description is, more and more executives these days actually view it all as one continuum. An application is developed, and then deployed and maintained as part of some business process. At a certain level it might not really matter to a business executive what the software life-cycle is and which party carries out what phase. The thing that matters is that some service is performed to customer satisfaction. One could actually do complete Business Process Outsourcing, chartering a third party to take care of all the “headaches” in the continuum – from coding a critical software component to repairing a delivery truck to answering calls like, “My shipment has not arrived on the promised date.”

This post looks inside the continuum to comment on the implications of increasing the number of releases. Aspects related to IT operations, IT service management and the customer as a strategic partner will be discussed in subsequent posts.

Number of Active Releases

The delivery of value to the customer is a fundamental tenet of Agile. The whole Agile development process is geared to that end. Customer value, however, is realized when the customer is able to start using the product. As deployment cycles for enterprise software can be quite long, time gained through Agile development is not necessarily of immediate value to the customer. For customer value to materialize, the deployment cycle needs to be fast as well.

Companies that use Agile successfully can be quite different with respect to deployment practices. Here is a quick comparison of BMC Software, PatientKeeper and Google:

  • The successful implementation of Scrum at BMC Software led to producing 3-4 releases of the BMC Performance Manager per year. However, time to deployment varied greatly from customer to customer. Hence, a relatively small number of releases was active at any point in time.
  • With PatientKeeper, Sutherland exploited the hyper-productivity of his Agile teams to produce a relatively large number of releases. When a customer needed a “release” it was downloaded via VPN and installed in fairly short order. Some 45 active releases of the software existed at any point in time.
  • Companies such as Google expedite deployment by using Software as a Service (SaaS) as the delivery mechanism. Google has only one actively deployed release at a time, but produces many fast releases.
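The three profiles above can be related with a back-of-the-envelope model: the number of releases active at once is roughly the release rate multiplied by how long each release stays in service at customers. The function and figures below are my own illustration, not data from the companies mentioned:

```python
def active_releases(releases_per_year: float, years_in_service: float) -> float:
    """Approximate number of simultaneously active releases."""
    return releases_per_year * years_in_service

# BMC-like profile: 3-4 releases a year, each staying in service about a year:
print(active_releases(3.5, 1.0))    # 3.5

# PatientKeeper-like profile: many rapid releases, each supported about a year:
print(active_releases(45, 1.0))     # 45.0

# SaaS profile: many releases, but each one live only until the next ships:
print(active_releases(50, 1 / 50))  # 1.0
```

The model makes the trade-off visible: shortening each release’s service life (as SaaS does) is what keeps a fast release cadence from ballooning the number of versions that must be supported at once.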

As one increases the number of releases, a reasonable balance must be struck between velocity of development and velocity of deployment in the continuum. To streamline end-to-end operations, velocity gains in one should be matched by gains in the other.

It does not Really Matter if you can Tell the Egg from the Chicken

One can speculate on how things evolve between development and deployment: whether Agile software development leads to improvement in deployment, or Software as a Service “deployment” induces faster development processes. The Agile philosophy is well expressed in either case. In either direction, the slower-speed area is considered an opportunity, not a barrier. A Software as a Service operations person who pushes for faster development speed lives the Agile philosophy even if he knows nothing about Agile methods.

Written by israelgat

February 10, 2009 at 2:58 pm

Scaling Agility: From 0 to 1000

with one comment

Walter Bodwell delivered an excellent presentation on the subject at Agile Austin last night. The presentation is unique in seeing both the “forest” and the “trees”. Walter addresses the operational day-to-day aspects of Scrum in the trenches while also providing insights on the roll-out at the executive level. Highly recommended!

Written by israelgat

February 4, 2009 at 9:25 am

Allocating Your Agile $$

leave a comment »

“50-50” is the rule of thumb many audiophiles use for configuring a good stereo system: 50% of the budget goes toward the speakers; the other 50% toward everything else in the system. The reason is quite simple: stereo systems succeed or fail on the merits of the speakers.

Whatever your Agile budget might be, a good starting point is to spend about 50% of it on training, consulting and coaching during the first year of the Agile rollout. This figure will probably go down to about 25% during the second year, and to precious little thereafter. Such a budget allocation was used at BMC Software in its Agile rollout and operation between 2004 and 2008: about 50% of the total Agile budget in 2005; 25% in 2006; single-digit percentages in 2007 and 2008.
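As a quick illustration, here is the rule of thumb applied to a hypothetical $1M annual Agile budget. The dollar figure and the single-digit percentages are assumptions for the sake of the example, not BMC’s actual numbers:

```python
yearly_budget = 1_000_000  # hypothetical total Agile budget per year, in dollars

# Assumed share spent on training, consulting and coaching, by roll-out year:
coaching_share = {"year 1": 0.50, "year 2": 0.25, "year 3": 0.08, "year 4": 0.05}

for year, share in coaching_share.items():
    print(f"{year}: ${yearly_budget * share:,.0f} on training/consulting/coaching")
```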

First Year Agile Budget

Starting with 50% for consulting, training and coaching during the first year is rooted in the software engineer being a craftsman. A craftsman learns and develops through apprenticeship; he learns from the masters. The popular Program With the Stars sessions in various software development conferences recognize the power of the apprenticeship paradigm. A good review of the power of such a session at the Agile 2008 conference can be found in the Working Together… with Technology post by Andrew Shafer.

If you accept the premise of the software engineer as a craftsman, you need to invest in consulting and coaching more than in training. A two- or three-day Scrum training class for numerous product managers, developers and testers in your company is, of course, a good start. However, it is the day-in, day-out consulting and coaching that will make the training applicable. There is no substitute for a competent Agile coach saying in the middle of a stand-up meeting, “Folks, what happened to our collaboration? We are not getting the benefits of the wisdom of teams.”

In addition to the coaching needs of the teams, you as an Agile executive will probably need some coaching by a sure-footed Agile executive coach. Topics should include the following:

  • Your role in the Agile roll-out
  • Deliverables you own in the Agile roll-out
  • Behaviors that are supportive of Agile
  • Identification of promising indicators for Agile roll-out
  • Identification of early warning signs of the Agile roll-out going the wrong way
  • Synchronizing release trains across multiple software development methods
  • Impact of Agile on downstream functions

Don’t think of these coaching items as a luxury you can’t afford. Rather, they are money well spent: the cost you as an executive need to pay for being on the Agile train. The train might leave the station without you if you do not invest in your Agile education.

Second and Third Year Agile Budgets

The rationale for reducing the investment in consulting, training and coaching in the second and third years is simple: various Agilists in your company will have become experts themselves. Needless to say, you will continue to invest in their skills by sending them to conferences such as Agile 2009. Much of the consulting and coaching, however, should over time be done by your home-grown Agilists.

Consider it an early warning sign if the assimilation of expertise in your company has not generated Agile experts in your teams within a couple of years. You could, of course, allocate your Agile $$ in the second and third year on the 50-50 basis recommended above for the first year. But, chances are something is not working well with your Agile roll-out.  The norm in successful Agile roll-outs is to harvest a whole bunch of quotes like the following quip made in 2006 by a QA Director with BMC Software:

 There had never been a thought towards returning to Waterfall. We only think about how to be more Agile, how to do this better. No one wants to go back!

Written by israelgat

January 22, 2009 at 10:47 am

The Core Principle Behind Agile

with 2 comments

Hyper-productive Agile teams, reaching twice, thrice and even higher levels of productivity compared to the industry average, have been reported by various consultants and practitioners, including Jeff Sutherland, Michael Mah and me. I run into a lot of questions about the reported case studies. Oftentimes I sense that the executives quizzing me about the accomplishments of BMC Software are wishing at some level to find something extraordinary that would explain how my project teams accomplished hyper-productivity. In other words, the reported productivity figures are sometimes considered too good to be true.

IMHO Agile hyper-productivity stems from a very simple universal principle: everyone on the team does only the most important things at any point in time. Effectiveness and efficiency are the results of systemic elimination of less important features, functions and tasks.

Various executives ask me the question “Should I adopt Lean Agile or should I do Scrum?” The answer I invariably give is “To my way of thinking, the two apply the same principle: Lean Agile focuses on eliminating waste; Scrum focuses on the elimination of ‘waste’ in the form of the less important.”

I will cover the “secret sauce” of BMC Software’s success with Agile methods in forthcoming posts. Before doing so, I would like the reader to come to terms with the following view: the secret sauce we used at BMC was “just” creating the environment within which the less important was eliminated. What we did was a successful implementation of the core principle: “Do only the most important things at any point in time”.

In a way, our secret sauce was grasping this fundamental principle and developing a whole socio-technical system to free the principle from the metaphorical chains of archaic methods.

Written by israelgat

January 15, 2009 at 5:53 pm

Posted in Starting Agile
