The Agile Executive

Making Agile Work

Posts Tagged ‘Highsmith’

Schedule Constraints in the Devops Triangle


Last week’s post “The Devops Triangle” demonstrated the extension of Jim Highsmith’s Agile Triangle to devops. The extension relied on adding compliance to the three traditional constraints of software development: scope, schedule, cost. A graphical representation of this extension is given in Figure 1.

Figure 1: Compliance as the Fourth Constraint in Devops Projects

This blog post examines how time/schedule should be governed in the devops context. It does so by building on the concluding observation in the previous post:

The Devops Triangle and the corresponding Tradeoff Matrix demonstrate how governance a la Agile can be extended to devops projects as far as compliance goes. The proposed governance framework, however, is incomplete in the following sense: schedule in devops projects can be a much more granular and stringent constraint than schedule in “dev only” projects.

For the schedule constraint in devops, I propose a schedule set. It consists of four components:

  • Lead Time or Engineering Time
  • Time to change
  • Time to deploy
  • Time to roll back

Lead Time/Engineering Time: These are customary metrics used in Kanban software development, as demonstrated in Figure 3.

Figure 3: The Engineering Time Metric Used by the BBC (David Joyce in the LSSC10 Conference)

Time to change: The amount of time it takes for the various stakeholders (e.g., dev, test, ops, customer support) to review the code to be deployed, approve its deployment and assign a time window for the deployment.

Time to deploy: The amount of time from (metaphorically speaking) pushing the Deploy “button” to completion of deployment.

Time to roll back: The amount of time to undo a deployment. (However rigorous the engineering practices and IT processes might be, the time to roll back a deployment can’t be ignored – it is a critical risk parameter.)

A graphical representation of these four schedule metrics together with the Devops Triangle is given in the figure below:

Figure 4: The Devops Triangle with a Schedule Set

Using hours as the common unit of measure, a typical schedule set could be {100, 48, 3, 2}. In this hypothetical example, it takes a little over 4 days to carry out the development of the code increment; 2 days to get approval for the change; 3 hours to deploy the code; and, 2 hours to roll back.
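
To make the schedule set concrete, here is a minimal Python sketch; the class and field names are illustrative assumptions, and the figures are the hypothetical set from the paragraph above.

    from dataclasses import dataclass

    @dataclass
    class ScheduleSet:
        """The four schedule components of a devops project, in hours."""
        lead_time: float          # Lead Time / Engineering Time
        time_to_change: float     # review, approval, deployment-window assignment
        time_to_deploy: float     # from pushing the Deploy "button" to completion
        time_to_roll_back: float  # undoing a deployment

        def in_days(self):
            """Report each component in (24-hour) days for quick reading."""
            return {name: round(hours / 24, 2) for name, hours in vars(self).items()}

    # The hypothetical schedule set {100, 48, 3, 2} discussed above
    example = ScheduleSet(lead_time=100, time_to_change=48, time_to_deploy=3, time_to_roll_back=2)
    print(example.in_days())
    # {'lead_time': 4.17, 'time_to_change': 2.0, 'time_to_deploy': 0.12, 'time_to_roll_back': 0.08}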

Whatever your specific schedule numbers might be, it is highly recommended that you apply value stream mapping (see Figure 5 below) to your schedule set. Based on the findings of the value stream mapping, apply statistical process control methods like those illustrated in Figure 3 to continuously improve both the means and the variances of the four schedule components.

Figure 5: An Example of Value Stream Mapping (Source: Wikipedia entry on the subject)
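
As a hedged illustration of the statistical process control step, the following sketch computes the mean and 3-sigma control limits for one schedule component across recent deployments (the sample data are made up); a point falling outside the limits is the kind of signal worth tracing back through the value stream map.

    from statistics import mean, stdev

    # Hypothetical baseline: "time to deploy" (hours) for recent routine deployments
    baseline = [3.1, 2.8, 3.4, 2.9, 3.0, 3.3, 3.2, 2.7]

    def control_limits(samples, sigmas=3):
        """Return (mean, lower control limit, upper control limit)."""
        m, s = mean(samples), stdev(samples)
        return m, m - sigmas * s, m + sigmas * s

    m, lcl, ucl = control_limits(baseline)
    latest = 5.6  # the most recent deployment took 5.6 hours
    if not lcl <= latest <= ucl:
        print(f"Out of control: {latest}h vs mean {m:.2f}h, limits ({lcl:.2f}h, {ucl:.2f}h)")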

The Devops Triangle


The Agile Triangle was introduced by Jim Highsmith as an antidote to the Iron Triangle. Instead of balancing development between cost, schedule and scope, the Agile Triangle strives to strike a balance between value, quality and constraints:

Figure 1 – The Agile Triangle (based on Figure 1-3 in Agile Project Management: Creating Innovative Products.)

Consider the Agile Triangle in the context of devops. Value, quality and constraints apply to IT operations as meaningfully as they apply to software development. IT can go beyond cost, schedule and scope to focus on value and quality just as the Agile software development team does. Between development and operations the specific tasks to be carried out change, but the principles embodied in the triangle remain invariant.

In addition to cost, schedule and scope, devops projects must cope with another constraint: compliance. For example, a bank that implements a ‘follow the sun’ strategy with respect to trading must finish reconciling transactions that took place in London before the start of trading on Wall Street. From the bank’s point of view, its IT department needs to be mindful of four constraints: compliance, cost, schedule and scope. This view is represented in Figure 2 below.

Figure 2 – The Devops Triangle

Balancing the four constraints – compliance, cost, schedule, and scope – is not a trivial task. However, just like the Agile Triangle, the Tradeoff Matrix used in Agile software development applies to IT. In its software development variant, the Tradeoff Matrix is an effective tool to decide between conflicting constraints, as follows:

Table 1 – Tradeoff Matrix (based on Table 6-1 in Agile Project Management: Creating Innovative Products.)

For devops, the matrix is extended to include a compliance row and a Reluctantly Accept column as follows:

Table 2 – Tradeoff Matrix for Devops
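
In code, one way to picture the extended matrix is as a mapping from constraint to tradeoff decision. The sketch below is an illustration built on assumptions rather than a transcription of Table 2: the Fixed/Flexible/Accept columns come from the standard Tradeoff Matrix, Reluctantly Accept and the compliance row are the devops additions described above, and the rule that only one constraint may be Fixed is one common way the matrix is applied.

    from enum import Enum

    class Decision(Enum):
        FIXED = "Fixed"
        FLEXIBLE = "Flexible"
        ACCEPT = "Accept"
        RELUCTANTLY_ACCEPT = "Reluctantly Accept"  # column added for devops

    # A hypothetical devops tradeoff matrix; compliance is the row added for devops
    tradeoff_matrix = {
        "compliance": Decision.FIXED,
        "scope": Decision.FLEXIBLE,
        "schedule": Decision.RELUCTANTLY_ACCEPT,
        "cost": Decision.ACCEPT,
    }

    def check(matrix, max_fixed=1):
        """Flag a matrix that tries to fix too many constraints at once."""
        fixed = [c for c, d in matrix.items() if d is Decision.FIXED]
        if len(fixed) > max_fixed:
            raise ValueError(f"Too many fixed constraints: {fixed}")
        return matrix

    check(tradeoff_matrix)  # passes: only compliance is fixed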

The Devops Triangle and the corresponding Tradeoff Matrix demonstrate how governance a la Agile can be extended to devops projects as far as compliance goes. The proposed governance framework, however, is incomplete in the following sense: schedule in devops projects can be a much more granular and stringent constraint than schedule in “dev only” projects. The subject of schedule constraints in devops projects will be addressed in a forthcoming post.

Technical Debt at Cutter


No, this post is not about technical debt we identified in the software systems used by the Cutter Consortium to drive numerous publications, events and engagements. Rather, it is about various activities carried out at Cutter to enhance the state of the art and make the know-how available to a broad spectrum of IT professionals who can use technical debt engagements to pursue technical and business opportunities.

The recently announced Cutter Technical Debt Assessment and Valuation service is, IMHO, unique:

  1. It is rooted in Agile principles and theory but applicable to any software method.
  2. It combines the passion, empowerment and collaboration of Agile with the rigor of quantified performance measures, process control techniques and strategic portfolio management.
  3. It is focused on enlightened governance through three simple metrics: net present value, cost and technical debt.

Here are some details on our current technical debt activities:

  1. John Heintz joined the Cutter Consortium and will be devoting a significant part of his time to technical debt work. I was privileged and honored to collaborate with colleagues Ken Collier, Jonathon Golden and Chris Sterling in various technical debt engagements. I can’t wait to work with them, John and other Cutter consultants on forthcoming engagements.
  2. John and I will be jointly presenting on the subject Toxic Code at the Agile Roots conference next week. In this presentation we will demonstrate how the hard lessons learned during the sub-prime loan crisis apply to software development. For example, we will be discussing development on margin…
  3. My Executive Report entitled Revolution in Software: Using Technical Debt Techniques to Govern the Software Development Process will be sent to Cutter clients in the late June/early July time-frame. I don’t think I have ever worked so hard on a paper. The best part is that it was a labor of love…
  4. The main exercise in my Agile 2010 workshop How We Do Things Around Here in Order to Succeed is about applying Agile governance through technical debt techniques across organizations and cultures. Expect a lot of fun in this exercise no matter what your corporate culture might be – Control, Competence, Cultivation or Collaboration.
  5. John and I will be doing a Cutter webinar on Reining in Technical Debt on Thursday, August 19 at 12 noon EDT. Click here for details.
  6. A Cutter IT Journal (CITJ) on the subject of technical debt will be published in the September-October time-frame. I am the guest editor for this issue of the CITJ. We have nine great contributors who will examine technical debt from just about every possible perspective. I doubt that we have the ‘real estate’ for additional contributions, but do drop me a note if you have intriguing ideas about technical debt. I will do my best to incorporate your thoughts with proper attribution in my editorial preamble for this issue of the CITJ.
  7. Jim Highsmith and I will jointly deliver a seminar entitled Technical Debt Assessment: The Science of Software Development Governance in the forthcoming Cutter Summit. This is really a wonderful ‘closing of the loop’ for me: my interest in technical debt was triggered by Jim’s presentation How to Be an Agile Leader in the Agile 2006 conference.

Standing back to reflect on where we are with respect to technical debt at Cutter, I see a lot of things coming nicely together: Agile, technical debt, governance, risk management, devops, etc. I am not certain where the confluence of all these threads, and possibly others, might lead us. However, I already enjoy the adrenaline rush this confluence evokes in me…

How Many Metrics do You Need to Effectively Govern the Software Process?


A Simple Metrics-Driven Software Governance Framework Based on Jim Highsmith’s Agile Triangle Framework

In my recent Cutter Blog post entitled Three Governance Metrics I recommended using just three metrics:

  • Value
  • Cost
  • Technical debt

The heart of this recommendation is that all three can be expressed in dollar terms, as depicted in the figure above. An apples-to-apples comparison is made through the common denominator – $$. For example, something is likely to be wrong – technically, methodologically or governance-wise – if the technical debt figure exceeds the cost figure for a prolonged period of time. One can actually characterize such a situation as accruing debt faster than building equity.
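
A minimal sketch of this apples-to-apples comparison (all dollar figures below are invented) shows why the common denominator matters: once value, cost and technical debt share a unit, checking whether debt has exceeded cost for a prolonged period becomes a few lines of code.

    # Quarterly governance metrics, all in dollars (hypothetical figures)
    quarters = [
        {"quarter": "Q1", "value": 1_800_000, "cost": 700_000, "technical_debt": 500_000},
        {"quarter": "Q2", "value": 2_100_000, "cost": 800_000, "technical_debt": 900_000},
        {"quarter": "Q3", "value": 2_400_000, "cost": 900_000, "technical_debt": 1_100_000},
    ]

    def debt_outpaces_cost(history, min_periods=2):
        """True if technical debt has exceeded cost for a prolonged (consecutive) stretch."""
        streak = 0
        for period in history:
            streak = streak + 1 if period["technical_debt"] > period["cost"] else 0
        return streak >= min_periods

    print(debt_outpaces_cost(quarters))  # True: debt accrued faster than equity in Q2 and Q3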

I am often asked about adding metrics to this simple governance framework. For example, should not productivity be included in the framework?

‘Less is more’ is my usual response to such questions. IMHO value, cost and technical debt address the most important high level governance considerations:

  • Value –> Why are we doing the project?
  • Cost –> Can we afford the project?
  • Technical debt –> Is the execution risk acceptable?

Please pay special attention to the unit of measure of any metric you might add to this simple governance framework. As long as the metric is a dollar-based metric, the cohesion of the governance framework can be maintained. However, metrics which are not expressed in dollars will probably superimpose other frameworks on top of the simple governance framework. For example, you introduce a programming framework if you add a productivity metric measured in function points per man-month. Sponsors who govern using value, cost, technical debt and productivity will need to mentally alternate between the simple governance framework and the programming framework whenever they try to combine the productivity metric with any of the other three metrics.

The System Assumes the System is Right


The post It Won’t Work Here proposed tactics for dealing with a specific class of arguments for why a company should not adopt Agile:

  • Uniqueness: “Some very unique elements exist in our company. These elements render industry data inapplicable.”
  • Secret sauce: “Some very special element existed in the companies reporting great success with Agile. Our company neither possesses nor has access to the ‘secret sauce’ that enabled success elsewhere.”
  • Cultural change: “For the Agile initiative to succeed, our corporate culture needs to change. The required cultural change takes a lot of time and involves a great deal of pain. All in all, the risk of rolling out Agile is unacceptably high.”
  • Affordability: “The company is strapped to the degree that investment in another software method is a luxury it can’t afford.”
  • Software is not core to us: “We are not a software company, nor is software engineering our core competency. Software is merely one of the many elements we use in our business.”

This post augments It Won’t Work Here by shedding light on the reflexive behavior of the system that the Agile champion is likely to run into while applying the recommended tactics.

The fundamental distinction the Agile champion needs to keep in mind is between individuals and organizations. To quote from Charles Perrow’s work on the subject of complex organizations:

One cannot explain organizations by explaining the attitudes and behavior of individuals or even small groups within them. We learn a great deal about psychology and social psychology but little about organizations per se in this fashion.

To successfully function as an entity, an organization must assume that the system put in place to carry out its tasks is right. If the system is not right, the order and integration required for the proper functioning of the organization can’t be maintained. This assumption about being right tends to become all-inclusive. It is applied to both essential tasks and non-essential trivia.

The key to the success of Agile adoption in the face of the “system is right” reflex is to budget your battles. As an Agile champion you fight only for things that are core to Agile methods, not for the myriad contextual details about which the system assumes it is right.

For example, I would not spend a lot of energy on determining the unit of measure to be used in the Agile initiative. As an Agilist I prefer stories as the standard unit, e.g. release 2.0 was 500 stories while release 3.0 constitutes 1,000 stories. But I would accept lines of code or function points as the standard unit of measure if the system so prefers.

Conversely, I would not accept system convictions when they violate core principles of Agile. For example, the Agile Triangle advocated by Jim Highsmith views scope, schedule and cost as constraints. This way of viewing Agile software development is non-negotiable in my book. The reason for this strongly held conviction of mine is simple – the Agile initiative is likely to fail if the three (scope, schedule, cost) are treated as goals.

Source: based on Figure 1-3 in Jim Highsmith’s Agile Project Management: Creating Innovative Products.

Definition: Agile Methodology


Agile Methodology is actually a bit of a controversial term. Various authors consider Agile a method, as distinct from a methodology. Others prefer methodology over method. For example, using the Merriam-Webster dictionaries, Alistair Cockburn makes the following distinction between methodology and method in Agile Software Development: The Cooperative Game:

  • Methodology: A series of related methods or techniques
  • Method: Systematic procedure

Alistair views Agile as a methodology in the sense defined above. For example, he discusses Crystal as a family of methodologies. The reader is referred to Alistair’s book for an excellent analysis of the various aspects of methodologies. As a matter of fact, Alistair traces the confusion between method and methodology to certain inconsistencies between various versions of the Oxford English Dictionary.

On the other hand, as best I can tell from various conversations with him, Jim Highsmith seems to prefer the term Agile Method. This preference is reflected in Agile Project Management: Creating Innovative Products. It is possible that Jim’s preference is due to writing his book from a project management perspective.

Rather than going in depth into the method-versus-methodology controversy, I will simply cite two definitions I find useful in capturing the essence of Agile methodology, or method if you prefer.

An interesting metaphor for Agile has been used by Jim Highsmith in a 2009 Cutter Advisory:

Visualize a house structure with a roof, a foundation, and three pillars… The roof is business goals — the rationale for implementing agile methods and scaling to larger agile projects. The foundation is agile values or principles — principles that need careful interpretation as to how to apply them to larger teams. And finally, the three pillars: organization, product backlog, and process/practice.

The simplicity of the metaphor makes it quite effective in communicating what Agile is in a concise way without losing the richness of the various elements in Agile.

Using Scrum as an example, colleague David Spann gives the following down-to-earth summary of the key structural components of Agile in a 2008 Cutter Executive Report:

Scrum, as a management methodology, is elegant in its design, requiring only three roles (i.e., product owner, ScrumMaster, and self-organized team), three ceremonies (sprint/iteration planning, daily Scrum/debrief, and sprint review meetings), and three artifacts (product and sprint backlogs and the burndown chart) — just-enough practical advice so agile teams do not overcomplicate the development lifecycle with too much ceremony and documentation.

Needless to say, the structural elements will change from one Agile methodology to another. However, examining an Agile methodology through the {roles, ceremonies, artifacts} “lens” is an excellent way to summarize it. Furthermore, it enables easy comparison between the ‘usual suspects’ of Agile – Crystal Methods, Dynamic Systems Development, Extreme Programming, Feature Driven Development, and Kanban. The reader is referred to The Business Value of Agile Software Methods: Maximizing ROI with Just-in-Time Processes and Documentation for detailed comparisons between the various methods/methodologies.
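
The {roles, ceremonies, artifacts} lens lends itself to a very small data structure. The sketch below simply encodes the Scrum summary quoted above; entries for any other methodology would be the reader’s own assumptions to fill in.

    # The {roles, ceremonies, artifacts} lens applied to Scrum, per the quote above
    scrum = {
        "roles": ["product owner", "ScrumMaster", "self-organized team"],
        "ceremonies": ["sprint/iteration planning", "daily Scrum/debrief", "sprint review"],
        "artifacts": ["product backlog", "sprint backlog", "burndown chart"],
    }

    def summarize(name, methodology):
        """Print one line per dimension, handy for side-by-side comparison of methodologies."""
        for dimension in ("roles", "ceremonies", "artifacts"):
            print(f"{name} {dimension}: {', '.join(methodology[dimension])}")

    summarize("Scrum", scrum)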

Standish Group Chaos Reports Revisited


The Standish Group “Chaos” reports have been mentioned in various posts in this blog and elsewhere. The following figure from the 2002 study is quite representative of the data provided in the Standish annual surveys of the state of software projects:

Figure: Standish Group Chaos survey data (2002 study)

The January/February 2010 issue of IEEE Software features an article entitled The Rise and Fall of the Chaos Report Figures. The authors – J. Laurenz Eveleens and Chris Verhoef of the VU University, Amsterdam – give the following summary of their findings:

In 1994, Standish published the Chaos report that showed a shocking 16 percent project success. This and renewed figures by Standish are often used to indicate that project management of application software development is in trouble. However, Standish’s definitions have four major problems. First, they’re misleading because they’re based solely on estimation accuracy of cost, time, and functionality. Second, their estimation accuracy measure is one-sided, leading to unrealistic success rates. Third, steering on their definitions perverts good estimation practice. Fourth, the resulting figures are meaningless because they average numbers with an unknown bias, numbers that are introduced by different underlying estimation processes. The authors of this article applied Standish’s definitions to their own extensive data consisting of 5,457 forecasts of 1,211 real-world projects, totaling hundreds of millions of Euros. The Standish figures didn’t reflect the reality of the case studies at all.

I will leave it to the reader to draw his/her own conclusions with respect to the differences between the Standish Group and the authors. I would, however, quote Jim Highsmith’s deep insight into the value system within whose context we measure performance. The following excerpt is from Agile Project Management: Creating Innovative Products:

If we are ultimately to gain the full range of benefits of agile methods, if we are ultimately to grow truly agile, innovative organizations, then, as these stories show, we will have to alter our performance management systems…. We have to be as innovative with our measurement systems as we are with our development methodology.

See pp. 335-358 of Jim’s book for details on transforming performance management systems. His bottom line is deceptively simple:

The Standish data are NOT a good indicator of poor software development performance. However, they ARE an indicator of systemic failure of our planning and measurement processes.

Jim is referring to the standard definition of project “success” – on time, on budget, all specified features.
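
To make the definition Jim is criticizing concrete, here is a minimal sketch of the classic on-time, on-budget, all-features test (names and figures are invented). As Eveleens and Verhoef point out, everything hinges on the estimates: identical actuals classify as “successful” or “challenged” depending solely on how the forecast was padded.

    def standish_classification(estimated_cost, actual_cost,
                                estimated_months, actual_months,
                                planned_features, delivered_features):
        """Classic 'success' test: on time, on budget, all specified features."""
        on_budget = actual_cost <= estimated_cost
        on_time = actual_months <= estimated_months
        all_features = delivered_features >= planned_features
        return "successful" if (on_budget and on_time and all_features) else "challenged"

    # Same actual outcome, different estimates -- different verdict
    print(standish_classification(1_000_000, 950_000, 12, 11, 100, 100))  # successful
    print(standish_classification(800_000, 950_000, 10, 11, 100, 100))    # challenged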

Later this month I will be working with a client to carry out the performance management ideas articulated by Jim. Jim indicated he has a customer engagement in February where he expects to learn about interesting ways in which the client is using the Agile Triangle (which is conceptually quite related to the fundamental question of what to measure). Client confidentiality permitting, I am confident we will soon be able to brief readers of The Agile Executive on our progress.