The Agile Executive

Making Agile Work

Standish Group Chaos Reports Revisited

with 11 comments

The Standish Group “Chaos” reports have been mentioned in various posts on this blog and elsewhere. The following figure from the 2002 study is quite representative of the data provided in the Standish annual surveys of the state of software projects:

[Figure: Standish Group feature-usage data from the 2002 study]

The January/February 2010 issue of IEEE Software features an article entitled The Rise and Fall of the Chaos Report Figures. The authors – J. Laurenz Eveleens and Chris Verhoef of the VU University, Amsterdam – give the following summary of their findings:

In 1994, Standish published the Chaos report that showed a shocking 16 percent project success. This and renewed figures by Standish are often used to indicate that project management of application software development is in trouble. However, Standish’s definitions have four major problems. First, they’re misleading because they’re based solely on estimation accuracy of cost, time, and functionality. Second, their estimation accuracy measure is one-sided, leading to unrealistic success rates. Third, steering on their definitions perverts good estimation practice. Fourth, the resulting figures are meaningless because they average numbers with an unknown bias, numbers that are introduced by different underlying estimation processes. The authors of this article applied Standish’s definitions to their own extensive data consisting of 5,457 forecasts of 1,211 real-world projects, totaling hundreds of millions of Euros. The Standish figures didn’t reflect the reality of the case studies at all.

I will leave it to the reader to draw his/her conclusions with respect to the differences between the Standish Group and the authors. I would, however, quote Jim Highsmith‘s deep insight on the value system within whose context we measure performance. The following excerpt is from Agile Project Management: Creating Innovative Products:

If we are ultimately to gain the full range of benefits of agile methods, if we are ultimately to grow truly agile, innovative organizations, then, as these stories show, we will have to alter our performance management systems…. We have to be as innovative with our measurement systems as we are with our development methodology.

See pp. 335-358 of Jim’s book for details on transforming performance management systems. His bottom line is deceptively simple:

The Standish data are NOT a good indicator of poor software development performance. However, they ARE an indicator of systemic failure of our planning and measurement processes.

Jim is referring to the standard definition of project “success”: on time, on budget, with all specified features.
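To make the criticism concrete, here is a minimal sketch of how a success metric based purely on estimation accuracy behaves. This is illustrative only; the thresholds, function, and field names are my own, not Standish’s published methodology:

```python
# Illustrative sketch of a Standish-style "success" classifier.
# The rule below paraphrases the commonly cited definition
# (on time, on budget, all specified features); all names here
# are hypothetical.

from dataclasses import dataclass

@dataclass
class Project:
    estimated_cost: float
    actual_cost: float
    estimated_days: int
    actual_days: int
    features_specified: int
    features_delivered: int

def standish_style_success(p: Project) -> bool:
    """A project 'succeeds' only if it never exceeds its estimates."""
    return (p.actual_cost <= p.estimated_cost
            and p.actual_days <= p.estimated_days
            and p.features_delivered >= p.features_specified)

# The measure is one-sided: padding the estimate turns the very same
# delivery into a "success" without improving performance at all.
honest = Project(100_000, 120_000, 90, 110, 50, 50)   # 20% overrun -> "failed"
padded = Project(200_000, 120_000, 180, 110, 50, 50)  # same delivery -> "succeeded"

print(standish_style_success(honest))  # False
print(standish_style_success(padded))  # True
```

Because over- and underestimation are scored asymmetrically, the resulting percentages say more about estimation (and estimate-padding) practices than about development performance, which is precisely the point Eveleens and Verhoef make.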

Later this month I will be working with a client to carry out the performance management ideas articulated by Jim. Jim indicated he has a customer engagement in February where he expects to learn about interesting ways in which the client is using the Agile Triangle (which is conceptually quite related to the fundamental question of what to measure). Client confidentiality permitting, I am confident we will soon be able to brief readers of The Agile Executive on our progress.


11 Responses


  1. In WhatIsFailure, Martin Fowler says: “Rather than saying that a project is failed because it is late, or has cost overruns – I would argue that it’s the estimate that failed. So the CHAOS report isn’t chronicling software project failure, it’s chronicling software estimation failure.”

    Philip Schwarz

    January 11, 2010 at 5:28 pm

  2. […] THE REASONS FOR PROJECT FAILURE?  – The updated 2001 list of reasons why projects fail from the Standish Group study is ranked in the far left column, along with a survey of members of the National Association for […]

  3. Hi Israel,

    The graphic you show in the blog is data from user surveys. It shows that in general internal IT systems are overbuilt from the user’s perspective. It is unrelated to the project success data that is disputed in the IEEE article. Why show an unrelated image? It increases confusion in your article.

    User research on Microsoft Office has reported even lower rates of features used compared to features built.

    Robin Dymond

    March 3, 2011 at 9:47 am

    • Not certain I get your point. The graphic in the post is the original graphic Standish used in their 2002 study.

      Israel

      israelgat

      March 3, 2011 at 9:36 pm

  4. […] design if the market has changed and the module will never be used.  A somewhat controversial  study by the Standish Group suggested that 64% of functionality delivered is ”rarely or never” used.  Some argue […]

  5. […] to measure” because these studies, published periodically since 1995, have recently been questioned, especially by proponents of agile development. While the techniques of the CHAOS studies can be […]

  6. […] they define success as meeting cost and time expectations. All this report is highlighting is the continued failure of […]

  7. […] Standish Group Study*, 64% of features/functions in a typical system are rarely or never used. (*as reported by Jim Johnson at […]

  8. […] Standish Group Chaos Reports Revisited « The Agile Executive […]

  9. […] only some major functions that users are supposed to check, its development takes less time. Some studies show us that the average consumer doesn’t utilize around 60% of all functions. This method allows […]

  10. Indeed.

    israelgat

    October 31, 2020 at 6:48 pm

