Estimation is at the root of most software project failures


I believe estimation, and the way it’s regularly misused, is at the root of the majority of software project failures. In this article I will try to explain why we’re so bad at it and also why, once you’ve fallen into the trap of trying to do anything but near future estimation, no amount of TDD, Continuous Delivery or *insert your latest favourite practice here* will help. The simple truth is we’ll never be very good at estimation, mostly for reasons outside of our control. However many prominent people in the Agile community still talk about estimation as if it’s a problem we can solve and whilst some of us are fortunate to work in organisations who appreciate the unpredictable nature of software development, most are not.

Some Background

I recently tweeted something which clearly resonated with a lot of people:

"With software estimation you've only realistically got a choice of 5 mins, 1 hour, 1-2 days, about a week, and then all bets are off."

It was in response to an article Mike Cohn posted proposing a technique for providing estimates for a full backlog of 300(!) stories when you haven’t yet got any historical data on the team’s performance. The article does provide caveats and says it isn’t ideal, but I think it’s extremely harmful to even suggest that this is something you could do.

I’ve long had a fascination with estimation and particularly why we’re so bad at it. It started when I was involved in a software project which, whilst in many respects was a success, was also a miserable failure resulting in a nasty witch hunt and blame being put at the foot of those who provided the estimates.

I’ve previously written about the estimation fallacy, that formative experience, and ran my “Dealing with the Estimation Fallacy” session at two or three conferences, including SPA 2009. There are also plenty more estimation-related articles in the archive.


Why we’ll always be crap at estimating

“It always takes longer than you expect, even when you take into account Hofstadter’s Law.”
– Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid

Since my rocky experiences with estimation and my subsequent research into the area, I’ve gone to great lengths to try to improve the way estimation occurs. At my current organisation we collect data from all our teams on how long work items take (cycle time) and have this going back over two years. It’s a veritable mountain of data from which we’re able to ascertain the average and standard deviation for cycle times.

An example of some cycle times from a team's work data
We can then take a list of feature requests, do some high level t-shirt size estimation on them and should comfortably be able to give a range of dates within which the team is most likely to reach particular milestones (the range represents the uncertainty – low, medium and high based on the average +/- the standard deviation, smart eh?).
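As a rough sketch of the kind of projection described above, the following uses made-up cycle times and t-shirt multipliers (all the numbers here are hypothetical, not our actual team data) to turn a sized backlog into a low/medium/high milestone range based on the average plus or minus one standard deviation:

```python
import statistics
from datetime import date, timedelta

# Hypothetical historical cycle times (in days) for a team's work items.
cycle_times = [3, 5, 2, 8, 4, 6, 3, 9, 5, 4, 7, 2]

mean = statistics.mean(cycle_times)   # average cycle time
sd = statistics.pstdev(cycle_times)   # standard deviation of cycle times

# T-shirt sizes expressed as multiples of one "average" work item
# (the multipliers are illustrative, not a recommendation).
tshirt_multiplier = {"S": 0.5, "M": 1.0, "L": 2.0}

def milestone_range(sizes, start):
    """Return (optimistic, expected, pessimistic) dates for a sized
    backlog, using mean -/+ one standard deviation per work item."""
    items = sum(tshirt_multiplier[s] for s in sizes)
    low = start + timedelta(days=items * max(mean - sd, 0))
    mid = start + timedelta(days=items * mean)
    high = start + timedelta(days=items * (mean + sd))
    return low, mid, high

low, mid, high = milestone_range(["S", "M", "M", "L"], date(2024, 1, 1))
print(f"optimistic {low}, expected {mid}, pessimistic {high}")
```

Of course, as the rest of this article argues, a range like this only reflects the work you already know about; the unknowns never make it into the model.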



We were finding that unless we were talking about stuff coming up in the next few weeks – by which point we’d generally done more detailed analysis – we were either at the far end of the estimation range or missing the predicted milestone dates altogether. It was almost always taking longer than we expected. You could argue that it was poor productivity on the part of the developers, but the projections were based on their previous performance so it’s a difficult one to argue, especially as our data was also suggesting that the cycle times for work items had actually been going down!

There are two main reasons things were taking longer than expected:

1. Cognitive Bias

From Wikipedia: “Cognitive bias is a general term that is used to describe many observer effects in the human mind, some of which can lead to perceptual distortion, inaccurate judgment, or illogical interpretation”.

In terms of estimation, cognitive bias manifests itself in the form of Optimism Bias: “the demonstrated systematic tendency for people to be overly optimistic about the outcome of planned actions”.

There is also the Planning Fallacy: “a tendency for people and organizations to underestimate how long they will need to complete a task, even when they have past experience of similar tasks over-running”.

It’s a huge problem, especially when we’re trying to do long range high level estimation. Our brains are hard-wired not only to be optimistic about our estimates, but also to tend towards thinking only of the best-case scenarios. There is very little we can do about this apart from being aware that it happens.

It’s only when you get round to doing the proper analysis of the work that you start seeing other things you need to do. I remember Dan North once saying in a presentation that estimation is like “measuring the coast of Britain”, the closer you look the more edges you’ll see and the longer it gets.

2. “Known unknowns and unknown unknowns”

Donald Rumsfeld got soundly mocked for his infamous “unknown unknowns” quote, but he was right. With software development, just like any Complex Adaptive System, there are many things you simply cannot plan for. We can at best be aware there will be pieces of work we’ve not accounted for in our estimations (known unknowns), but we’ve got no way of knowing how much work they’ll represent. The best we can do is know they’re there. The unknown unknowns? Well they’re even harder to predict :)

Why estimation can be so harmful

Estimates are rarely ever just estimates

There is always a huge amount of pressure to know when things will be done and with good reason. People want to be able to do their jobs well and it’s hard if they can’t plan for when things they’re dependent upon will be ready. This is why an estimate is rarely ever “just” an estimate. People use this information, otherwise they wouldn’t be so keen to know in the first place. You can tell them all you like that they can’t be relied upon, but they will be. As Dan North recently tweeted, “people asking for control or visibility really want certainty. Which doesn’t exist”.

Some of us are fortunate to breathe the rarefied air of an enlightened organisation where the unpredictable nature of software development is generally accepted, but let’s face it, most are not.

Re-framing the definition of success (or highlighting the failure of estimation)

The most significant impact of providing estimates is that as soon as they’re in the public domain they have a habit of changing the focus of the project, especially when dates start slipping. Invariably a project of any reasonable duration, which started with the honourable objective of solving a business problem, quickly changes focus to when it will be completed.

Well, there’s actually a bigger problem here already. For most organisations the definition of success remains “on time and on budget”, not whether the project has made money, or improved productivity, or brought in new customers. It is often said the vast majority of software projects are considered failures, and the Standish Chaos Report is usually cited here. The problem is even they define success as meeting cost and time expectations. All this report is highlighting is the continued failure of estimation.

At best providing long range estimates just supports the “on time, on budget” mentality. At worst it takes projects started with the best intentions and drags them back down to this pit of almost inevitable failure and you – as the provider of the estimate – along with it.

The consequences…

All the typical issues dogging software development raise their heads when estimation lets us down. It’s the primary cause of death marches. Corners get cut, internal quality gets compromised (for the supposed sake of speed), people start focusing more on how the team works, pressure mounts to “work faster”, morale drops and so on – all resulting in a detrimental impact on the original objective (unless it was to be on time and on budget of course, but as we know even then…).

It’s a vicious cycle repeated in organisations all over the world, over and over again. It’s my strong belief that the main reason it keeps occurring is that we can’t escape our obsession with estimation.

“But we can’t just not provide estimates”

This is the typical response when I say we should start being brave and just saying no to requests to provide estimates on anything more than one or two months’ (at most) work. My answer to that is: if there’s as much chance of you coming up with something meaningful by rolling some dice or rubbing the estimate goat, then what purpose are you satisfying by doing so?

What’s the least worst situation to be in? An uncomfortable conversation with your boss now, or when – based on your estimates – the project’s half a million over budget and six months late?


Some people still seem to want to cling on to the idea that estimation is a valuable activity, and particularly that we are able to provide meaningful long range estimates, but as I’ve explained here it’s a fool’s errand. My advice to anyone who’s put in the position of providing long range estimates is to be brave and simply say it’s not something we can do, and then do your best to explain why. It’s against the spirit of Agile software development, but that’s not even the point; mostly it’s just in no one’s best interests – well, unless you’re a software agency which relies on expensive change requests to keep the purse full, but that’s a whole different story.

Epilogue – what do we do instead?

It’s obviously no good to just say “no” and walk away, and I’m certainly not advocating that, but it’s also not the point of this article to try and explain what the alternatives are. There are a plethora of options out there, especially in Agile and Lean related literature and from well known Agile and Lean proponents. But crikey, if there’s one thing that everyone within our passionate communities can agree on, it’s that the best software comes from iterative, feedback driven development – something which is completely at odds with the way estimation is currently being used by most organisations.



14 thoughts on “Estimation is at the root of most software project failures”

  1. David Lowe

    Hi Rob,

    Nice post. I’m not going to rekindle the same debate that has already gone on above, but I’d like to extend the discussion to the more micro estimates that go into building a burn down chart. I had an exchange with Mike Cohn recently in response to my post where I suggested we should “burn the burn down”.

    I’d be interested to hear your thoughts.

  2. Pingback: Realistic and Practical Story Estimation | OpenView Blog

  3. Pingback: The Hardest Challenges of Implementing Scrum, #4: Realistic and Practical Estimation - Welcome to Openview Labs

  4. Paul Dyson

    Nice article Rob, clear and well-written. There’s an awful lot I can agree with … but not the conclusion. Here are two real-world questions I’ve had to answer, one when I ran a consultancy delivering bespoke software and one in my current product/SaaS business (but they’re examples, I’ve had to answer questions like this on a regular and frequent basis all through my professional life):

    1. “We [Philips] have committed to our biggest ever marketing spend on sponsoring the World Cup. We need to deliver an extension to the existing system to support this. Can it be done and what do you need in order to do it?” [answer: yes and 9 months and a team of 20(ish)]

    2. “A [for Singletrack] big customer wants to buy our product but they need to extend it an awful lot. These extensions will teach us a lot about the business domain and we’ll be able to roll much of it back into the product. How much should we charge?” [answer: enough to cover the cost of a team of 2.5 for 6 months]

    Maybe I’m not that clever but the way I answered both questions was by doing a bunch of estimation. For 1. I locked myself and three senior devs in a room for 3 days and we estimated the entire project in terms of S (1 day), M (2-3 days) or L (4-6 days). Where there was uncertainty we either did a spike during that time period or estimated the worst-case risk and factored that in. For 2. we did something similar with the whole Singletrack dev team.

    What we didn’t then do was set these estimates in stone and beat ourselves up when they turned out to be wrong. We didn’t promise the customers that the estimates were perfect. What we did say was, given what we know today, we should be able to meet Philips’ deadline and Singletrack should be able to make a profit if we charge £XXXk.

    Then we managed by exception, keeping an eye on the priority objectives (a hard deadline in Philips’ case, a balance of not making a loss and a happy customer in Singletrack’s) and responding to change as we went.

    I don’t see how this is at odds with Agile or Lean. I don’t see how we wasted the time spent estimating. We didn’t make any promises we couldn’t keep (although we did make commitments which is a different thing and a part of XP practice that seems to be fading from view IMO). But we were able to provide useful answers – “I don’t know, let’s see how it goes” just wasn’t acceptable in either case. It wasn’t luck that I was working with/in organisations that understood the difference between estimates and promises or that what we knew at one point in time might not hold true at a later point. I made sure this was the case: it is common understanding in running businesses and I’ve always found it to be a relatively easy conversation to have about why software delivery is no different from any other business process affected by change and uncertainty.

    As a reflective practitioner I like estimating. There are many things you get from estimates apart from the numbers (I wrote about this some time ago), but even the numbers tell you a lot about what you do and don’t know. For example I got *really* good at estimating delivery on a particular technology after working with it for 12 years but was shockingly bad when I moved to a brand new set of cloud technologies (what a surprise!). By estimating, and reflecting on my estimates, I’ve got a lot better quite quickly and now me and the Singletrack dev team can answer questions like 2. without great risk of it bankrupting the company. To me this is a clear demonstration of our shift from ignorance to competence to a degree of mastery.

    I’m afraid I find it hard to accept that ‘we’re so bad at estimating’ when I know so many people who are good at it and I’ve seen many people get much better with a bit of practice. And I guess this is what motivates me to take part in the debate rather than just keeping quiet and carrying on doing what works for me. Estimation is a learned skill, having some idea of how long a piece of work will take (or why you really have no idea) is part of being a skilled practitioner, getting better at estimation is part of a wider learning process. And if we actually get better at estimating, get better at explaining what estimates are, and get better at ensuring estimates are used correctly, we can better support businesses that sometimes do have to work to deadlines and budgets. The alternative is to say that businesses who have deadlines and budgets are doing it wrong and need to fall in line with us developers and I just don’t buy that.

    Sorry for monopolising your comments page but there’s so much interesting and good stuff here a simple answer wouldn’t do it justice.



  5. Dan Rough

    Hey Rob,

    Great article; well written and enjoyable, I’m also going to have to take a contradictory stance to its conclusion though.

    Estimating is a valuable activity as it provides you, and those requesting the estimate, with information that itself holds value. There’s a cost associated with that information, of course, and there is absolutely a point past which a continued investment in estimating will outweigh the value of the information received (most likely a U-curve optimisation). Knowing where that trade-off lies is key, and being able to articulate it as a trade-off is just as important, in my opinion. By way of an example, I remember a time at 7digital, early on, when we chose to divert the focus of one team so that it worked with another based on a set of estimates we had received for a piece of work – the Blackberry services work if I remember correctly. That information was of immense value to the company at the time, and the cost associated with getting it was relatively low. I do appreciate your point about enlightened organisations, of which I’d certainly consider 7digital one, too.

    I currently believe that the value derived from estimating is that it helps you identify the size of something, and consequently whether it needs to be broken down so that it is smaller in size. I used to think that estimating could be used to provide options, but now I tend to think that you only need options because it takes too long to test the assumptions provided in the idea that you’re wanting to develop i.e. to get feedback.

    I’m not sure either that it’s the way that estimates are misused that is the root of the problem that you express, instead I think that their misuse is in fact a symptom of a deeper issue – the typical relationship that exists between IT departments / software vendors and the rest of their business.

    Hopefully we’ll get to discuss further over a beer soon, Dan.

  6. Matt Jackson

    Sorry gentlemen, I’m with Rob.

    What is the point of a number that *may* be right 68% (or even 95%) of the time, and often will be wildly outside that range of one or two standard deviations?

    There are lots of pieces of work in a software project. Lots of potential “outliers”. What do you say to the customer when you have a few pieces of work out of 30 or 50 that you didn’t estimate correctly and things run a month, two months or 6 months late? “Oh we were just unlucky”, or “Oh there were a few unseen problems”?.

    What you tell your clients and how you monetize the project is not something I have an answer to, it differs from company to company and client to client, but long range forward estimation is certainly not something I think is valuable.

  7. Dan Rough

    Matt, I might not have made my point sufficiently, so let me reiterate in a more concise manner. I’m not suggesting that the resultant number is where the majority of the value derived from estimating lies. In my opinion, the activities that you undertake to get to the number are where the majority of the value lies. It’s those activities that give you early feedback, the information that I refer to above, and that is where the value lies.

    I’m in part agreement with Rob about the misuse of estimates and how that is harmful. Where I differ though is that I don’t think that it’s the root of the problem, instead a symptom of a deeper problem, the typically poor relationship between IT vendors and their customers.

  8. rob Post author

    Hi Paul,

    I can’t argue with your experiences and if you’ve managed to make estimation work for you that’s only a good thing, however I don’t think that’s enough to convince me that, for most people and in most ways it’s used, estimation is working or that we can get around some pretty fundamental universal laws of nature.

    One thing you mention is how through practice you can get better at estimation, which is no doubt true, especially if you’re aware of cognitive bias and so on. However the difference between estimation and a skill like TDD is the consequences are much worse if you get it wrong. The skill of estimating is not only in improving the reliability of your estimates, but how you manage their use within the business. In many places that is something which is outside of your control, especially if you are “just” a lowly developer in a large bureaucratic organisation.

    Paul, you are no doubt an extremely experienced practitioner who, regardless of the organisation you’ve worked with, has managed to make estimation work for you. I have to admit I have too, but certainly not in the way that most people are using it or most people who advocate estimation talk about it. The main gateway into Agile remains Scrum, and the second course people tend to go on after CSM is Scrum Estimating and Planning. Having been on one of these courses myself, I can attest to the fact that what they’re teaching is downright harmful – that somehow through the magic power of story points, velocity and burn down charts you can have some confidence, over long periods of time, of when something will be “done”. People are naively taking this back to their workplaces, brimming with confidence in the new world of Agile, and getting themselves into really awful situations. That’s why I think it’s just super harmful to be advocating long range estimating, and especially that Agile has some answer to this.

    And that’s just “Agile” practitioners who still represent a minority of the people making software. Outside that sphere it’s just an abomination.

  9. rob Post author

    Hi Dan,

    Yes thank you for reminding me about that piece of work and how valuable the estimation we did was. However the important information we got from the exercise was not /when/ the work would be done, but that it was going to take a lot more effort and time than people thought. In that circumstance (and some similar since) it’s proved very useful indeed, but that’s a very different thing to estimating for the purpose of knowing when something will be done.

    I’m not saying all estimation is bad, but the way it’s used in the majority of organisations certainly is, and I strongly believe it’s one of the main causes of software project failure.

    I don’t think it’s a symptom of a deeper issue – I think it’s core to why IT departments tend to have such poor relationships with their organisations. It’s the continued failure of software projects to be delivered “on time and on budget” which has led to the widely held cynicism around software delivery. In pretty much every circumstance which reinforces this perception, it will be because estimates were provided, deadlines and budgets were set accordingly, and inevitably they were missed, generally by a factor.

    By providing long range estimates we are perpetuating the myth that we can have certainty in software development when we know that’s not true. It might be a more painful start to a project to resist the urge for that certainty and push alternative ways of managing risk, but considering what happens now that’s got to be preferable.

  10. Paul Dyson

    Well if the argument is actually that Scrum courses teach a lot of stupid stuff to people stupid enough to pay for it we are 100% in agreement :)

    But the argument isn’t about Scrum/CSM courses but about whether estimation has a place in software development. I’ve seen this argument rage over a number of topics over the years (specifically: patterns, tools, TDD/refactoring, full-blown XP, different languages, etc. and so on) and I’m afraid that the position “it’s dangerous because most people don’t get it and get burned by doing it wrong” isn’t one I can support.

    Here’s what I know about estimation:

    * Most people don’t get it and can’t do it
    * I, and other people I know (including you it turns out ;)), do get it and can do it
    * Practice leads to improvement, and improvement brings benefits
    * Some people will never get it and should steer clear

    I could have said exactly the same about XP in 1997. But rather than focusing on the last group of people all the early adopters of XP focussed on improving themselves and helping others improve. Mainstream adoption was never achieved – and possibly never could or should have been – but I believe there was an overall improvement in the state of software practice. If some people got burned by implementing XP naively or tried XP and failed because they didn’t really get it, I’m very sorry for them but that has no bearing on how I feel about XP or, in this case estimation.

    But now, as in 1997, I’m happy to say ‘it works for me’ and if the conclusion is I’m uniquely brilliant at this stuff and just because I can do it doesn’t mean any one else can, what can I say? Apart from that I know I’m not uniquely brilliant and if I can make it work for me, there are many people who could make it work as well if not better.

  11. Paul Dyson

    If the numbers really were that bad, I’d agree with you. But in my experience it’s rare, not often, that the numbers are ‘wildly outside that range’. Usually when that happens it’s because of some major event which is just as visible to the customer/business as it is to the dev team, and so can be the topic of a sensible conversation.

    In an iterative, reflective process you’d never get to a position where you say “we’re 6 months late because there were a few unseen problems”. As soon as you see the actuals starting to diverge from the estimates you have useful data you can act upon, long before the point where you have to start apologising.

    The clue is in the name: they’re estimates, not promises or guarantees. They say “all things being equal, this is roughly what we think”. If things don’t turn out to be equal, they give a useful mechanism for understanding the impact of what’s changed. If things do pan out to be equal, they provide valuable information for course-correction and individual improvement, not to mention, as Dan describes, discussion and debate about what is being delivered.

  12. rob Post author

    The point of my article was not to say that all estimation is bad, but most estimation as it’s currently applied is at the root of most software project failures. I could have been less militant in my conclusion, but again I feel that’s just leaving the door open.

    Scrum is just the tip of the iceberg of course. It’s just particularly frustrating that a supposedly “agile” methodology is advocating such bad practice, which I’m sure is a sop to all the organisations who want to hang on to the idea we can be predictable so they (SA) can sell more courses.

