embed sprint 3: whiteboard power

The biggest lesson I’ve learnt is that the whiteboard is by far the most effective method of communication available. Requirements gathering with the customer happens in insanely fast times, meeting notes are there for all to see and contribute to, and the great thing is they stay up for a few days where we can all see them. If I could programme on a whiteboard I would.

Meetings

Unsurprisingly, in our first retrospective together, Mike said he felt there were too many meetings. Essentially, this was because of the sprint planning meeting, which involved a lot of estimating. We’ve now split the meeting in two – a prioritisation meeting with Mike in the preceding sprint, and then a planning, analysis and estimation meeting when we begin the sprint, which we hold at our desks so we can ask Mike questions if we need him.

The power of perception

On a very positive note, Mike is delighted with our progress and is feeling things are really getting done. Of course it’s not like we weren’t doing anything before (in fact, our team is now barely a quarter of the size it was before we moved in with them), it’s just that now they are deciding what we do on a bi-weekly basis and seeing the results in very little time.

Team Leading

I’m still hardly doing any coding, but that’s OK – it’s a bad idea for me to “own” any work as I can’t guarantee I’m able to complete it. When I do have the time I use it to pair programme. This way I spread knowledge and best practices and can be across as much as possible without being too controlling.

The Estimation Fallacy

I’ve had a lot of reasons to think about estimation recently and I’ve come to a firm conclusion – it’s a complete waste of time. There are so many things you could be doing that will add value to your project – estimating adds nothing. In fact it has the adverse effect of making your project less likely to succeed. I will explain why:

We cannot predict the unpredictable

More often than not, the factors that have the biggest impact on the duration of a project are ones we simply did not see coming. Afterwards we look back and say “ah, well we know that for next time so we won’t make the same mistake again”. We investigate the reasons things went wrong, blame people or processes and move on to the next challenge, confident this time it will be a success. This is the fatal flaw. What we don’t recognise is that the problem was not the particular event that delayed the project, but that something unexpected happened at all. This is an extremely costly mistake which eventually ends with people losing their jobs and a lot of money being thrown away. Some people may argue that when they estimate they allow for this by applying a “margin of error”. How much then? 5, 10, 20 percent? The problem with these unpredictable events, or Black Swans, is that no margin of error could possibly account for them, especially if the object of your estimate is to win business or commit your organisation’s finances for the next X months. Unfortunately it’s in the nature of our business that we will constantly be presented with “unknown unknowns”, and the sooner we realise this the better.
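
To put some rough numbers on that (the figures below are entirely invented, purely to illustrate the point): a percentage buffer scales with the size of the estimate, but a Black Swan does not care how big your estimate was. A quick Python sketch:

    # Invented figures, purely illustrative: a fixed percentage "margin of
    # error" versus one event nobody predicted.
    estimate_months = 6.0

    for margin in (0.05, 0.10, 0.20):
        budget = estimate_months * (1 + margin)
        print(f"{margin:.0%} margin -> budget of {budget:.1f} months")

    # One unexpected event (say, a core supplier folding and the integration
    # having to be rebuilt from scratch) adds months in its own right.
    unexpected_delay = 4.0
    print(f"actual duration: {estimate_months + unexpected_delay:.1f} months")

None of those budgets comes anywhere near the actual duration, and a buffer big enough to cover it would hardly win you the business in the first place.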

Even without these “unpredictable” events, we are useless at predicting the future

Until recently, I was a believer in McConnell’s Cone of Uncertainty, which argues that the further away you are from a project deadline, the exponentially more unreliable your estimates will be (and this is not improved by putting more effort into the estimation process). Well, I now think this is invalid. For one thing, the graph is symmetrical. If this were based on reality it would mean we overestimate as much as we underestimate, and if that were the case we would deliver early on as many projects as we deliver late (stop laughing). It also suggests that our estimates get better as the project progresses. Even with iterative development, estimating at the last responsible moment (e.g. the week before) and assuming no big surprises come our way (which they always do), I have found we are mostly way out (I would consider anything above a 10% error margin enough to make it a worthless exercise). On the project I’ve been working on for over a year now, with roughly the same team (a really good team, the best I’ve ever worked with), the accuracy of our estimation has not improved in the slightest.* All we can say is that (assuming no Black Swans come our way, which, as I’ve stressed, they always do) the closer we get to the finish line (i.e. the less work there is in the backlog) the less there is to go wrong.
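
As a back-of-the-envelope check on that symmetry claim, here is a small Python sketch. The numbers are made up and it rests on one assumption – that individual task durations are right-skewed around their estimates (a task can only come in so far under its estimate, but it can run over by almost anything) – which squares with how much more often we deliver late than early:

    # Illustrative only. Assumes each task's actual duration is log-normally
    # distributed with its estimate as the median (i.e. right-skewed).
    import random

    random.seed(1)
    TASKS = 40            # forty tasks, each estimated at 1 unit of work
    RUNS = 10_000         # simulated "projects"
    planned = TASKS * 1.0

    late = sum(
        1
        for _ in range(RUNS)
        if sum(random.lognormvariate(0.0, 0.5) for _ in range(TASKS)) > planned
    )

    print(f"finished late  in {late / RUNS:.0%} of runs")
    print(f"finished early in {(RUNS - late) / RUNS:.0%} of runs")

Even with every estimate being “right” in the most-likely sense, the project comes in late in the vast majority of runs – nothing like the even split a symmetrical cone would imply, and that is before a single Black Swan has turned up.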

It is not in the interests of the customer

If the idea is to give our customers something they can use to forecast budgets then we’re not doing it. As we cannot predict the future, what we end up giving them is next to useless; in fact it’s likely to have a detrimental effect by lulling them into a false sense of security and dissuading them from allowing for uncertainty in their budgeting.

Dr. Dobb’s Journal did a survey on how we define success. They found:

61.3 percent of respondents said that it is more important to deliver a system when it is ready to be shipped than to deliver it on time.

87.3 percent said that meeting the actual needs of stakeholders is more important than building the system to specification.

79.6 percent said that providing the best return on investment (ROI) is more important than delivering a system under budget.

87.3 percent said that delivering high quality is more important than delivering on time and on budget.

So why are we so obsessed with it? The most common criticism I hear of agile methodologies is that if a customer is given the choice between a company that says they’ll deliver in X months at a cost of £X and one that will not promise anything (sic), they’re bound to go with the former. Well, the survey above would suggest otherwise, as would I. In my last job I was in the position of choosing an agency to build a website and I can assure you the last thing on our minds was how good they were at meeting deadlines. We were most impressed by the agency (sadly now defunct) who, for their pitch, did research into our customers and actually started building the site rather than knocking up any estimates.

What about when projects deliver on time and on budget?

Whilst some projects do deliver on time and on budget, much of this can be accounted for by chance rather than excellent estimation skills. These projects get scrutinised for what went so well (at least they should be if your organisation is in any way decent) and the lessons are taken away to the next project. However, whilst some of the lessons learnt may well be valid, no consideration is given to the enormous impact of blind luck! In contrast to when projects go bad, people and processes are given too much credit for success. This all results in a confirmation bias. Every time you do this it is like looking for a higher piece of cliff top to balance on the edge of.

Conclusion

Estimates are good for one thing – showing how pointless estimating is. We are able to use them to track a project’s progress and show where events took it on a different course that no one had expected.

Only by working in an iterative process, where you’re presenting your productivity to the customer on a regular basis, will they be in a position to make informed decisions on the effectiveness and ongoing viability of the work being undertaken. Fail faster, fail better.

* Instead of trying to improve our estimates (again) we decided to spend less time doing it. In our sprint planning meeting we no longer break our stories down into tasks, and therefore we do not measure our progress during the sprint in such detail. So far this has had no adverse effect, but it has freed up many hours of development time.