Monthly Archives: August 2008

Software is not like a house

“Building a house” is perhaps the most overused software analogy out there. Sure, there are many overlaps and it’s a concept that non-technical people can grasp easily when we need to explain something, but it simply doesn’t add up. I’ve run into this analogy frequently in my current obsession with estimation (someone even used it to comment on one of my recent articles), so I’ve been compelled to take this on as well ;-). Below is a typical example of an argument I hear all the time. It was posted in response to this article on InfoQ:

“So, you’re going to remodel your old house. It’s a two-story Victorian, and you want to knock out half of the back exterior wall, expand the kitchen, add a 3/4 bathroom, and put a new bedroom above all that on the second floor.

You call up a contractor, and she says that she won’t give you any estimates, but she will show clear progress on a weekly basis, and you’re free to kill the contract at the end of any given week if you’re not satisfied.

So you begin work. They blast out the back wall, frame up the new rooms, and they’re beginning work on the kitchen expansion, when you begin to realize that you’re burning through your savings faster than you expected, and you’re not sure if you’ll be able to complete the job as you’d like. In fact, you’re beginning to worry if you’ll be able to get the thing completed in any useful way at all. At this point, you ask the contractor for an estimate of the work remaining to be done, and she gleefully responds, ‘I don’t give estimates, but you can cancel at any time!'”

If a house were like software and I were a contractor given the above conditions, this is what I would do: I would build something usable that fulfilled the absolute minimum requirements as soon as possible (e.g. a wooden shed extension with a camping stove), see what the customer thinks, and then rebuild it, say, every two weeks, adding more and more of the requirements and responding to their feedback as we go, making sure that at the end of every two weeks the customer still had something they could use. In the end we’d have fulfilled as many of the requirements as possible given the allocated money and time frame. We wouldn’t have gone over budget, we’d have been able to respond to unexpected events (e.g. finding out the foundations are not sufficient), and the customer could have changed his mind (on windows, room layouts etc.) as we went.

Crucially, what the author of the above comment fails to grasp is the difference between incremental and iterative development (see the end of this article). Developing software (notice how no one says “building” software) is not like building a house – we have the luxury that it’s relatively inexpensive (if we’ve followed good design principles) to go back and change it as often as we like. I say, tell me honestly how much you’ve got to spend and how much time you have, and I’ll give you the best possible value for those conditions. If an unexpected event occurred which meant we couldn’t deliver, well, it would have happened even if we’d estimated everything up front. At least my way (well, it’s not my way, is it…) we’re more likely to find out sooner and hopefully save the customer some money and embarrassment.

If you’re going to argue with me (which I dearly hope you will), please do not use this tired analogy. At least now, rather than engaging with you, I can point you this way and I don’t have to repeat myself 🙂

Further reading:

Jeff Patton explains on his blog the important difference between iterative and incremental development – an excerpt from an excellent presentation I saw him give at XPDay 2007.
The Software Construction Analogy is Broken. A very thorough article posted on kuro5hin a while back.

How do you measure success?

In my last post, I railed against the list makers and pondered why people are still so keen to pursue such fine-grained detail. I hinted at something I’ve been thinking about for a while: that it comes down to who you’re working for.

Probably the biggest benefit of working so intimately with our customer has been the amount of work we don’t undertake. The problem is that this is very difficult (read: impossible) to measure. The customer may be very happy, but there’s nothing to show for it. In fact, you could argue this works against us, as we’re doing ourselves out of a job.

Managers can only evaluate our productivity on the work we do. Where I work, our project manager is required to issue a progress report every two weeks. It consists of our spending (which I feel is quite justified), a list of all the stories we’ve been working on, their status, and our velocity (in story points). If something is taking a lot longer than expected, she is required to explain why. A lot of time is spent analysing our progress this way, but very little getting feedback from our customer (shouldn’t there be a customer happiness rating in this bi-weekly report?).
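As a concrete (and entirely hypothetical) sketch, here’s roughly what that bi-weekly report boils down to, with the customer happiness rating I’d like to see sitting right next to velocity. I’ve written it in Python; the Story and ProgressReport classes, their field names and the 1–5 happiness scale are all my own illustrative assumptions, not the actual format our project manager uses.

```python
# Hypothetical sketch of the bi-weekly progress report described above.
# Everything here (class names, fields, the 1-5 happiness scale) is an
# illustrative assumption, not the team's real report format.
from dataclasses import dataclass, field


@dataclass
class Story:
    title: str
    points: int
    status: str  # e.g. "done", "in progress", "blocked"


@dataclass
class ProgressReport:
    spending: float                # money spent this iteration
    stories: list = field(default_factory=list)
    customer_happiness: int = 0    # 1-5, from a quick chat with the customer

    @property
    def velocity(self) -> int:
        # Velocity here is simply the story points completed this iteration.
        return sum(s.points for s in self.stories if s.status == "done")


report = ProgressReport(
    spending=50_000,
    stories=[
        Story("Import trades feed", 5, "done"),
        Story("Rework settlement screen", 8, "done"),
        Story("Audit trail", 7, "in progress"),
    ],
    customer_happiness=4,
)
print(f"Velocity: {report.velocity} points, happiness: {report.customer_happiness}/5")
```

The point isn’t the code, of course – it’s that a single happiness number could live in the same report that velocity does, and would tell you far more about the health of the project.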

However, I have sympathy with my managers. It’s their job to report on our progress to their managers, justify our existence and protect us from people looking for excuses to cut budgets. This, I feel, is where the irrational desire for measurement emanates from. Somewhere up the line, someone who is too busy to get their hands dirty with the detail wants a really simple way of diagnosing the health of a project (“Costing 50K a month but only delivering 20 points? Hmm… clearly something wrong there.”). If the velocity suddenly drops, does that necessarily mean there’s something wrong? Unfortunately, anecdotal evidence and gut feelings don’t translate very well in very hierarchical structures (which I think says more about the structure).

Imagine this scenario: a bank has a crucial trading system which requires no development work for the foreseeable future. You have a small team of domain-expert developers who’ve worked on the system for a long time but who now have nothing to do (they’re not doing nothing, though; being enthusiastic devs, they’re investigating new technologies and better ways of doing things, as well as spiking new ideas which could improve the system). A manager sees a report which shows they’re adding “no value” and makes them all redundant. Very soon after, there is a problem with the system: it falls over and all trading has to cease until it’s fixed. As the domain experts are no longer there, it takes a very long time and the bank loses tens of millions. If the manager had kept the team on, they’d have fixed it in a fraction of the time and saved the bank a fortune. If, rather than looking at productivity reports, the manager had appreciated the value of the system to the customer, I doubt he would have made the decision he did.

Customers couldn’t give a stuff about velocity, story points or any other spurious measurement technique you care to come up with. They do care about you asking them questions, and they always seem delighted to be involved when we want to show them something or ask them for some feedback (tip: it’s important that this never takes more than 5-10 minutes). If we’re having problems, we don’t try to hide them; we explain them to our customer in a way they’ll understand (so far they’ve been very understanding). Ultimately (and directly, in our case), it’s our customer who’s paying the bills, not our managers.

If our ultimate goal is to provide as much value to our company as possible, then it’s certainly not achieved by trying to measure the highly subjective productivity of the developers*. If there’s any measurement to be done, we should be focusing our efforts on how much we’re satisfying our customer’s needs. There are many ways we could approach this, such as seeing how they use our software (or, more importantly, don’t), asking them to provide feedback, imagining how it would impact them if something we built were taken away, roughly estimating how many people it would take to do the jobs our systems replace, and so on. However, I feel it will take some people a lot of convincing that this is the way to go…

*Which, as Fowler points out, we cannot measure anyway