Creating a TeamCity "currently failed builds" page

Frustrated with seeing too many failed builds, and wanting to make the issue more visible, we are planning on putting an LCD monitor on the wall to display all the failing builds. Unfortunately, TeamCity does not have such a page available. There are various ways of receiving the status of builds, such as RSS feeds and the Status Widget, but the information is no more than you can see on the overview page, and if you have a lot of builds it's not easy to see where the problems are.

First, I experimented with using LINQ to mine the RSS feeds and was getting somewhere, but it wasn't exactly a five-minute job. Then Kirill Maximov, a JetBrains TeamCity developer, responded to my cries for help on Twitter, saying I could modify the External Status Widget, and a whole world of fun opened up to me.
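For the curious, the LINQ experiment was heading in roughly this direction. This is only a sketch: the feed URL is hypothetical (point it at your own server) and the "failed" title check assumes TeamCity puts the build status in each entry's title of its standard Atom feed.

using System;
using System.Linq;
using System.Xml.Linq;

class FailedBuildsFromFeed
{
    static void Main()
    {
        // Hypothetical feed URL - substitute your own TeamCity server.
        XDocument feed = XDocument.Load("http://teamcity/feed.html");
        XNamespace atom = "http://www.w3.org/2005/Atom";

        // Filter the entry titles that mention a failure.
        var failedBuilds = from entry in feed.Descendants(atom + "entry")
                           let title = (string)entry.Element(atom + "title")
                           where title != null && title.Contains("failed")
                           select title;

        foreach (string build in failedBuilds)
            Console.WriteLine(build);
    }
}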

How to create a failed builds page

  1. Create an HTML page that includes the External Status Widget, as described here. By default this page will only show the current status of all projects, as you see on the overview page, but we only want to know about failed builds.
  2. Add the wrapping <c:if test="${not buildType.status.successful}">…</c:if> around the build markup in *TeamCity*/webapps/ROOT/status/externalStatus.jsp (it is probably worth making a backup of this file first).
  3. That's it! However, as you have now realised, you can do a lot more customisation. If you have a root through some of the JSP files you'll see there are a lot of features you can take advantage of to customise further.

You can download my HTML page and modified externalStatus.jsp files here.

Team principles

On my new team we’ve just started our first proper iteration. We’ve agreed to commit to the following principles:

  • No multi-tasking
  • All new or changed code must be thoroughly unit tested
  • No more than 2 pieces of work in “Active Work” at any time (our team is 3 developers)
  • We always work on the highest priority task
  • Our definition of done is “In UAT”
  • Leave it in a better condition than you found it
  • No hidden work
  • No overtime
  • No disruptions
  • Don’t SysTest your own work (we don’t have a tester yet)

Have you done something similar? What are your team's principles?

TDD keeps you focused

One of the less stated benefits of doing TDD is that it concentrates your mind on the problem.

Recently I was looking at a section of code which built a Lucene search string from criteria passed in by the user. I needed to add some new functionality, but it was a bit higgledy-piggledy, so, following the broken windows principle, I decided to refactor it to the builder pattern before I added my new feature.
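To give a flavour of the direction we took (the names here are invented for illustration; this is not the real code), the builder gathers each criterion and assembles the Lucene query string at the end:

using System.Collections.Generic;

public class SearchQueryBuilder
{
    private readonly List<string> clauses = new List<string>();

    // Adds an optional field:value clause.
    public SearchQueryBuilder WithTerm(string field, string value)
    {
        clauses.Add(field + ":" + value);
        return this;
    }

    // A '+' prefix marks the clause as mandatory in Lucene syntax.
    public SearchQueryBuilder WithRequiredTerm(string field, string value)
    {
        clauses.Add("+" + field + ":" + value);
        return this;
    }

    public string Build()
    {
        return string.Join(" ", clauses.ToArray());
    }
}

// Usage:
// string query = new SearchQueryBuilder()
//     .WithRequiredTerm("type", "article")
//     .WithTerm("title", "estimation")
//     .Build();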

I ran NCover on the code and it reported 70% coverage which, considering what I knew about the history of the system, was surprisingly high. I decided there was enough coverage, paired up with a colleague and dived in. At first we were doing well. Our changes broke the tests a few times, which reassured us about the coverage, but after a while it felt like we were losing steam. We'd done a lot of refactoring but it didn't feel like we'd got very far.

Skip to a couple of days later. I'd not been able to touch the code, and neither had my colleague, who had now been taken off to deal with another project, so I pulled in the other dev on my team, who is probably the most fanatical TDD nut I've ever met. Immediately he was uncomfortable: he'd quickly spotted that the tests were really only integration tests (something I had put to the back of my mind) and got jittery as we continued from where I'd left off. I didn't like to see him in so much pain, so I quickly relented and let him write new tests, even though I felt it was just going to slow us down, maybe even to the point where we wouldn't get it finished. However, he assured me that although it felt slow now, we'd be racing along soon enough.

Not only was he right about that, but as we went the objective became so much clearer. Writing the new tests forced us to really think about each move we made, so our refactoring had clear direction. We spent about the same time working on the problem but probably made twice as much progress as in my previous attempt, and we ended up with a proper unit test suite around the code.

Some day soon people are probably going to do away with the need to do TDD, which I think will be a shame as it never ceases to amaze me how many benefits it has.

Software is not like a house

“Building a house” is perhaps the most overused software analogy out there. Sure, there are many overlaps, and it's a concept that non-technical people can grasp easily when we need to explain something, but it simply doesn't add up. I've run into this analogy frequently in my current obsession with estimation (someone even used it to comment on one of my recent articles), so I've been compelled to take this on as well ;-). Below is a typical example of an argument I hear all the time, posted in response to this article on InfoQ:

“So, you’re going to remodel your old house. It’s a two-story Victorian, and you want to knock out half of the back exterior wall, expand the kitchen, and a 3/4 bathroom, and put a new bedroom above all that on the second floor.

You call up a contractor, and she says that she won’t give you any estimates, but she will show clear progress on a weekly basis, and you’re free to kill the contract at the end of any given week if you’re not satisfied.

So you begin work. They blast out the back wall, frame up the new rooms, and they’re beginning work on the kitchen expansion, when you begin to realize that you’re burning through your savings faster than you expected, and you’re not sure if you’ll be able to complete the job as you’d like. In fact, you’re beginning to worry if you’ll be able to get the thing completed in any useful way at all. At this point, you ask the contractor for an estimate of the work remaining to be done, and she gleefully responds, ‘I don’t give estimates, but you can cancel at any time!'”

If a house were like software and I were a contractor given the above conditions, this is what I would do: I would build something usable that fulfilled the absolute minimum requirements as soon as possible (e.g. a wooden shed extension with a camping stove), see what the customer thought, and then rebuild it, say, every two weeks, adding more and more of the requirements and responding to their feedback as we went, making sure that at the end of every two weeks the customer still had something they could use. In the end we'd have fulfilled as many of the requirements as possible given the allocated money and time frame. We wouldn't have gone over budget, we'd have been able to respond to unexpected events (e.g. finding out the foundations are not sufficient), and the customer could have changed their mind (on windows, room layouts etc.) as we went.

Crucially, what the author of the above comment fails to grasp is the difference between incremental and iterative development (see the further reading at the end of this article). Developing software (notice how no one says "building" software) is not like building a house: we have the luxury that it's relatively inexpensive (if we've followed good design principles) to go back and change it as often as we like. I say: tell me honestly how much you've got to spend and how much time you have, and I'll give you the best possible value for those conditions. If an unexpected event occurs which means we can't deliver, well, it would have happened if we'd estimated it anyway. At least my way (well, it's not my way, is it…) we're more likely to find out sooner and hopefully save the customer some money and embarrassment.

If you're going to argue with me (which I dearly hope you will), please do not use this tired analogy. At least now, rather than engaging with you, I can point you this way and not repeat myself 🙂

Further reading:

Jeff Patton explains on his blog the important difference between iterative and incremental development – an excerpt from an excellent presentation I saw him give at XPDay 2007.
The Software Construction Analogy is Broken – a very thorough article posted on kuro5hin a while back.

How do you measure success?

In my last post, I railed against the list makers and pondered why people are still so keen to pursue such fine-grained detail. I hinted at something I've been thinking about for a while: that it comes down to who you're working for.

Probably the biggest benefit of working so intimately with our customer has been the amount of work we don't undertake. The problem with this is that it's very difficult (read: impossible) to measure. The customer may be very happy, but there's nothing to show for it. In fact, you could argue this works against us, as we're doing ourselves out of a job.

Managers can only evaluate our productivity on the work we do. Where I work, our project manager is required to issue a progress report every two weeks. It consists of our spending (which I feel is quite justified), a list of all the stories we've been working on, their status, and our velocity (in story points). If something is taking a lot longer than expected she is required to explain why. A lot of time is spent analysing our progress this way, but very little time getting feedback from our customer (shouldn't there be a customer happiness rating in this bi-weekly report?).

However, I have sympathy with my managers. It's their job to report on our progress to their managers, justify our existence and protect us from people looking for excuses to cut budgets. This is where I feel the irrational desire for measurement emanates from. Somewhere up the line, someone who is too busy to get their hands dirty with the detail wants a really simple way of diagnosing the health of a project ("Costing 50K a month but only delivering 20 points? Hmm… clearly something wrong there."). If the velocity suddenly drops, does this necessarily mean there's something wrong? Unfortunately, anecdotal evidence and gut feelings don't translate very well to very hierarchical structures (which I think says more about the structure).

Imagine this scenario: a bank has a crucial trading system which requires no development work for the foreseeable future. It has a small team of domain-expert developers who've worked on the system for a long time but now have nothing to do (they're not doing nothing, though; being enthusiastic devs, they're investigating new technologies and better ways of doing things, as well as spiking new ideas which could improve the system). A manager sees a report which shows they're adding "no value" and makes them all redundant. Very soon after, there is a problem with the system; it falls over and all trading has to cease until it's fixed. As the domain experts are no longer there, it takes a very long time and the bank loses tens of millions. If the manager had kept the team on, they'd have fixed it in a fraction of the time and saved the bank a fortune. If, rather than looking at productivity reports, the manager had appreciated the value of the system to the customer, I doubt he would have made the decision he did.

Customers couldn't give a stuff about velocity, story points or any other spurious measurement technique you care to come up with. They do care about you asking them questions, and always seem delighted to be involved when we want to show them something or get their feedback (tip: it's important that this never takes more than 5-10 minutes). If we're having problems we don't try to hide them; we explain them to our customer in a way they'll understand (so far they've been very understanding). Ultimately (and directly, in our case), it's our customer who's paying the bills, not our managers.

If our ultimate goal is to provide as much value to our company as possible, then it's certainly not served by trying to measure the highly subjective productivity of the developers*. If there's any measuring to be done, we should be focusing our efforts on how much we're satisfying our customer's needs. There are many ways we could approach this, such as seeing how they use our software (or, more importantly, don't), asking them to provide feedback, imagining what the impact would be if something we built was taken away, roughly estimating how many people it would take to do the jobs our systems replace, and so on. However, I feel it will take some people a lot of convincing that this is the way to go…

*Which, as Fowler points out, we cannot measure anyway

nMock vs RhinoMocks

I've recently started using RhinoMocks instead of nMock, mainly because it's strongly typed. However, I've found a few other little treats:

Stubs

In nMock, if you want to stub some method calls on a mocked interface you have to do something like this:

Mockery mockery = new Mockery();
IService mockService = mockery.NewMock<IService>();
Stub.On(mockService).Method("MethodA");
Stub.On(mockService).Method("MethodB");
Stub.On(mockService).Method("MethodC");
...

This is cumbersome and noisy. In RhinoMocks you can do this:

MockRepository repository = new MockRepository();
IService serviceMock = repository.Stub<IService>();
...

…and RhinoMocks will ignore all calls to that interface. This is really nice as you generally only test the SUT’s interaction with one dependency at a time.
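As a rough usage sketch (the OrderProcessor SUT and its Process method are invented for illustration), a test can then exercise the SUT without a single expectation being set up:

MockRepository repository = new MockRepository();
IService serviceStub = repository.Stub<IService>();
repository.ReplayAll(); // move the repository out of record mode before use

// Every call the SUT makes to IService is silently accepted.
OrderProcessor sut = new OrderProcessor(serviceStub);
sut.Process();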

Dynamic Mocks

If you only want to test one interaction with a dependency and ignore all others you can create a dynamic mock.

MockRepository repository = new MockRepository();
IService mockService = repository.DynamicMock<IService>();
...

All calls to the mocked dependency will be ignored unless they are explicitly expected (e.g. Expect.Call(mockService.MethodA())…). This is the same as never calling mockery.VerifyAllExpectationsHaveBeenMet() in nMock. It's always annoyed me that you have to remember to do this in nMock, and I much prefer that the default for RhinoMocks is to fail when it encounters an unexpected method call.
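Putting that together in the classic record/replay style (OrderProcessor is again an invented SUT, and MethodA is assumed to return an int for the sake of the example):

MockRepository repository = new MockRepository();
IService mockService = repository.DynamicMock<IService>();

// Record: this is the one interaction we care about.
Expect.Call(mockService.MethodA()).Return(42);
repository.ReplayAll();

// Exercise the SUT; any other calls it makes to IService are ignored.
new OrderProcessor(mockService).Process();

// Fails the test if MethodA was never called.
repository.VerifyAll();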

Raising Events

nMock does not natively support raising events, which is a pain, but there are ways around it (I’ve extended his example to support custom EventArgs which you can download here). With RhinoMocks it’s much simpler. Rather than explaining it myself, check out J-P Boodhoo’s great example here.
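For completeness, here's the shape of the RhinoMocks approach – a minimal sketch assuming IService exposes a Completed event of type EventHandler (see J-P's post for the full treatment):

MockRepository repository = new MockRepository();
IService service = repository.DynamicMock<IService>();

// In record mode, subscribe with null and grab the event raiser.
service.Completed += null;
LastCall.IgnoreArguments();
IEventRaiser raiser = LastCall.GetEventRaiser();
repository.ReplayAll();

// Wire the SUT to the mock, then fire the event at it.
raiser.Raise(service, EventArgs.Empty);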

embed sprint 3: whiteboard power

The biggest lesson I've learnt is that the whiteboard is by far the most effective method of communication available. Requirements gathering with the customer happens in insanely fast times, meeting notes are visible to all and everyone can contribute to them, and, best of all, they stay there for a few days where we can all see them. If I could program on a whiteboard, I would.

Meetings

Unsurprisingly, in our first retrospective together, Mike said he felt there were too many meetings. Essentially, this was because of the sprint planning meeting, which involved a lot of estimating. We've now split the meeting in two: a prioritisation meeting with Mike in the preceding sprint, and then a planning, analysis and estimation meeting when we begin the sprint, which we hold at our desks so we can ask Mike questions if we need him.

The power of perception

On a very positive note, Mike is delighted with our progress and feels things are really getting done. Of course, it's not like we weren't doing anything before (in fact, our team is now barely a quarter of the size it was before we moved in with them); it's just that now they are deciding what we do on a bi-weekly basis and seeing the results in very little time.

Team Leading

I'm still hardly doing any coding, but that's OK; it's a bad idea for me to "own" any work as I can't guarantee I'll be able to complete it. When I do have the time, I use it to pair program. This way I spread knowledge and best practices, and can be across as much as possible without being too controlling.

The Estimation Fallacy

I've had a lot of reasons to think about estimation recently and I've come to a firm conclusion: it's a complete waste of time. There are so many things you could be doing that will add value to your project; estimating adds nothing. In fact, it has the adverse effect of making your project less likely to succeed. I will explain why.

We cannot predict the unpredictable

More often than not, the factors that have the biggest impact on the duration of a project are ones we simply did not see coming. Afterwards we look back and say "ah, well, we know that for next time, so we won't make the same mistake again". We investigate the reasons things went wrong, blame people or processes, and move on to the next challenge confident that this time it will be a success. This is the fatal flaw. What we don't recognise is that the problem was not the particular event that delayed the project, but that something unexpected happened at all. This is an extremely costly mistake which eventually ends with people losing their jobs and a lot of money being thrown away. Some people may argue that when they estimate they allow for this by applying a "margin of error". How much, then? 5, 10, 20 percent? The problem with these unpredictable events, or Black Swans, is that no margin of error could possibly account for them, especially if the object of your estimate is to win business or commit your organisation's finances for the next X months. Unfortunately, it's in the nature of our business that we will constantly be presented with "unknown unknowns", and the sooner we realise this the better.

Even without these "unpredictable" events, we are useless at predicting the future

Until recently, I was a believer in McConnell's Cone of Uncertainty, which argues that the further away you are from a project deadline, the more unreliable your estimates will be (and this is not improved by putting more effort into the estimation process). Well, I now think this is invalid. For one thing, the graph is symmetrical. If it were based on reality, it would mean we overestimate as much as we underestimate; if that were the case, we would deliver early on as many projects as we deliver late (stop laughing). Also, it suggests that our estimates get better as the project progresses. Even with iterative development, estimating at the last responsible moment (e.g. the week before), and assuming no big surprises come our way (they always do), I have found we are mostly way out (I would consider anything above a 10% error margin enough to make it a worthless exercise). On the project I've been working on for over a year now, with roughly the same team (a really good team, the best I've ever worked with), the accuracy of our estimation has not improved in the slightest.* All we can say is that (assuming no Black Swans come our way, which, as I've stressed, they always do) the closer we get to the finish line (i.e. the less work there is in the backlog), the less there is to go wrong.

It is not in the interests of the customer

If the idea is to give our customers something they can use to forecast budgets, then we're not doing it. As we cannot predict the future, what we end up giving them is next to useless; in fact, it's likely to have a detrimental effect by lulling them into a false sense of security and dissuading them from allowing for uncertainty in their budgeting.

Dr. Dobb's Journal did a survey on how we define success. They found:

  • 61.3 percent of respondents said that it is more important to deliver a system when it is ready to be shipped than to deliver it on time.
  • 87.3 percent said that meeting the actual needs of stakeholders is more important than building the system to specification.
  • 79.6 percent said that providing the best return on investment (ROI) is more important than delivering a system under budget.
  • 87.3 percent said that delivering high quality is more important than delivering on time and on budget.

So why are we so obsessed with it? The most common criticism I hear of agile methodologies is that if a customer is given the choice between a company that says they'll deliver in X months at a cost of £X, and one that will not promise anything (sic), they're bound to go with the former. Well, the survey above would suggest otherwise, as would I. In my last job I was in the position of choosing an agency to build a website, and I can assure you the last thing on our minds was how good they were at meeting deadlines. We were most impressed by the agency (sadly now defunct) who, for their pitch, did research into our customers and actually started building the site rather than knocking up estimates.

What about when projects deliver on time and on budget?

Whilst some projects do deliver on time and on budget, much of this can be accounted for by chance rather than excellent estimation skills. These projects get scrutinised for what went so well (at least they should be, if your organisation is in any way decent) and the lessons are taken away to the next project. However, whilst some of the lessons learnt may well be valid, no consideration is given to the enormous impact of blind luck! Just as people and processes attract too much blame when projects go bad, they are given too much credit for success. This all results in a confirmation bias. Every time you do this, it is like looking for a higher piece of cliff top to balance on the edge of.

Conclusion

Estimates are good for one thing: showing how pointless estimating is. We are able to use them to track a project's progress and show where events took it on a different course that no one had expected.

Only by working in an iterative process, where you're presenting your productivity to the customer on a regular basis, will they be in a position to make informed decisions on the effectiveness and ongoing viability of the work being undertaken. Fail faster, fail better.

* Instead of trying to improve our estimates (again), we decided to spend less time producing them. In our sprint planning meeting we no longer break our stories down into tasks, so we no longer measure our progress during the sprint in such detail. So far this has had no adverse effect, but it has freed up many hours of development time.

embed – sprint 1

This week the project manager arrived to complete the small team I am in. What makes our project interesting is that we are going to be "embedded" with the customer (who is internal). As far as I can tell, this is supposed to be the ultimate environment for successful agile software development. I intend to write about my experiences here.

The other interesting news is that I will be the team leader. Whilst I was a team lead in my previous job, it was really only by default, and I intentionally avoided the position when I came back from my travels. Now I'm more than ready, but there will be plenty of challenges along the way and I'll do my best to write about them as well.

We've completed one sprint and already I've found the customer (to be known as Mike) to be very resistant to getting involved in the process of creating his product. Mike clearly has a very busy job and is hoping we'll just report back once every couple of weeks with lots of amazing things that exactly fit his expectations (as we have amazing mind-reading skills). We've asked him to get involved with planning, retrospectives and stand-ups, and he's expressed a keen desire not to be. It will be very interesting to see how we get around this resistance. To create brilliant software the customer has to be involved, and I will consider it a personal failure if we do not succeed in drawing him in.

On the team-lead front, unsurprisingly, I have found I am spending very little time coding and have been struggling to keep on top of things. You simply cannot do everything that is asked of you, so to keep my head above water I've been concentrating my efforts on recognising what is most important and doing that well. This is quite a skill, as some people are very good at making a lot of noise whilst other, more important issues bubble timidly under the surface (waiting to explode like a geyser). I've also tried to delegate as much as I can to the other members of my team, which makes me feel uncomfortable, but I know it has to be done. The urge to constantly check up on their progress is hard to resist, but they're good people and I trust them, so I'm doing my best not to be too much of a nag.

Over the next few sprints I’ll be focusing on getting Mike more interested in taking part and bringing our PM (who is new to agile) up to speed. Hopefully I’ll also get to cut a bit more code…